So hi, I'm Omar. I will talk about how to craft ML demos with Python. Just a quick question: do you know what Hugging Face is? Yeah, okay. Do you know what Gradio is? Right, no worries.

So, have you seen something like this in the last couple of weeks? This is DALL·E mini. DALL·E mini is a public Space, a public demo that anyone can try out directly on the web. It allows you to write something, in this case "the Demogorgon from Stranger Things holding a basketball", and it will generate images. This is an open source, public, free model; this is not the OpenAI DALL·E version. And it's a demo created with just a couple of lines of code using Gradio, which is an open source tool I will talk a bit more about in a couple of minutes. This went viral in early June, and since then it has had over 70 million views in just the last month, which is quite wild.

But there are many, many other demos: over 5,000 public open source Spaces created in the last nine months. This one is called Informative Drawings. It allows any user to upload any picture, and it will give you some nice line drawings of it. And this one is a bit wilder: it gives you some object and lets you write a description, "a person, a man, a woman, wearing a hat, with a shirt, with a skirt", and it will draw an image of a person based on that.

So, what is this talk about?
It is about demos, and it is about making machine learning more accessible, more public and more open for everyone. These pictures are from a conference called CVPR. CVPR is one of the top computer vision conferences in the ecosystem. A few years ago people were publishing papers, but they were not creating any demos or any way for people to try out the models directly. At CVPR this year it was a bit different: quite a few people were sharing demos, which meant attendees could open their phones or their computers and directly try out the models in their browsers. Without having to run any code, without having to know too much about machine learning, without having the resources to train any of these models, anyone, even people without a computer science or software engineering background, could go and try out these models thanks to these demos.

It was very, very different three years ago. This is a spreadsheet of how things were before, with all the paper titles and authors from one of these conferences. As you can see, many of them did not have the code available, and it was like this across the board. Today things are done in a very different way. People are now open sourcing not just the code to train the models, but also the model weights, the datasets used to train the models, and public demos for anyone to try out. So, for example, people can go to the page for CVPR, this computer vision conference...
...I was talking about before, and find which papers have a Hugging Face Space or a model directly related to them. I will talk a bit more about this in a couple of minutes, but I just wanted to give this comparison of how it was done before and how it's done now.

So, why demos? First, they make it easy to reach a wider audience. Let's say you create a very nice demo of some language model, for Spanish for example, but then only white male computer scientists try out the model. Of course you will not identify a bunch of biases the model has. Thanks to a demo, really anyone, and that really means anyone, can just open their browser and try these models, and then with flagging you can identify lots of biases the models might have.

Second, demos enable reproducibility in research. What is reproducibility? It means that the results shared in your paper can be replicated or reproduced by others. Anyone can go there, test with their own image, and see that the model actually works and that it's not just some cherry-picked result by some researchers.

Finally, demos allow you to identify and debug different failure points. For example, maybe a certain model was not working for a certain type of handwriting; a demo is a nice way to discover that, and these things are only discoverable if you really have a wider, more diverse set of people trying out these models.

So we're really at a turning point in the usage of machine learning. Until a couple of years ago, even two years ago, only the people that knew about software engineering, about machine learning, the people that actually knew how to run the code, were able to try out these models. Now anyone that has a browser can go and try them, which is quite interesting. Previously, let's say you spent six months training some fancy models and then you wanted to build a...
...web app. Then you realized that you needed to learn JavaScript, Flask, Docker, CSS, and you might say "okay, I won't do it, building a demo will be quite complicated". But now building a demo is easier than it looks.

As I was mentioning, previously you had to use a bunch of different languages and tools. First you trained your model, and for that you might use TensorFlow, scikit-learn or PyTorch, which are in Python. Then you might use Flask and Docker, then maybe SQL, and at the end the interactive interface was always built with JavaScript, HTML, CSS and front-end technologies. I heard that the Python community doesn't like CSS and JavaScript too much, so of course they replaced everything with Python. Gradio is an open source tool, an open source library, which allows people to build demos with 10 lines of code. It's really quite simple, so even if you don't know anything about machine learning, don't worry: you can use Gradio to build some nice interactive interfaces.

I just want to show a couple of other demos before diving in a bit more. This one is from a paper from Microsoft. Microsoft actually built this demo, and it allows people to upload two different audio files of someone speaking, and it identifies whether it's the same person speaking or whether the audios correspond to different people. So it's voice authentication. And this one uses a model called JoJoGAN, which does face stylization; here you see myself in Disney style. You might be thinking "okay, this looks extremely complicated, I don't want to do this", but don't worry.
It is really quite simple, and I will show you the building blocks of Gradio. Yeah, and this is everything. First you import Gradio. Then you have an Interface; the Interface is the main building block of Gradio, and it has three parts. First, it has an input, or a set of inputs. In this case the demo is one in which a person uploads a picture of an animal, or any picture, and it will classify what animal this is; for example, this is an American alligator. So the input here is an image. Then you specify the output; the output can be text, image, audio, and in this case it's a label. A label just means it shows the probabilities the classifier assigns to this image. And then you have a prediction, or inference, function, which in this case is called classify_image.

What is very nice about Gradio is that it's not tied to any ML library, so you can create the demo with any code. classify_image is just any Python function that takes an input, does some magic, and returns an output. This can be anything you want: a scikit-learn model, a TensorFlow model, raw Python code, or whatever tool you use in your school, your university or your company; you can use it within this function. The other nice part is that you can run this anywhere: in the terminal, in Jupyter if you like Jupyter, or in Google Colab if you like Colab.

I've talked quite a bit, so I will show code now, because I think that might be nicer. In this Colab, and I can share the code later, I'm just installing a couple of libraries. Here I'm showing the same building blocks: I have a Gradio Interface which takes text as input, outputs text, and has a prediction function, which in this case is called greet.
So greet really just says hello plus the name. I said before that it could be any Python function, so this is also allowed. It takes a couple of seconds; I will zoom out a bit. But pretty much, here I have my demo and I can write here "test": hello test. Nice. I can write "hello Omar", and that's pretty much it. In ten lines of code I was able to build this nice, maybe a bit silly, hello world demo. What is nice is that this greet function could be any code that you have.

Many of you mentioned before that you don't know what Hugging Face is. Hugging Face is an open source company that has a bunch of open source libraries; a very famous one is called transformers. transformers allows people to easily use existing pre-trained transformer models. Transformers are a very popular architecture nowadays in the natural language processing, computer vision and audio domains of machine learning. Don't worry too much about it, but what is very nice is that you get access to over 60,000 public open source models, shared by research labs and the community, for many, many different applications and for hundreds of spoken languages, which is quite nice.

In this cell, let's first look at the bottom part, the Interface. I have a predict function, same as before; we will look at the function in a minute. We have a set of inputs, in this case a textbox, and I can specify a bit more information around that, and I have an output, which will be text. You can add a bit more style if you like: you can have a title and some examples. Examples are very nice; they're a way of telling users what kind of things will work in this demo.
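To make the building blocks concrete, here is a minimal sketch of the hello world demo described above. The greet function is plain Python; the commented lines show how it would be wired into a Gradio interface (they assume `pip install gradio` and the classic `gr.Interface` API from the talk).

```python
def greet(name):
    # the "prediction function": any Python code can go here,
    # from string concatenation to a full ML model
    return "Hello " + name + "!"

# Wiring it into a demo (uncomment after installing gradio):
# import gradio as gr
# demo = gr.Interface(fn=greet, inputs="text", outputs="text")
# demo.launch()  # share=True would also create a temporary public URL
```

The same three pieces, inputs, outputs and a function, stay identical whether the function is this one-liner or a heavyweight model.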
So I will run this, and in the meantime I will show the predict function. This one is very specific to the transformers library, but again, you can use TensorFlow, scikit-learn, PyTorch or whatever Python ML library you like. Here what I'm doing is loading a translation model, shared by the Helsinki-NLP group, which translates from English to Spanish, and in the predict function I have the inference part: I'm pretty much passing the text to a pipeline and then extracting the translated text.

So this is the demo I created with, again, about 15 lines of code. It has a nice title, "interactive demo", blah blah blah. It has some examples at the bottom left which I can click, and when I click submit it runs the inference and I get a translation. So "I like the workshop" becomes "Me gusta el taller", which is the translation in Spanish. That's pretty much it.

Now you might be wondering where this model comes from: Helsinki-NLP/opus-mt-en-es. Hugging Face, as I was mentioning before, has thousands of models for many different applications. So, for example, I will just copy and paste this, which is the model ID.
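A hedged sketch of what that predict function looks like. The pipeline call is left commented because it downloads the model on first run; the small helper below only demonstrates the output shape a transformers translation pipeline returns, which is the part the predict function has to unpack.

```python
def extract_translation(pipeline_output):
    # transformers translation pipelines return a list of dicts
    # like [{"translation_text": "..."}]; we keep the first result
    return pipeline_output[0]["translation_text"]

# Real usage (assumes `pip install transformers sentencepiece torch gradio`):
# from transformers import pipeline
# translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")
#
# def predict(text):
#     return extract_translation(translator(text))
#
# import gradio as gr
# gr.Interface(fn=predict, inputs="text", outputs="text").launch()
```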
I will paste it here. The Helsinki-NLP group has open sourced over 1,200 translation models for many different combinations of languages; this one, for example, is English to Spanish, and it has a bunch of information here. This is called a model card, and a model card is pretty much the way people document what their model does. If you are a person that does not know how to train models but you're a software engineer, so you know how to run code and how to use existing models, the Model Hub is very nice, because, again, all of this is open source and free; it's kind of a GitHub for machine learning. You have access to models for image classification and segmentation; for NLP you have translation and sentence similarity; you have models for audio, for tabular data, for reinforcement learning. You really have access to models across different modalities for many different applications, and the model cards are where you will find the documentation of what a model does and what it's supposed to do. In this case I was just picking this translation model from Helsinki, right? That was it for the first demo.

So now I built a nice demo and I put it in a Colab, but I don't have a way to share it with the community yet, right? If you have a very good eye, maybe you noticed that at the top here it created a public URL. That means that right now, if you go to this URL on your computer or on your phone, you will have access to the demo, but this will only work as long as the Colab is up, right?
But if you want permanent hosting, Hugging Face also has a tool for that, also free, called Spaces. Spaces was launched around October of last year, and it allows people to host their demos, built with Gradio, and share them with the community. Since then people have created portfolios and organized university courses with it, so it's quite nice, and nowadays we have five or six thousand Spaces, which is quite exciting. Right now you can go to hf.co/spaces and you will find many Spaces shared by the community, again for all kinds of applications.

I will try to do a quick live demo of this; it might take a bit to build, but just to show it: here I went to my profile picture and clicked "New Space". I wrote the name of the Space, I will select Gradio, which is the open source SDK I'm using today, and I will click "Create Space". Under the hood, at Hugging Face everything is a git-based repository. If you don't know what that is, don't worry, but for those who know: just as on GitHub you have git repositories, and you can git clone and git push and collaborate with them, on Hugging Face, Spaces are the same. Under the hood they are git-based repositories, which means I can git clone this repo, work locally, and collaborate with other people. But if you prefer not to use git for any reason, you can also use the web UI.

What you need to do is just create an app.py file, and I will literally copy and paste the code I have here in the Colab. So I'm literally just copy-pasting, and I will click "Commit new file". Here in "Files" I can see all the files in the repo, and I can see the git history. It takes a bit to build, maybe a minute or so, but in this case it will give me an error, because I don't have transformers installed here. So something else you can do with Spaces is declare your own requirements.
Like this: a requirements.txt file, the same one you would use with pip install -r requirements.txt. I can specify which libraries I want installed: transformers, sentencepiece and torch. When I click "Commit new file" it will start building again and set up the demo, and after a couple of seconds, a couple of minutes the first time, the outcome will be something like this. Exactly the same as what we were seeing before in the Colab, but now it's in the browser. Now I can put this link on social media and others can come and try it out: they can type "Hello, my name is Omar", click submit, and try it with their own text, right? That's quite nice, and as this is open source, people can go to the files, see the history of the repo, and even open PRs to suggest modifications. They can even click here on "Linked models", which is a direct link to the model used to create this demo, which is also a nice addition.

Right, so that was the demo time. So, three reasons you should build an ML demo. First, we talked about accessibility.
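For reference, the requirements.txt described above would look something like this (unpinned here; in practice you may want to pin versions):

```text
transformers
sentencepiece
torch
```

Spaces installs these at build time, so any library installable from PyPI can be added to a demo the same way.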
Demos really allow anyone to try models out. We've seen people from psychology, for example, trying out demos and finding interesting issues that were not caught by the people who created them. Second, demos let you understand the real-world limitations of your own research, which is especially interesting from the research perspective. And third, it's really easy. You just saw these ten lines of code, and even if you're building very complex demos, that's pretty much everything it takes. You can go to gradio.app, and it has components for text, checkboxes, radio buttons, dropdowns, images, videos and so on, even for 3D objects. It's quite easy to use, it has the same building blocks, input, output and an inference function, and it lets you use Python to build some very nice demos.

What we've seen in the past is that people share these demos on social media, which has allowed some of them to go a bit viral, which is quite nice. Some people are building their own machine learning portfolios with this and using them when applying to jobs. Apart from that, we've seen people doing all kinds of things; for example, some universities are hosting their students' demos as a way to show their final projects.

I wanted to quickly show you a couple of last demos. This one is DALL·E mini, which is the one I showed at the beginning, and these are public; that means anyone can go and try this out with their own input, so these are really not cherry-picked. I just wrote "minions attending a Python hackathon", and then you see the minions. Then there is this one, which is called YOLO. YOLO is a very popular object detection algorithm, and there have been many versions of YOLO in the last couple of years, especially in the last six months. It allows users to upload their own videos, and it will output a video with the detected objects.
So here is a person, and so on. This one is from CVPR, the very important computer vision conference I was talking about before. It allows someone to write some text here, then pick a target language, for example Korean, and it will give you a translation in Korean and generate a video. I don't have the audio, don't worry, but it has a talking face, and this face is generated. So this is done with a model, with the audio translated with text-to-speech; all of this is using machine learning.

And if you don't like Python and you really like JavaScript, that's totally okay: in Spaces you can also host JavaScript, or, with Flask or even FastAPI, you can host your own sites. For example, this one is "This Pokémon Does Not Exist": you click submit and it generates my own Pokémon for this conference, which is not too nice, but it's green, so I guess that goes with the topic.

Then for very important or very popular libraries, Keras or PyTorch for example, which have their own official examples or their own model hubs, there are some very nice integrations. For example, there's an integration that allows Gradio to load and run inference on an existing transformers model on the Hub with a single line of code, so you can use any of these 60,000 models without writing the 10 lines of code: just a single line. Also, Keras, a very popular deep learning library built on TensorFlow, has its official Keras examples, and the community, and all of this is open source, done by the community, has created 81 different demos that anyone can go and try, and also open sourced almost 100 models, which is quite nice.

So this sounds interesting, and maybe one question is: "I want to get involved, how can I do it?"
In the next couple of days, from today until next week, we are organizing a hackathon. It's online, so you can just join, and if you are someone that is new to machine learning you can also join, because, as I just shared, you really don't need ML expertise to create these demos, and we can also share a couple of ideas on how to build some of them. I think the link was shared by email as well, but you can go to this link, and I have a couple of cards I can share after this talk. Anyone can join; we actually already have 70 members here since this just started today, and there are some instructions here to help you. Sorry for the scrolling. There are already a couple of demos up there that you can explore if you want. Again, all of this is free, all of this is open source, all of this is using Python-based libraries. We also have a table right next to the food with a bunch of Hugging Face stickers, so you can come by in the next couple of days; we will be around and you can grab some stickers. That's pretty much everything I wanted to present today. Yeah, thanks a lot.

Should we take questions? Yeah, are there any questions? Yeah. Should we use microphones? Thanks. Hello. Hey.

What are some self-hosted options for Gradio?

Yeah, so Gradio is just a very simple FastAPI program, which means you can pretty much host it wherever you want. Spaces is nice, it's free and you can just use it, but it has some machine limitations: if you want to host a model that weighs, I don't know, 100 gigabytes...
...you will not be able to do that. So if you have very custom cases, or if you would like to host it internally for your own company, you have a couple of options. If you want to use Spaces, you can also create private Spaces, and there are organizations, so you can share a demo just within your organization. But you can also use Google Cloud or Amazon Web Services or whichever provider you choose. And again, you can even run it locally; it's an open source tool. You have this app.py file, so you just run python app.py. Yeah, there is a flag for that, but yeah, sure.

By the way, if anyone is interested in exploring, I really recommend going to hf.co/spaces, because there you can explore all of these Spaces created by the community and sort them by most likes, so you can find some of the most exciting ones. For example, AnimeGAN was shared a couple of months ago, and it allows people to upload an image and it will convert it to anime style.

Yeah, cool. All right, do we have any other questions? Cool. If anyone else has questions, I will also be outside for the next couple of minutes, and you can also come to the table on the ground floor; I will hand out some stickers, patches and cards if anyone wants. Thanks.