Welcome. Hi guys, I'm really excited to be here. As was said, my name is Dara. I'm a data scientist at a company called SEEK Asia; most of you probably know it as JobStreet. And yeah, I'm actually not from Malaysia, I'm originally from Kazakhstan. So in this session today, we're gonna be talking about Firebase ML Kit AutoML Vision Edge. This is something new they came out with just recently. I'd like to give you a short brief on what we're gonna be talking about today. For those of you who might not know, I'll talk a bit about Firebase: what is it, actually? Then we'll jump into ML Kit and go through what it already has, and then we'll jump into AutoML Vision Edge.

But before we begin, I'd like to ask you some questions. Don't worry, it's okay if you don't know. So, how many of you have tried Firebase before? Could you raise your hands? Wow, that's a number of hands. Okay, cool. The next question: how many of you have heard about machine learning before? Awesome, you guys are so smart.

Before we actually go through everything, I'd like to thank Google, because there was this Women Techmakers scholarship that they awarded me, and I got a chance to go to this amazing conference, Firebase Summit Madrid 2019. It was cool; it was full of so many smart, intelligent people, so many interesting talks and insights, about Firebase ML Kit as well, which is partially why I'm here. And yeah, these amazing women over there: I was actually surprised that there are so many women in tech by now; all these women were there for the briefing breakfast talk.

So, a quick one about me. I joined 20-plus hackathons in the last two and a half years, and I won some, as you can see. I've spoken at around 20 meetups, workshops and talks, and this is my first time in Singapore. And I'm passionate about tech; love tech, that's a fact.
So by the way, I'm gonna be giving out some prizes right over here, so stay tuned. Make sure you remember everything I'm talking about here, and you might get one. And then, yeah, I hate durians, I'm sorry.

So, next one: what is Firebase, actually? For those of you who didn't raise your hands, it's fine, I'm just gonna quickly introduce you to the Firebase console as well. Firebase is a Backend-as-a-Service (BaaS) that started as a YC11 startup and grew into a next-generation app development platform on Google Cloud Platform. You can see that a lot of projects have been integrated with Firebase together with Google Cloud. This is how it looks: all the products that Firebase has up to now. Today we're gonna be talking about ML Kit, but other than that, it has quite a number of other products out there. So if you're a software engineer, you could actually build an app using only Firebase, without having to worry about all the other things like improving quality and growing your business. Firebase is there to help.

The next thing: what's machine learning? Machine learning is a field of study that gives computers the ability to learn without being explicitly programmed. In other words, you don't have to be really particular about the rules here; it's more that you feed in the answers and you feed in the data, and it comes out with the rules for you.

This one is funny; I like to show this kind of meme. Back in the day, this is how it went: "When the user takes a photo, the app should check whether they're in a national park." "Sure, easy. Just give me a few hours." "And check whether the photo is of a bird." "Okay, I'll need a research team and five years."
So that was back in the day; that was how hard it used to be to implement machine learning in your applications and software and integrate it, because you needed a team of researchers, a huge amount of data, a huge amount of research to be done. But what do we see now? We see a huge interest in deep learning, which grew about 50 times over the last five years. From there you can say that the models, the prediction algorithms, have improved over time, and now they're really, really great: from a 26% error rate in 2011 down to only about 3% errors now in identifying images, while humans are at around 5%. So machines are getting smarter than us at this.

Okay, just to give a brief intro on how it actually works: in software development, we have rules. Let's say, for example, we want to describe walking. We just need to specify that if speed is less than 4, then the status is walking. If we want to add running, we just need an else statement here, and then we can change our status to running, and so on and so forth. But what if we want to describe golfing? We can't really describe it in terms of speed, right? That's where machine learning comes in handy. In traditional programming, what we have is rules and data; we feed those in, and we get our answers, just like you saw in the example before. But in machine learning, what we give it is answers and data, and we get back the rules. What we get is something like this. I know it doesn't look like much to us, but it's something machines can understand, in zeroes and ones; well, machines are all about zeroes and ones. So we give a label, we give our status as walking, and so on, and it comes out with something like "label: golfing". Then we can say what rules are needed to identify whether a person is golfing or not.
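The walking/running rules above can be sketched as a tiny program. This is a minimal illustration: the 4 km/h walking threshold is from the talk, while the running cutoff of 12 km/h is an assumption added just to make the branch structure visible.

```python
def classify_activity(speed_kmh):
    # Rule-based classifier from the talk: every activity needs its own
    # hand-written branch. (The threshold of 4 is from the slide; the
    # running cutoff of 12 km/h is an illustrative assumption.)
    if speed_kmh < 4:
        return "walking"
    elif speed_kmh < 12:
        return "running"
    else:
        return "biking"

# Golfing has no characteristic speed, so no branch can capture it.
# That's the gap machine learning fills: it infers the rules from
# labeled examples instead of having them written by hand.
print(classify_activity(3))   # walking
print(classify_activity(8))   # running
```

The point of the sketch is the limitation: each new activity means another branch, and some activities (golfing) have no simple numeric rule at all.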
So these are two examples of how you can actually train your model. In one, you give answers and data, and it comes out with a model. In the other, you just feed in data and it comes out with predictions. Those two are also referred to as supervised and unsupervised learning. In supervised learning, you already have your labels, like walking and running just now, and then you can get the model. But in unsupervised learning, you don't have all these labels, so the model comes out with some sort of prediction based on your data alone; you don't necessarily need to know the answers.

So how does it work? This is the most common example for deep learning, actually: handwritten digits, and how machines can identify them for us. Of course, we understand that it's just an eight written in different styles. But for a machine, it's really tough to identify whether it's an eight or not, because it could be located in a different position, or it could be some weird eight which looks like a G. So many possible variations. So how do you actually train it? This is how you pass it to your model: you see the eight here, it's turned into an array of numbers, and then it goes to the convolutional layer, which trains on it. That's how you do it in machine learning. Basically, it goes through layers and layers and layers and comes back with a result, and if you want to train it more, you pass the data through again with different hyperparameters, back and forth, until you get your model. That was back in the day, and it's still happening; researchers are still using the same layers, the same neural networks.

And these are the steps you take to implement machine learning. First of all, you prepare your data.
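Part of preparing the data is exactly the "the eight turns into an array of numbers" step above. Here is a toy sketch of that conversion, with a made-up 4x4 image standing in for the handwritten digit on the slide; the pixel values are invented for illustration.

```python
# Toy 4x4 grayscale "image" of pixel intensities (0-255), a stand-in
# for the handwritten digit from the slide. Values are made up.
image = [
    [0, 200, 210, 0],
    [0, 190,   0, 0],
    [0, 205, 220, 0],
    [0, 195,   0, 0],
]

# Flatten to one vector and scale into [0, 1]: the numeric form a
# network's first (e.g. convolutional) layer actually receives.
flat = [px / 255 for row in image for px in row]

print(len(flat))   # 16 values for a 4x4 image
print(min(flat), max(flat))
```

A real digit image would be larger (the classic MNIST digits are 28x28, giving 784 values), but the idea is the same: the picture becomes a normalized array of numbers before any training happens.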
You develop a model; you train, tune and evaluate the model, which is a huge part as well. Then you deploy the model and get your predictions. For this you need ML experts, data scientists, all these kinds of really smart people out there, and they need a few years to train it, test it and all that, and only then can you deploy a model. By the time you finish developing your model, it might not even be relevant anymore; that's what usually happens.

But now, ML Kit actually simplifies the whole process. Instead of taking all these steps of preparing training data, developing a model and training it, you can just select a model from ML Kit, call its API with a simple bit of code, and you get your predictions. You'll see that later in the demo.

These are all the ML Kit products they have as of now: text recognition, barcode scanning, face detection, image labeling, landmark detection, object detection and tracking (which is something new), and then natural language: language identification, smart reply, on-device translation. Then there's custom model serving, and this is the one we're going to be covering today: AutoML Vision Edge.

Before we jump into that, I'd like to show a bit of a demo of my app over here, for text recognition, landmark detection, and a bit of face detection as well, just for fun. So let's jump into that demo. Okay, just get the next image. How it works is that we implemented a small portion of code in our iOS app, and it can detect the face over here. It then comes out with things like where the head is, the tilt angle, all these kinds of angles, which you could use to probably predict, I don't know, emotions.
Like, if the person is smiling, probably the X and Y values go up, and from there you can get the idea that, oh, the person is smiling in the picture. Okay, I can do it on an image; what if I want to try video? Yeah, so you can see. Okay, it doesn't detect you guys, I'm sorry. But it can basically identify your face and all the features, and it's quite fast: no matter how much I move, it's still tracking my face. Compared to that, I've tried OpenCV before and it was just super laggy, super slow. Firebase is actually making it seamless and fast, and this is much better than it was a year ago when they launched it.

Next, let's try text. Where is the... there you go. It's seamless, it identifies it fast. It can identify that there is "gallery, please enter one," and it comes out with the text itself. So let's try image labeling here. Okay, sorry, where is it? Okay. So this is on-device. By the way, I forgot to mention: ML Kit comes as on-device or on-cloud. On-device basically means that when you ship your app, the model comes with the bundle. On-cloud means the model runs in the cloud, on Firebase itself, which means better accuracy. But if you want something simple, you can still use on-device. So this is on-device. Let's see... oh, sorry, this one is wrong. The next one I want is image labeling. Yes, this is the one. I'm gonna take this image, and then, yay.

So this is actually the problem we're gonna be discussing for AutoML. You can see that it identified the wedding dress, and all the other things like gown, clothing, bridal, bride, whatever. But one thing I wanted was to identify whether it's a Catholic dress or not, whether it's a Japanese dress or not, which culture this dress belongs to. And the API doesn't do that for me, so what should I do?
I'm gonna go ahead with AutoML and try that out. What AutoML will do for me is this: instead of all those steps, I prepare my own training data, all the hundreds of images of wedding dresses, and then I just pass it over, and it trains the model for me. That's where AutoML is super useful: I don't have to tune it, I don't have to train it, I don't have to go through all of these steps; it does that for me. And I can also deploy it, instead of having a DevOps guy do that for me.

So this is the problem statement. I collected a number of images of Catholic wedding dresses versus normal wedding dresses. To the human eye, you can see that, oh, there is this ornament over there, there is this headpiece over there, and here you don't have it. But a machine is still gonna classify it as just a wedding dress. The steps go as follows: I took the pictures of normal wedding dresses and Catholic wedding dresses, around a hundred of them each. Then I passed them to AutoML Vision Edge, left it for a few hours, and it came out with a TensorFlow Lite model for me, which I can deploy on-device or wherever, it doesn't matter.

So let's do some demo. What I'm gonna do here is go to the dashboard itself. ML Kit is located over here, and here are the other products you could try out in Firebase as well. AutoML is located here. I'm gonna go ahead and add a dataset, call it, I don't know, I've called it so many things already: wedding, KZ, sorry. Then I'm gonna create it as a single-label classification, because I only have one label for Catholic dresses and one for normal dresses. Then it directs me here, where I need to browse for files. Okay, it's in my dresses folder.
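A common way to organize a dataset like this is one folder per label, zipped up so the archive preserves the label folders. Here is a hedged stdlib sketch of building such an archive; the folder names `catholic` and `normal` are the labels from the talk, while the function name and paths are hypothetical.

```python
import os
import zipfile

def build_dataset_zip(root_dir, out_path):
    # Zip a folder-per-label image dataset (root/catholic/*.jpg,
    # root/normal/*.jpg) so the archive keeps the label folders,
    # the layout a dataset import can infer labels from.
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for label in sorted(os.listdir(root_dir)):
            label_dir = os.path.join(root_dir, label)
            if not os.path.isdir(label_dir):
                continue  # skip stray files (including the zip itself)
            for name in sorted(os.listdir(label_dir)):
                zf.write(os.path.join(label_dir, name),
                         arcname=f"{label}/{name}")
```

The key detail is the `arcname`: each entry is stored as `label/filename`, so the label survives inside the archive even when it's uploaded as a single file.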
So those are the files I'm gonna pass: all the images for the Catholic dresses and all the images for the normal ones. But instead of that, I'll just pass it as a zip file to make it faster. Where is that zip file? Okay, I can't find it, so I'm just gonna pass them as they are for now. I mean, you can do it both ways: you can upload a zip file, or you can just select all your images. This will take some time, so I've already uploaded my images over here, and you see that I added two labels, Catholic and normal. Actually, advisably, you'd input more data than just 100 images, because I got results that are just too good to be true; there is clearly overfitting here. But for now, for testing purposes, I uploaded only a hundred for each because, I mean, I'm lazy; there are so many pictures I'd need to collect, right? Then I'm just gonna train a model. Again, it will take some time, so let's jump straight to trying the already-trained model.

You can see it's quite self-explanatory: I get the precision, I get the recall, I get the latency, and I also get the true labels versus the predicted labels. In here, you can see the overfitting, because it's too good to be true: almost 100% accuracy. If I were to feed it something other than a Catholic dress, it probably wouldn't identify it. The precision is 91.43%; again, too good to be true. The recall is too good to be true as well. But one really informative thing here: 94.4% of the normal wedding dresses are identified as normal, but 5.6% of them are still identified as a Catholic dress, which is still okay. The Catholic wedding dresses, though, are getting 100% accuracy. Here I would already be suspicious; I would probably add more pictures of wedding dresses and retrain it.

Okay, so the next step: I would like to test it. Here comes the interesting part.
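Those precision and recall numbers come straight from the true-versus-predicted label comparison. Here's a small sketch of how they're computed; the five-example hold-out set below is hypothetical, not the actual dataset from the talk.

```python
def precision_recall(y_true, y_pred, positive):
    # Precision: of everything predicted as `positive`, how much was right.
    # Recall: of everything actually `positive`, how much we caught.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical hold-out results for the two labels in the talk
# (not the real 200-image dataset).
y_true = ["catholic", "catholic", "normal", "normal", "normal"]
y_pred = ["catholic", "catholic", "normal", "normal", "catholic"]
p, r = precision_recall(y_true, y_pred, "catholic")
print(round(p, 2), round(r, 2))  # 0.67 1.0
```

A perfect 100% on one class with only ~100 training images is exactly the "too good to be true" pattern the talk flags: the model has likely memorized the small dataset rather than learned the concept.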
I'm just gonna feed in the picture we took of ourselves at the wedding I attended recently. By the way, Catholic weddings are awesome. Here you see the bride, and even though there are many people in the shot, it's still identified as a Catholic wedding dress instead of a normal wedding dress, which is cool; I get some probably useful insights for myself. But what if I feed in some random image? Let's see if it actually works; I haven't tried that yet. This is why they say don't do live demos. Okay, that's just my friend's picture. You see that it's still identified as a Catholic wedding dress, probably because of her facial features, because she's from Kazakhstan as well. So of course it's not ideal, and as I said, it's overfitting. You can clearly see that if you give it random images, of course it won't identify them properly.

Okay, so next thing. You might be wondering how accurate it is. Okay, that's good, I can simply collect my images and pass them over, but how sure am I that this model is actually good enough? Well, it's actually pretty good: about 1.8 times better than the usual MobileNetV2, and those models were very hard to create, very algorithm-intensive. So in general it's performing well.

But what if I want to add one more label? I want to add Japanese dresses now. If I were to pass one in as-is, it would most probably be identified as a Catholic dress again. So what I'm gonna do is train it again, now with another folder called Japanese wedding dresses, where you collect all the necessary dresses. Then the question is: how do I switch? I developed a new model, it's much better than the previous one, so how do I switch? You can switch by using Remote Config, provided by Firebase itself; it's a separate product on its own.
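The switching logic Remote Config enables can be sketched in a few lines. This is a pure-Python stand-in, not the Firebase SDK: the parameter key `MyModel` is the name used in the talk's demo, while the default and the model names are assumptions.

```python
# Pure-Python stand-in for the Remote Config lookup described in the
# talk: the app reads a string parameter (the demo calls it "MyModel")
# and falls back to a bundled default when no remote value was fetched.
DEFAULT_MODEL = "wedding_dresses_v1"   # shipped-with-the-app name (assumption)

def resolve_model(fetched_params):
    # The remote value wins; otherwise keep serving the shipped model,
    # so a failed fetch never leaves the app without a model to load.
    return fetched_params.get("MyModel", DEFAULT_MODEL)

print(resolve_model({}))                                 # wedding_dresses_v1
print(resolve_model({"MyModel": "wedding_dresses_v2"}))  # wedding_dresses_v2
```

The design point is that the model name lives server-side: flipping one config parameter swaps which TensorFlow Lite model every installed app loads, without shipping a new build.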
So what you actually do is simply create a new config parameter and then switch it to the new model. Of course, you can directly deploy your new model, but that's just not advisable, I would say. Then, in my iOS code from just now, I create a remote config via Firebase Remote Config and get the string for MyModel; you saw just now that we called it MyModel, and I'm just gonna read it as it is. So for example, if I were in Japan, it would switch over to the Japanese dresses; if I were in Kazakhstan, it would switch over to the Kazakh dresses.

But where do I get the data? Okay, the way I did it is I just Googled, collected all these images, and fed them in. But that's a really tedious process, right? I'd have to go through them one by one, make sure each one is a Kazakh wedding dress, and add it in. So how does Firebase propose to handle this? Well, they created this custom image classifier app, which is open source; you can go ahead and try it out. Basically, you get your images by just picking a label like apple, taking pictures of the apple, and training it. Then in the app itself, the next user who comes in will see all these images already; they don't have to go and collect them themselves. Of course, for my case, there were for sure no Kazakh wedding dresses yet, so I still had to do it on my own. Well, I'll for sure upload some, so you can get them later. So that's how it works.

And now it's quiz time. For those of you who listened closely, let's try question one: what are the original steps to implement machine learning? Anyone? Okay, the girl over there. First, you must prepare your training data, which is like the wedding dresses there. Then you develop a model, and you train, tune and evaluate the model. Yes. Then you deploy it, and your app will give you predictions.
Yay, okay, cool. So we have the first prize winner over here, and I'm glad it's a girl. I'm just gonna pass it over to you. Thank you, congratulations. Don't worry guys, there are four more questions; you still have a chance, but you won't get the socks.

Next one: how has model accuracy changed from 2011 compared to today? 3.1? 3.5? 11, 6, 8... it's only 3. You're all giving me random numbers; give me the actual numbers. Okay, okay, good. Who is it? Can I get your name? Davish. Davish, congratulations. Next one: how fast has interest in machine learning grown in the past years? Okay, he was first, sorry. Oopsie. Okay, next one: traditional programming versus machine learning, what are the key differences? Okay: in traditional programming, you give the machine rules, but in machine learning, you let the machine actually learn the rules. Good, good, congratulations. There you go, good one.

Okay, then the last one. Sorry boys, this one is for girls only; I mean, the prize itself is a bit girlish, right? What's the recommended on-device model to use once AutoML is done training? It was there in the picture. Come on, guys, I mean girls. Sorry girls, do it, or else I'll have to pass the earrings to the guys. Oh, okay: TensorFlow Lite, all right. Yay, congratulations, you get the Firebase earrings. Okay, good one, quite fast.

I'd like to finish this off by saying that, okay, AutoML is still kind of, I don't know, alarming for most of the data scientists out there, because it might just simply steal our jobs, but it's still not at that point. It still requires your involvement: you still have to collect your data, and you still have to tune it if something goes wrong, because some models are just good for specific use cases and some are not.
So if you have a simple use case, you can definitely use AutoML, but if you have something more specific, say, like the example I just showed, where I want to identify the type of a dress, then you probably need your own input as a data scientist. So I'd say my job is not threatened yet, but it's coming, guys. I'd like to end by saying thank you very much for your attention. I hope you guys enjoyed it, and I hope you enjoyed the prizes. If you wanna contact me or have any other questions, all of my contacts are listed below. So yeah, thanks a lot.