Good morning... good evening, everyone. I'm Desmond. As you can see, I'm from SMU. I'm here to talk about machine learning. Okay, so this is just a quick outline of what we'll be sharing today. A little about myself: I spent a few years in Australia, then I came back to Singapore, and these days I work on urban mobility and smart consumption, together with a colleague of mine who is here. Today we'll talk about smart consumption and health, and how machine learning can help us with that.

So, one of the big topics in Singapore since last year's National Day Rally is the war on diabetes. These are statistics from MOH from last year. My parents and I all have some chronic condition, so we worry about this a lot; diabetes is an important issue for everyone. So how do you track what you eat? Usually a doctor or a dietitian will tell you to keep a food diary, writing down everything you have eaten, and honestly, it's really tedious. Before we start, let's take a step back. There are already many apps out there for logging food, with barcode scanning and manual entry, and typing, and more typing. So that's how food logging worked in the past. Do you use any of these?

In research, we created something called Food AI. It took us about a year to collect a lot of foods, all the local foods in Singapore, and we worked with HPB, where a division there helped us annotate them. For every single food, such as chicken rice, bak kut teh, or roti prata, all the calories, nutritional values, and information are packed in. You can go to this website, foodai.org, and request an API key to put it in your own app. You upload an image to the server, and it will immediately recognize what food is there. In the near future we will return more information, more metadata, like the calories, the fat, the sugar, all these things.

So this is a research technology. But how do we convert this into, I won't say commercial, but public usage? Has anybody used this app before? Okay, so one of the features, one segment inside, is actually powered by Food AI. If you look at this orange dot on the camera: if you take a photo, it will immediately recognize the food that you just took. It's all through a lot of deep learning and machine learning on our back end, and the process is quite tedious from time to time.
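As a rough illustration, here is how an app might call such an image-recognition endpoint. This is a minimal sketch only: the URL path, field names, and response shape are assumptions for illustration, not the actual foodai.org API contract; check the site for the real one.

```swift
import Foundation

// Hypothetical endpoint -- the real foodai.org API may differ.
let endpoint = URL(string: "https://foodai.org/v1/classify")!

func classifyFood(imageData: Data, apiKey: String,
                  completion: @escaping (String?) -> Void) {
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.setValue(apiKey, forHTTPHeaderField: "Authorization")

    // Simple multipart/form-data body with a single "image" part.
    let boundary = UUID().uuidString
    request.setValue("multipart/form-data; boundary=\(boundary)",
                     forHTTPHeaderField: "Content-Type")
    var body = Data()
    body.append("--\(boundary)\r\n".data(using: .utf8)!)
    body.append("Content-Disposition: form-data; name=\"image\"; filename=\"food.jpg\"\r\n".data(using: .utf8)!)
    body.append("Content-Type: image/jpeg\r\n\r\n".data(using: .utf8)!)
    body.append(imageData)
    body.append("\r\n--\(boundary)--\r\n".data(using: .utf8)!)
    request.httpBody = body

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // Assume the server returns JSON like {"label": "chicken rice", ...}.
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let label = json["label"] as? String else {
            completion(nil)
            return
        }
        completion(label)
    }.resume()
}
```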
So first, we spent a year collecting: mobile phone pictures, Google Images, crawling, everything. We created an auto-crawler to crawl images from publicly available Instagram posts, Google Images, everything, for a year. We currently have about, I must say, 1,000 to 2,000 food items, local food only. We also have problems, like new foods coming in, such as mala xiang guo, which we cannot really annotate yet because people pick and type the names quite randomly. It's quite difficult, but we are trying to solve that problem. Another thing: in Singapore we have teh C, teh O, teh peng, all these drinks that look very similar, and sometimes the calorie and nutritional values are really different. So we can only group them together and give a general output, and the value itself is just indicative.

So if you look at it, the second part is food image recognition, then we analyze the images, and on top of that, healthy dining applications. With Food AI, you can actually use this for your own apps if you want to. It's tested and proven; it's working. Every day we are collecting about 3,000 images from users of the HPB app. With that, we need very powerful machines and back-end infrastructure. This is the infrastructure from LARC, which is my parent research centre. We currently have 80 servers, and we are one of the first institutions in Asia to have an NVIDIA DGX server. We recently acquired two additional ones with Tesla V100 GPUs, way more powerful, which we are running for two specific research projects. Generally when we do demos, we use this GPU server just to run the demo. And obviously we also do some search and data mining with an Elasticsearch cluster.

And just to share, this is how the system architecture looks for Food AI. We have the front end, the back end, and the offline model, where we train the model offline on the back end and then update it. You guys can see, right? This is a traditional machine learning and deep learning setup: front end, back end, and offline.

So think about it: what if your phone could do the vision recognition and all the nutritional values on the front end only, without transferring information to the back end and back? Sometimes that round trip takes quite a while. Recently the new HPB app came out; when we upload an image to the HPB server, it takes about 10 seconds to get back the recognition because of server issues. They are upgrading and hardening it due to the recent data leak issues.

So this brings us to Create ML and Core ML. This year's Apple WWDC conference actually provided very good insight on Create ML, where you can create your own machine learning model without an enterprise server, just by using your own Mac. You need to upgrade the Mac to the latest Mojave operating system and iOS 12, because the operating system itself comes with the machine learning packages. So if you are using iOS 11 or the older High Sierra, then sorry, you can't. But it's easy to download and try out.

Just a quick recap on Core ML itself: Core ML was announced last year, and this year we have Core ML 2. Okay, this is a very simple way to integrate a trained model into your apps. For example, if you are using an iPhone, obviously, your photo library already comes with trained image recognition of faces. If you look at it, it will actually ask you: okay, is this you? And you annotate yourself.
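Since the on-device path described above only works on the newest OS versions, an app shipping the model would typically guard it behind an availability check and fall back to the server otherwise. A minimal sketch, with hypothetical helper functions standing in for the two code paths:

```swift
import UIKit

// Hypothetical helpers -- stand-ins for the two paths described above.
func classifyOnDevice(_ image: UIImage) { /* run the bundled Core ML model */ }
func classifyViaServer(_ image: UIImage) { /* POST to the back-end recognizer */ }

func classify(_ image: UIImage) {
    if #available(iOS 12.0, *) {
        // iOS 12+: a Create ML-trained model runs on-device, no server round trip.
        classifyOnDevice(image)
    } else {
        // Older OS versions: fall back to the back-end service.
        classifyViaServer(image)
    }
}
```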
And from there, it actually allows you to tag, and as more pictures come in, it automatically tags them for you. Ya, this is their own Vision. So if you look at it, you have Vision, natural language processing, GameplayKit, all these things, and Core ML is the foundation underneath that processes all of this through your Mac or your phone's processor. My first computer was 400MHz, I think. Ya, but our iPhones now are dual-core or quad-core at over 1GHz, depending on which phone you are using. So they are definitely powerful enough to run all this processing while you are not using your phone; when your phone is plugged in, it can run in the background.

Okay, Create ML. This is the very interesting part. For those of us who happened to attend this year, this is the key takeaway from the whole event. It created a lot of interest across the whole industry, especially among developers. You can do a lot of things with machine learning with Create ML, where you can create your own models. Ya, it's just three simple lines: to run a default machine learning image classifier, you only need three lines of code in Xcode and everything will be fine. If you have a super powerful MacBook with a GPU, it will run on the GPU itself. And you can actually customize what you want.

Ya, but machine learning is not for everything. All you need to start is a problem. But you have to ask yourself a question here: do you really need machine learning in your app? Ya. If you are trying to solve a problem that can be solved with machine learning, then go for it. And if you just want to try it out, Create ML is also a very easy way to do it. I myself am not a machine learning engineer by training; I'm a network engineer. And recently I've been testing this out because Food AI only runs on the back end, so we are looking at using Create ML to solve the problem on the front end. Our data model is about 3GB; it's not possible to load that into our apps on the front end. Ya, but we tested a small subset of our data model, just five different items, 15 kilobytes, and it does almost the same thing. Almost; I didn't train it well enough. Ya.

Well, I got this image from Apple, so it's quite easy to explain from their perspective. You have a problem, but to solve the problem you need to collect a lot of data. And with the data you need to train the machine to make sure it knows what you're trying to achieve. After training, obviously you need to make sure your model is correct and performs as you expect; that's how you evaluate it. And subsequently you export it as a Core ML model and plug it into your app. Currently Create ML only has three functions, images, text, and tabular data, which are not as robust as other machine learning frameworks. We will talk about images today because it's the easiest and the most fun at this moment. By looking at this image, Create ML will return "chicken rice". This is how the model works. And at 18 kilobytes, it's really small compared to a 3GB data model.

So, let's quickly do a demo. As I said, these are just the three lines that you need. Can you guys make it bigger? So, the first line, obviously, imports CreateMLUI. Besides the UI you can also use the terminal or plain code, but for demonstration purposes the UI is easier; it speaks a million words. Then you create a builder, an image classifier builder, and show it in the live view. So this is the live view. And you'll see it says "drop images to begin training". But obviously, for the training part: where is the data?
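For reference, the three lines just described, as they would appear in a macOS Playground with the Create ML UI (assuming the macOS 10.14-era API shown at WWDC):

```swift
import CreateMLUI  // Create ML's drag-and-drop UI, available in macOS Playgrounds

let builder = MLImageClassifierBuilder()  // default image-classifier builder
builder.showInLiveView()  // opens the "drop images to begin training" live view
```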
In view of time constraints, I actually just dragged out a few food items from our Food AI repository. So we have bak kut teh, black carrot cake, chicken rice, white carrot cake, and if you notice, there are two combined categories here: we have wanton mee (shrimp noodles, dried noodles, shrimp dumplings), and we have a tom yum and curry category, that's laksa, curry noodles, tom yum noodle soup. Why are they categorized like this? Because they all look really, really similar, and they generally cluster into the same genre of food items: curry noodle soups.

Okay, obviously data collection is really the tedious part. Why do I say so? Look at it: you have to annotate them in a list, and this is just bak kut teh. We have thousands of images that we actually crawled from the net and annotated to make this possible. And you can always do pre-processing, like white-balancing them or rotating left and right, to make them more recognizable by the app and the model.

So what I'm going to do, come here: there are a few pre-processing options you can set, iterations, training data, validation, augmentations like crop, rotate, blur, expose, all these things. Adding additional steps costs more processing time and power, so we'll just do the default: 10 iterations. Okay, so to pick the training data, just select the main folder; once you've annotated them, even if you have a few thousand, just select the folder and open it. That's it. And you just train it. You do it by yourself; you don't need to write any Python code or anything. It tells you how many images it has processed. It takes about 3 to 5 minutes to process everything. My image library currently has only 2,000 images, and I'm using a quad-core 13-inch MacBook Pro. I tried an older MacBook Air; it was about two times slower, so about 15 minutes. Maybe today it's faster: it's at about 400 images already, in less than a minute. We're doing the quickest run, the 10 iterations.

Q: Can I ask how you annotated them?
We actually hired interns and partners. We spent a year, and about 300 grand, to ask them to search every day. You only have one job: go and find all the chicken rice images. It's kind of a fun thing, but in order to make it realistic, we really have to establish the ground truth, which means asking people to actually find the images. Nothing beats a human.

Q: So you're feeding the annotations and images together?
Yes, you just drag over and select the folders, and Create ML is really smart enough to know that you've already categorized them. You can categorize them in a single layer using the file names, or you can categorize them as subfolders, where each folder is one category. We're almost there.

Q: What happens if the same image falls into multiple folders?
Well, I guess we'd have some problem with the machine, so you try not to, because, let's say with chicken rice, things will happen. That is why there's a training module coming up soon. We're halfway there: it's 2 minutes, 1,250 images.

Q: What does 10 iterations do, compared to, say, 9 iterations?
Okay, so the more iterations it runs, the more features it recognizes in the images. That's a layman's way to say it; it's not really the actual mechanism. How it works is that the more iterations you run, the more comparisons it makes against each image, so it takes a longer time to process.

Q: What about images of the same dish with different filters applied?
Yes, filters matter. Imagine that the app itself is an eye, a pair of eyes. The more images the app has seen, in different manners, with annotated ground truth, the better it can guess.
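For those who prefer code to the drag-and-drop UI, the same training flow, with the 10 iterations and the augmentation options mentioned above, would look roughly like this. This is a sketch assuming the macOS 10.14-era Create ML API, a labeled-folder layout of Training/<category>/<images>, and a hypothetical path:

```swift
import CreateML
import Foundation

// Hypothetical path; each subfolder name is one category (bak kut teh, laksa, ...).
let trainingDir = URL(fileURLWithPath: "/Users/desmond/FoodAI/Training")

// 10 iterations (the demo's quick default) plus a couple of augmentations.
// More augmentations mean more processing time, as mentioned above.
let parameters = MLImageClassifier.ModelParameters(
    maxIterations: 10,
    augmentationOptions: [.crop, .rotation]
)

let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir),
    parameters: parameters
)
print(classifier.trainingMetrics)    // accuracy on the training images
print(classifier.validationMetrics)  // accuracy on the held-out validation split
```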
A filter like sepia makes everything a bit more yellowish; white-wash basically just blanks out the whites. Because of time, I actually removed the augmentations for this run. But if you look at it, we have crops that look like this: just half the picture to recognize. And chicken rice will have augmentations that change the colour, like this one. It gives you a lot of funny versions to help it recognize, like this one, which we actually rotated. It's all done through code, all of this. So, it's almost done. We're almost there.

Q: Sorry, when you say it's done through code, that means...?
Okay, we created a script to do all this preprocessing. It's not possible to hire someone to rotate or crop a few thousand, or hundreds of thousands, of images by hand.

Q: So, based on your experience, what accuracy would you say is acceptable?
For this demo, I think 50% is acceptable. But if you're looking at a research context, usually 80% and above. We have a few papers you can look up by my colleague Stephen Hoi; he's the principal investigator on this project. We're almost there.

Q: Just now you categorized in folders and then dropped the entire folder. Can the model only recognize things under those categories, or can it be combined with another model level?
Yeah, different levels. Say Training is the training folder; under it we have bak kut teh, carrot cake, chicken rice, laksa, wanton mee and white carrot cake. These are the categories that you want to identify; anything outside the subfolders won't become a category.

So, you see the accuracy is not very high today because it's only 10 iterations: it's 78%. After that, if you look here, it says "drop images to begin testing". In machine learning, let's say you have 1,000 images; the generally accepted best practice is an 80-20 split. You take 800 images for training purposes, and the other 20%, which is 200 images the machine hasn't seen before, you use to test how good your classifier is.

(On text classification:) I'm not really a text person, I can't really answer that, but for text we generally try to put certain keywords in between wordings and see how it works.

So, after training, you select the test folder and drag it in, or you can do the same thing I did and just select the folder. Okay. This will be faster because there are fewer images. One cool thing about this Apple default image classifier is that you see the images flashing past, which gives you a sense that it's processing, together with the logs below. That is why you see the training accuracy is 78% and the validation is 78%; the evaluation should be around there, or lower, or sometimes higher. Because we are running only 10 iterations, it's faster; if you run just one iteration, obviously the matching percentage will drop. Sorry, what was your question again?

Q: Let's say I put a folder of drinks inside; what will be the outcome?
It will try to match whatever category is inside. Obviously you can also add another category for things that are not found.
Q: Or will it show less than 10%?
It will show the highest possibility, but you can always write your own logic on top, to say, for example, this is food, this is not, or to reject anything below a certain confidence. Okay, we are almost done.
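Continuing the code sketch from the training step above, the 80-20 evaluation on unseen images and the final export would look roughly like this, again assuming the macOS 10.14-era Create ML API and hypothetical paths:

```swift
// Held-out 20% the model has never seen, in the same labeled-folder layout.
let testDir = URL(fileURLWithPath: "/Users/desmond/FoodAI/Testing") // hypothetical path
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testDir))
print("Evaluation accuracy: \(1.0 - evaluation.classificationError)")

// Save the trained model as a .mlmodel; dragging it into Xcode then
// generates the wrapper classes for you, as described below.
try classifier.write(to: URL(fileURLWithPath: "/Users/desmond/FoodAI/FoodClassifier.mlmodel"))
```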
Q: Does image resolution matter?
Yes, it does actually. If you feed it very low-res images, it has difficulty trying to annotate them, because it goes down to the low level to do it. I guess anything around 1024 will be okay. Ya, you don't actually need a good camera to take a usable photo. So, one thing about machine learning: we need to be patient. The last time we ran the whole subset model, I think it took 3 days on the DGX; I think we had 3 or 4 million images, running 120 iterations.

Okay, done. So, the validation is 76%, and if you look here, it actually gives you the metrics per class. Precision for white carrot cake is quite bad. The highest are bak kut teh, black carrot cake, chicken rice, laksa, all these. See, it will actually show what is being predicted and whether it's true or not. Like this one: it predicted white carrot cake, but it's actually chicken rice, because this is chicken rice. So you look at it: Create ML has its flaws, but it's easy if you want to do basic machine learning; you don't really need to code it. This one it predicted bak kut teh, but it's actually chicken rice. So on and so forth.

So, next, once you're done, it gives you something to let you save the model, and you should save it somewhere if you want to. I already saved one that I trained previously, so the accuracy will be higher. Which is here. Okay. And we have a code base here. Basically you just do a drag and drop, because one thing about the new Xcode and this machine learning model is that the moment you drag it in, it actually generates all the necessary classes and code for you, so you can just use it. Okay, and this is the output. Yeah. I will actually open-source this demo so you can test it out for yourself.

Q: Including the images?
I can't give you the images, but I can give you the model, because although the images are from the public domain, there are copyright issues with the collection.

Okay, so let me see. There are two interesting parts here; the interesting part will be Vision. Okay, I'll just quickly go through the code itself. We have to import AVKit because we're going to do real-time machine learning detection, like OpenCV. We need Vision so it can actually look at the frames, and the image picker for taking photos. Okay. We do all the classifications here, and this is the output from the real-time capture. We update the classification labels with the wording, all of this. And this is the main processing: we take the first observation, the one that gives the highest confidence.

Okay, I'm going to quickly run it. I actually picked an iPhone X because I think it supports 4K resolution. So if you happen to have one, you can try it yourself, because here it actually allows 4K capture of the image in high quality. Okay, let's run it. Wait, not me. Okay, so you can see, right? Alright, what's going on? We just encountered some technical difficulty; there's no Wi-Fi here. We're doing real-time recognition. So we choose any chicken rice here. Okay, that's good. Maybe this one? Okay, let's see what I get. We run it in real time, and it gives me chicken rice. So this is a good way to play with it if you want to. Like, it recognizes it in a way, but this might not work; just like we said, what if we give it something else? It looks like curry noodles, tom yum noodles; it tries its best. You can add a "not found" case if you want to.
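A minimal sketch of the real-time classification path just described, using Vision over AVFoundation camera frames. Here `FoodClassifier` stands in for the class Xcode generates when you drag the .mlmodel in, and the 0.5 "not found" cutoff is the kind of confidence logic mentioned earlier; both are assumptions, not the exact demo code:

```swift
import AVFoundation
import Vision

final class CameraClassifier: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // FoodClassifier is the hypothetical class Xcode generates from the dragged-in .mlmodel.
    private let request: VNCoreMLRequest = {
        let model = try! VNCoreMLModel(for: FoodClassifier().model)
        return VNCoreMLRequest(model: model) { request, _ in
            // Take the first observation, i.e. the one with the highest confidence.
            guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
            // Reject low-confidence matches instead of always reporting the best guess.
            let label = top.confidence > 0.5 ? top.identifier : "not found"
            print("\(label) (\(top.confidence))")
        }
    }()

    // Called for every camera frame; hand the pixel buffer to Vision.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }
}
```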
Q: So, say, this one?
It's definitely chicken rice. Okay. Something like pak pute might not work, but anyway, let's try. Okay. So it's confused whether it's chicken rice, dumplings, or pak pute. Pak pute or chicken rice. So if you train it more, then you'll get better recognition. Okay?

So, coming back, due to time: what's next? This was a very basic Create ML demo. If you want to test it out, there are a lot of tutorials and free food data sets online that you can download and try out yourself, and mix and match and see. Even Apple themselves have actually created a data set to try. And I always like to say this: it's what you do that makes the difference. I'll take questions later on because I'm running a bit late on my time. Thank you so much.