We'll both take turns, and then at the end show how AI ties all of this together. I have a microphone. No? Oh, okay. Is it on? Yes. Good. Thank you. I didn't set it up. That's fine. Actually, I wanted to save questions for the end, but I think now I'll take them as I go. Let's do that. Okay, good. One section at a time. Alright. Okay. Can somebody tell me what IoT stands for? Internet of Things. Yes, but what does it mean? What does it mean when I stand here and tell you that I do IoT? What do people understand by that? Yes. Good. So I think what IoT means is: I have a lot of sensors. These sensors collect data, and then all that data gets pushed over the internet to, most importantly, the cloud. So that gives us cloud computing, which is the other thing happening in parallel. So the pitch used to be: okay, sure, let's ship lots of IoT devices. IoT devices that count your steps, track your heart rate, measure how well you sleep, and so on. People have been collecting all this data and pushing it to the cloud. So when you have IoT together with cloud computing, what problems come up? Anybody? Let's discuss this as we go; it's a bit hard for me to just talk one-way. Security, right? Okay, so we have security as a problem, let's note that down. What other problems do we have? Privacy. Sorry? Privacy. Privacy? I'd say that falls under security. Standards. Sorry? What's the problem? If you buy devices from different vendors, each device speaks differently from the others. I don't follow. What do you mean? You have a platform where every device has to talk to every other device. So how does each device talk to all the others?
Communication protocols? Alright. So I'd group communication protocols under network bandwidth. Actually, if we set network bandwidth aside, communication protocols aren't that big a problem, because we can always agree on a protocol. But network bandwidth itself: is it efficient if I push every tiny detail up to the cloud? If you're limited by internet bandwidth, you can only push so much data per second. You cannot push an unlimited amount. So what happens if I have sensors strapped to my wrist, my legs, everywhere, and you want to upload all of it over limited network bandwidth? You can't. So we have a network bandwidth problem. You collect a lot of data, but you cannot upload it all and get the results back immediately. So that's another problem. What other problems can you think of? Scalability. Scalability? Yes? Alright, scalability is a bit subtle. How do I put it? It's really about centralization. You push everything out. You trust Amazon to do all the processing over there. So how does that work? There's one whole server room doing all the data processing, etc., in one single, critical place. So the problem is: what happens when someone takes that server room down, or when it fails? So that's the argument for decentralization. And when we talk about decentralization, we talk about decentralizing data, and decentralizing computing power. Decentralized computing power means that instead of having all the processing in one place, you can have
a little here, a little there, and so on. Same idea on the information side: decentralized data. If you don't have one mega data center holding all your data, it becomes safer. So with IoT and cloud computing, we have these three big problems. And these three big problems are pushing us toward new technologies. So now we'll look at the new pieces that address them. Okay. So for cyber security and decentralization, what do we have? We have blockchain. Alright. Is anyone familiar with blockchain technology? Okay, so, can somebody tell me what blockchain is, in the simplest terms, in one sentence? Right. Decentralized data. So the idea of blockchain is: I share the information with you, and I share the information with you, and the information is everywhere. Then, supposedly, it becomes more secure, because other people can verify the data without any single party holding the whole thing and controlling all the data. So a blockchain is literally that: a chain of blocks. You add a block, then you add another, and each new block links back to the one before it. And then, of course, another side of blockchain that you might have heard about is mining, where you verify transactions over and over, but we're not covering that, okay? That's not what I'm here to talk about today. But just to give you an idea of where blockchain is used: things like cryptocurrency, Bitcoin, Ethereum, all of these are built on blockchain technology. So blockchain is another term you're going to hear.
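To make the chain-of-blocks idea concrete, here is a minimal Python sketch. It is a toy, not a real blockchain (no network, no consensus, no mining), but it shows the one property the talk describes: each block commits to the hash of the block before it, so editing any earlier record invalidates everything after it.

```python
import hashlib
import json

def block_hash(index, data, prev_hash):
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps({"index": index, "data": data, "prev": prev_hash},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_chain(records):
    """Build a toy chain: each block links back to the one before it."""
    chain, prev = [], "0" * 64  # all-zero "genesis" hash
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    """Recompute every hash; any edit to an earlier block breaks the links."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash(b["index"], b["data"], prev):
            return False
        prev = b["hash"]
    return True

chain = make_chain(["alice pays bob 5", "bob pays carol 2"])
print(is_valid(chain))              # True
chain[0]["data"] = "alice pays bob 500"
print(is_valid(chain))              # False: the tampering is detectable
```

In a real system, many parties hold copies of the chain and compare hashes, which is what makes it hard for any one of them to rewrite history unnoticed.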
So as far as investors go, the moment a founder says, oh, I'm building it on blockchain, they say, here, take my money. So the things that really took off are the ones built on this distributed, decentralized design, or some variation of it. The other thing that follows from this: blockchain is really a concept, right? It lives mostly on the software side, how you manage data, etc. What about the computing power side? So the term that comes up for that is edge computing. Okay, who here knows about edge computing? Anyone? Okay, wait, sorry, let me ask this properly: hands up if you know edge computing. No? Okay, that's fine. Okay, the friend over there who said something about edge computing, tell us what you understand by it. Very good. So what he said is that you process the data closer to where it's generated, instead of shipping it all off first. Okay, so I'll talk about edge computing and fog computing together, and later I'll tell you the difference between the two. So, mainly: right now, all your data gets processed up in the cloud. Then down here you have your sensors, your devices, the things generating the data. And there is a gap between the sensors and the cloud. So to do the processing in the middle, ideally closer to the sensors than to the cloud, you put devices there that crunch the data, and that is edge computing. So the idea of edge computing, or fog computing, is that it's a physical hardware platform that actually crunches all the nitty-gritty data before anything gets sent to the cloud.
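As a rough sketch of what "crunching the nitty-gritty data before sending it to the cloud" might look like (the function names, window size, and sample values here are illustrative assumptions, not the speaker's actual platform): instead of uploading every raw sensor sample, the edge device uploads one small summary per window.

```python
from statistics import mean

def edge_summarize(readings, window=10):
    """Aggregate raw sensor samples on the edge device: instead of
    uploading every sample, emit one summary record per window."""
    summaries = []
    for i in range(0, len(readings), window):
        w = readings[i:i + window]
        summaries.append({"n": len(w), "min": min(w), "max": max(w), "mean": mean(w)})
    return summaries

# 100 raw temperature samples shrink to 10 upstream messages
raw = [20 + (i % 7) * 0.1 for i in range(100)]
summary = edge_summarize(raw)
print(len(raw), "->", len(summary))   # 100 -> 10
```

The raw samples stay on (or near) the device; only the summaries cross the bandwidth-constrained link to the cloud.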
So what does that mean? It means, first of all, that it saves network bandwidth. Because if I want to do real-time processing and I have to send the data all the way to the cloud, it takes some milliseconds to go there and come back. So for hard real-time decisions, say in an autonomous vehicle, where everything has to happen immediately, you cannot say, oh, a hundred milliseconds here, a hundred milliseconds there. It has to happen in real time. So that's the case for having edge computing. So what's the difference between edge computing and fog computing? Okay, this is my interpretation, and some people find it quite controversial. Fog computing is, in my opinion, Cisco's way of saying, no, it's not edge computing; we came up with this concept many years ago and we have our own term for it. So fog computing versus edge computing: nobody else really says fog computing. Usually, when you hear fog computing, it's people from Cisco. Okay. So why fog? Here's my conspiracy theory. You have the cloud up there, and then you have fog, which sits a bit lower than the cloud, so: fog computing. Okay, never mind. So, I really like edge computing. Edge computing, if you've been following the media, has only been a buzzword for maybe the last five months. So edge computing does the processing close to where the data is generated. Honestly, it's a simple idea, but it's going to be the next big thing.
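The real-time point can be made with back-of-the-envelope numbers. The 100 ms cloud round trip and 5 ms on-device inference time below are assumed figures for illustration only; the point is how far a vehicle travels while it waits for an answer.

```python
def distance_during_round_trip(speed_kmh, rtt_ms):
    """How far a vehicle travels while waiting for one processing round trip."""
    speed_ms = speed_kmh * 1000 / 3600   # km/h -> m/s
    return speed_ms * (rtt_ms / 1000)    # metres covered during the wait

# Assumed numbers: ~100 ms round trip to a cloud service
# versus ~5 ms for inference on an edge device in the vehicle.
print(round(distance_during_round_trip(100, 100), 2))  # 2.78 m driven blind (cloud)
print(round(distance_during_round_trip(100, 5), 2))    # 0.14 m (on the edge)
```

A couple of metres per decision is unacceptable for braking or steering, which is why hard real-time loops stay on the edge and the cloud gets the non-urgent data.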
Any IoT or AI startup that has edge computing in its pitch is going to get funded. So what edge devices do is data aggregation. So what we do at Smart Car is build our own edge devices. I could see edge computing coming about a year ago, back when the term wasn't even a thing yet. So I have my own edge platform called the Distributed Intelligent Network System, with its own edge devices. So now, when I go into meetings and tell people, oh, we build edge devices, they get very excited. So edge devices do the aggregation, because shipping everything raw creates problems and bottlenecks. You can set it up like a hydra: one master edge device coordinating several slave devices, each of those aggregating data from many sensors, and then those devices can have devices under them, with many more sensors, and so on. So what you have is your own distributed intelligent network system, i.e. an isolated computational system. So I've been in talks with, let's say, the government in Singapore, etc., to actually form this whole distributed intelligent system for various projects. So what are the benefits of it? It's very isolated. All the processing happens on these devices themselves. It doesn't get sent to the cloud. You don't necessarily have to connect it to the cloud; you can. So what are the limitations?
It's scalable, in the sense that you can connect one device to another. So if, let's say, one device has 1.5 teraflops and you connect it to another device, you have 3 teraflops. But in terms of analysis of historical data, it's not that good, because you're limited to that processing power and to the onboard memory. So that's the limitation. So if, let's say, I want to store, to archive, a lot of this kind of data, like a video network, if you're setting your cameras to 60 frames per second and you want to store every single frame, you cannot just rely on the hardware on-site; you need to send it elsewhere. So that's where cloud computing comes in: high-performance cloud computing. So now you've got a good idea of edge computing. So, talking about the hardware: who here actually knows about DNA computing and neuromorphic computing? Anyone heard of neuromorphic computing? Okay. These are new buzzwords that you're probably going to hear in the next one or two years. It's starting to get traction already; right now it's in the R&D phase. I'll just quickly go through them with you, so that when you actually hear these terms, they don't come as a surprise. Alright. Okay, wait. Before I talk about neuromorphic computing and DNA computing: under edge computing and fog computing, you have three things. Okay, another three buzzwords that you're going to hear very often. The first one is the GPU. The next one is the FPGA. The other one is ASICs. Right. So right now the buzz is GPUs; ASICs are more for blockchain mining. Alright. Okay. So, GPUs. Anybody know about GPUs? I'm sure. The gamers. Okay, most of the guys. Okay. So a GPU is pretty much this: you have your CPU, which does your whole computation, and then you kind of outsource work to the GPU. So one is the master and one is kind of the slave. You outsource the task to the GPU. The GPU is very good at parallel processing, so it can actually process, imagine, instead of one processor doing the work, it
spreads it out into multiple tasks. So it can assign different threads within the GPU to do it, and what we call parallel processing actually happens; then once it's done, you bring the result back to the CPU. So, previously, I mean, AI and neural nets have existed since my parents' generation, but it's only because of the recent breakthroughs in hardware computation that we can actually power all this kind of data processing in real time. So, the GPU. Right now the leader is Nvidia. Of course, they are facing tough competition; more people are coming into the game against the big boy. So you have ARM, ARM has the ARM Mali, you have AMD with their Ryzen, and a lot of other players are coming into the GPU space. So the GPU, I think, is number one right now, although people are now exploring FPGAs as hardware for AI computation. FPGAs, in my opinion, are very tough; not worth going into yet if, let's say, you're a startup. Maybe as a researcher it makes sense. But FPGAs are super duper hard to program on, and to program an FPGA and also program deep learning stuff, you need to be competent in both. It's already rare to find someone competent in programming deep learning stuff, so FPGA plus deep learning is just a really, really bad combo; you can't really find that talent in the market right now, or if you can, it's very rare. And then ASICs: an ASIC is kind of a custom board that's built to do only one specific purpose. So, for example, blockchain miners, or Bitcoin mining. Have you heard of Bitcoin mining? It's people trying to mine bitcoins to validate transactions. Of course, when you want to validate, a lot of it is brute-force kinds of attacks, and it's just a lot of processing power; that's why people have Bitcoin mining farms, based mostly in China. So they built specific hardware just for one sole purpose, which is Bitcoin mining, etc. So you can also build a board with one sole purpose: to perform
AI. But the thing about the ASIC board is that once it's done, it's done; that's its only purpose. It's like, what is my purpose? Just that. It's quite sad, but it's possible. So these are some buzzwords that you're going to hear for the hardware architecture for fog computing and edge computing. So, as promised, I'll bring you very briefly through neuromorphic computing and DNA computing. Okay, this is really very avant-garde; to be honest, there's no legit product out there in the market right now, but there's a lot of research. Is it within IBM? I think it's IBM; IBM TrueNorth, is it, or TrueSense, or something like that. Okay, the idea of neuromorphic computing. So A*STAR in Singapore has been looking into it; I have a friend who was leading the department for neuromorphic computing. So, neuromorphic: when you look at the word, neuro, it reminds you of something in your brain, your neurons in the brain. So the idea of neuromorphic computing is: how do you actually replicate the way your brain works, the cells in your brain and how they work, in a computer? So it's a bit like neuroscience. Is it called neuroscience?
I think so. So the idea is that every single cell, in this case a hardware unit, will have its own processor and its own memory. Right now, if you look at a computer, there's one processor here and then there's memory there, but they're not on the same thing. So a very rough analogy for neuromorphic computing would be how you scale with edge computing devices: you have one edge computing device connected to another edge computing device, and another here and there. I think that could be considered a very early stage of neuromorphic computing, because every edge computing device has its own processor and its own memory chip. So one node, with its own processor and memory, talking to another node is a bit like one brain cell talking to another brain cell. But in the most ideal situation, every single element within a neuromorphic chip would be like that, only not so big: its own processor and memory, all made into a small little silicon chip. So that's a really interesting concept, neuromorphic computing. Nobody has actually really nailed it down. Was it IBM TrueNorth or something; you can Google it to find out more. They have been the leaders in neuromorphic computing, but while they say they have it, nobody has actually seen it; nobody outside of that company has seen it. So you don't know whether it's real or just something to push up stock prices. Okay. The next part: DNA computing. Ah, DNA computing has been tied to the concept of memory. Right, let's talk about memory: memory storage, data storage. Yeah, people have been obsessed with how you store data. I mean, way back, one big floppy disk could only store how many bytes, and now you have hard drives, SSDs, and now M.2 SATA and stuff like that. DNA computing pushes it one step further. Okay, so if you recall, in your DNA, in the helix formation, you have the ATGC pairing. So A can only pair with T,
and G can only pair with C, or vice versa. So because there's only AT or GC, it's very much like binary: you have 10 or 01. So it becomes very simple: if, let's say, you can encode the ATGC pairing into 1s and 0s, because it's either one or the other, you can actually shrink the size of your storage enormously. So imagine: in one drop of blood you could really store so much information, because there are tons of DNA strands within a single drop of blood. Of course, right now they're not testing it with blood; they're testing it with liquids, gels, and fluids. So it's very interesting: they have already done DNA computing, it's already out in the media, you can research it; it's been done. The thing is that it's very expensive to do DNA computing right now. I think it's just a matter of time before they reduce the cost of DNA computing and make it more accessible to people; it's just a matter of time before it becomes a public thing. But yeah, it's a very interesting topic; you can research it. Give it maybe another 5 years or so and it hits the public market, I think. Of course, it needs to be biologically safe and so on. Okay, so, how do I resize this? Okay, so we have all these buzzwords; now let's look at the data processing side of things. So with IoT we are collecting a ton of data, right? Hmm, okay, I'm going to just draw it out here again, just the gist of it, because we're lacking some space. Now let's talk about data processing. We have a few more buzzwords here, and that would be data mining, your IoT, which generates the data, and of course, how can we forget our dear open-source data, right? Everyone loves open source. Okay, so we have a ton of data coming in; the next big thing is, okay, great, how do we process the data? So when you talk about data processing, a few things come to mind. Alright, I'll just put it all here. First thing about data processing: can you
do it in real time? Sometimes real time doesn't really matter, alright? If, let's say, I just want to monitor some data and then look back at it in maybe two weeks or something, some people don't really care about real time. But it is definitely a buzzword, whether you can do it in real time or not. And then you're also looking at the integrity of the data processing, the integrity of it, and so on and so forth. When I talk about integrity, I'm talking a little bit about the security of it, and, you know, since you're handling so much data, whether any of it actually gets lost. And the most important thing about data processing is: can you get meaningful data out of it? So with data processing you have your next buzzword, which brings us nicely back home to artificial intelligence. I'm sure you've heard terms like big data, big data processing, where people come up with programs to try to make sense of data: make sense of data with Excel sheets, with proprietary algorithms, with data visualization. There's a ton of startups that just focus on data processing. So artificial intelligence is just another branch of startups that do data processing; no big deal, really. So I'm going to clarify some things about artificial intelligence, because right now there's a big hype. People seem to think that it's black magic; they think that it's going to take over the world. But I mean, it's not. One thing I realize is that whatever concepts men don't understand, they think it's black magic, that it's going to take over the world, and they go, oh, it's a god. But honestly, it's not. Okay, so in artificial intelligence you're going to hear things like rule-based systems, logic-based systems, etc. A lot of it is still rule-based systems. So, yeah. Okay. Anybody know the difference between machine learning and deep learning? No?
Yes, correct. So artificial intelligence is this big umbrella. We say something is artificial intelligence if it does its own data processing; it can do supervised learning and unsupervised learning. Artificial intelligence is the big umbrella over the ways a machine can actually process things by itself. And then under artificial intelligence you have machine learning. Machine learning has existed since my parents' generation; it's a lot about feature-based extraction and things like that. And then under machine learning you have deep learning. So deep learning is a subset of machine learning, which is a subset of artificial intelligence. Does that give you a good picture? So artificial intelligence is a very broad and generic term; we just call this whole genre of data processing AI, and it has been around for a very long time. A brief history of it: we have gone through two AI winters. So what's an AI winter? In the past, around World War II, you've heard of the great Alan Turing and so on. The Turing test, way back in the day, was this: they had these machines where you tap-tap-tap and you're talking to somebody else, but it prints out on carbon paper, a little bit like a typewriter, a combination of a typewriter and a fax machine, carrying a real-time conversation with the other person. So in the Turing test, you tap-tap-tap, and if you are unable to distinguish that you are chatting with a computer, and you actually think it's another person, then that machine passes the Turing test. So that's really the early stage of artificial intelligence in terms of natural language processing. So NLP, that's another buzzword you're going to hear, a subset of AI. For natural language processing, what we have now is, of course, chatbots, right? When you type into a help-support widget on a government page, you know, there's a
chatbot replying to you, or Siri on your phone, etc.; those fall under the classification of NLP. So back in Turing's time, you had this whole boom of AI innovation, and then a ton of movies came out about AI, like Matthew Broderick in WarGames and stuff like that. And then the problem that happened was this: the field was actually making good progress, but when the VCs heard about it, they went, take my money, take my money. And of course, when they say take my money, they also say, okay, promise me something: promise me that you're going to build this and this and this. And the startups over-promised on the things they could do. The VCs who were investing in them did not have a very clear understanding of how the technology worked, so the startups back then ended up over-promising; they burned through all their funds, and then winter: just nothing. And then after that, a whole new wave came up again, like, okay, no, AI is still good, let's revive the whole AI movement. And then the VCs again went take my money, take my money; over-promise; winter again. So man didn't really learn from his mistakes: twice it went into an AI winter. You can actually look up the history of it on Wikipedia. So right now, are we going to go into another AI winter, given there's so much hype about AI again? I personally feel no, because the people spearheading the whole AI movement now actually know what they're doing. There are people leading the frontiers of the AI innovation curve, like Google and Facebook with their open-source projects, and now a lot more people have a clearer understanding of what AI is and how AI works. So later I'm going to teach you how to talk to an AI geek: you know, you don't tell them, oh, build me this, build me that, but how do you actually talk in AI terms? So yeah, that's a brief history. Okay, did I miss anything?
No? Okay. Very nice segue, okay: how do you actually speak AI? Hmm, okay, let's not go into the techie part first; let's talk about what it can do and what it cannot do. Let me give you a very good example of what it cannot do. Okay, in terms of what it cannot do: I went into a meeting with the head of an innovation department. Oh, shoot, I shouldn't say this. Okay, scrap that, right, NDA. Okay, I went into a meeting with this very big-shot guy, this is a different case, okay, by the way. So I'm meeting with a very big-shot guy, and he says, hey Annabelle, I want you to replicate the Amazon Go store in Seattle for me. You know, the one where you can pick up a bottle of water and just walk out of the shop, and it just bills to your account. They use a combination of sensors and computer vision and all these kinds of things. And he says, I just want it with computer vision. Oh, by the way, computer vision is a subset of artificial intelligence; the term for video-based computer vision is IVA, Intelligent Video Analytics. So, purely with IVA. And I was like, me? Just me? Just you. And I was like, Amazon is a very big company with thousands of employees, and their business is pretty much gathering data. And he was like, yeah, I know. You see, they didn't come up with this overnight; they've probably been working on it for the past 10 years. And in the past 10 years they probably didn't have the innovation that we have today; you are really the frontier of Singapore's AI scene, I think you can do it. And I'm like, right. And you just want it with computer vision and nothing else? And he was like, yeah. I was like, nuh-uh, nuh-uh, I'm not going to do that. I mean, even so, think about the user experience of it: let's say you have a camera pointed there, right in front of you; there are limitations with cameras, I'll talk about that later for IVA. But even so, how do you go into a meeting about AI technology when the people there don't know how to speak AI? So, the thing about how you speak AI:
as an AI practitioner, I feel that it's your responsibility to be the one actually innovating the products. You cannot rely on the person within the company to say, build me this, because most of the time, when you ask them, they'll be like, oh, you sure you want to ask me what I want? I want the moon. They have a very conceptual idea of AI, but when it actually comes to deployment, they have no clue how it works. They think it's some sort of magic: as long as you've seen it in a movie, it can be real. Like any technology, it has its own limitations. So the core essence of speaking AI is that you are really looking at the features. For example, if I have a model that is trained on facial recognition, something simple that we are familiar with, being able to recognize a person's face, it is not the same as a facial expression model. So yes, it's still looking at the face, but facial recognition is pretty much saying: hey, I am able to compare your face with the criminal database; I am able to distinguish that your eyes are here, your nose is here, your mouth is here, and roughly how big they are. And maybe, on top of facial recognition, I can also, because the average distance between two people's eyes is more or less the same, use that as a scale and actually estimate your height. It might not be the most accurate estimation, but that's a bonus feature in terms of height. That one is not really deep learning; it's more like data processing, which is what AI is pretty much about anyway. So the actual models: facial recognition is one model; facial expression, being able to detect how many percent happy, how many percent sad, how many percent angry, is a different model. They are two different features. So to just say, I analyse faces with deep learning or AI, is too generic. So in terms of features, to me, you should be very specific. So if, let's say, I'm a startup: if I am the end client and there are 10 different startups in front of
me that all say, I do facial recognition, how do I know which startup to go with? Who do I actually pay the money to? All 10, and then have 10 duplicates of facial recognition software? How do I actually choose and decide that this startup's facial recognition AI is better than the other one's? Of course, different companies have their own ways of judging. For me, I judge it on just two criteria. The first one is: how accurate is it? Accuracy would mean that if the camera is here and I scan from here, and I turn my face a little bit, maybe 45 degrees, is it still able to recognize my face? Or even front-on: how many percent, what we call the confidence level, how much guarantee can you give me that it is actually my face and not somebody else's face? So accuracy is a very big thing. The next one I judge on is latency. Latency is pretty much whether you can do it in real time. If you're doing facial recognition, comparing against a criminal database, and you're monitoring, say, MRT stations, with thousands, maybe hundreds of thousands, of people walking across all the MRT stations at the same time, you need to be able to do it in real time. And when you talk about latency, usually the thing that gets compromised is the quality. So, quality: the thing when you do IVA is that you have to pick your camera very carefully. The reason being that if your data input from the cameras, the pictures that you take, is already not clear, not readable, let's say the lighting condition casts a shadow, or it's a bit blurry, if your data input is already not clean, then I cannot guarantee the accuracy, or various other things. So if, let's say, I want real-time computation, in terms of latency, usually the thing that gets compromised is the size of the file, because they don't have the computational power to process such high-resolution files. And when you play with
IVAs, when you're using special machine vision cameras, you actually set the frame rate per second for the camera. Video is really just a series of photos. Your eye is a very nice benchmark: it sees at about 24 frames per second. So to be kiasu you want to set it at maybe 30 frames per second, or if you're even more kiasu you can go all the way up to 60 frames per second. That's really super high speed. And when you're taking images at 1080p or 4K, you're saving like 60 frames of 4K or 1080p resolution per second and expecting to process the whole thing in real time. So for real-time processing, for latency, this is the number one thing that gets compromised all the time: people have been trying to stinge on the quality of the input. Maybe they take fewer frames per second, maybe they just reduce the resolution. So latency, that's the other thing to judge on. That's what I judge on, at least. Just two things, and that's really enough for me to distinguish which is the better facial recognition company. Okay, back to the topic on models. Let's say I specialise, I am the world leader in facial recognition technology. Which companies, which industries should I approach? One thing you'll notice as a global trend for AI when it first started is that people obsessed over faces, you know, facial expression, facial recognition and so on and so forth. The next thing they went into, which was a very, very hot industry, was surveillance: surveillance in your home, in public areas, surveillance in real estate buildings. Then people started getting creative. They said, why not take facial recognition and facial expression and go into retail, retail analytics, and profile the consumers? So when they walk into a store you can profile things like age and gender based on facial features, all these kinds of things extrapolated from the data you get using their
own proprietary algorithms. So they've been very creative in terms of the industries they go to. By possessing one model, one feature, you can already target a lot of different industries. One thing we observed in the AI scene globally is that people tend to stick to the model they're good at and sell it to multiple industries, and with that comes the problem of a mismatch. Because if I am a client, let's say a security company like Cisco or something, I don't want just facial recognition. I want to go to a startup or a company and say: I want to monitor a lot of things about the security of this area. I want you to do body posture recognition, facial recognition, facial expression recognition, text extraction, object detection on whether they're carrying a luggage bag or a purse or something like that. But these startups are each pretty much focused on just one model, so they're all scattered all over the place. This big end client now has to do their due diligence to find out: for body posture recognition, which one is the best, who is the best, and so on. That's problem number one: it's all scattered around, and they have to do their due diligence to search globally for that specific feature. The next one is integration. When you program AI there are a few different frameworks you can use. You might have heard of TensorFlow, you might have heard of Caffe, you might have heard of Torch, and various other frameworks. Putting these together is another pain. So in terms of adoption of AI technology, there is definitely a lot of hype, but give it maybe another one or two years before people are really ready to adopt. Right now people are still excited about IoT; it takes a while to catch up. I'll give you some examples of things I've come across in terms of features. The very beautiful thing about deep
learning is that you can custom train models. If you recall, Singapore had a rat problem last year at Bukit Panjang. After the whole Bukit Panjang rat problem, the rats moved to Bishan, and ST Engineering contacted us and said: okay, we are able to collect data on the rats in the gutters with thermal vision cameras. You hook them under the drain cover and you can actually collect images of the rats and things like that. But I'm not going to manually look through the images to count, okay, this is rat number 1, this is rat number 2, for two or three weeks' worth of data. The goal was to identify the tunnel most frequently used by the rats, and deploy the poison there, because the poison is very expensive, so it's better if it's targeted at the tunnel the rats use most, and also at what time. So they installed the thermal vision cameras, we took the images, like 100,000 images, and trained it to recognise that that is a Singapore rat. Singapore rat, not German rat. One of the features we based it on was maybe tail length, the average outlook. You see, the thing about AI is that you cannot just say "identify a rat" and have it magically work; you need to actually train a model for it. That's what we do. If we have time I'll go through a little bit of the technical side of how something gets trained. The thing is that you feed it with a lot of data of rats. One problem about data is that when you feed it an image, for at least the first batch of images, you must label it. Within the image you must actually say: okay, this thing here is a rat, and it is not two rats together. You cannot assume the computer knows that that is one rat; you must label it and go, okay, this is a rat, and that is another rat. So just taking pictures alone, yes, it is very good to have clean and clear images, but the next problem is how do you label the images? So for 2 or 3 days you are
just staring at 100,000 images of rats. Not very fun, but at least they are thermal images, so it is not that bad. So you are labelling the images, and we can do custom trained models. Let me give you another example, something fun: Singapore Zoo. We were approached by Singapore Zoo to do this project, we are still in talks about it, on how to differentiate Kiki the monkey from Coco the monkey from Kakak the monkey. The idea is that behind each monkey there is a story of how they got rescued and rehabilitated. You print it out on a nice card and go like, oh, Kiki the monkey was separated at birth, and Coco the monkey is the brother of Kiki the monkey. But is that Kiki, or is that Coco? Visitors are unable to differentiate them. So we are going to work with them to train it to recognise that, okay, this one is Kiki, this one is Coco, and that one is Kakak. Initially, as a proof of concept, they have this mobile app: you take a photo and you are able to distinguish Kiki, Coco or Kakak the monkey. And if everything goes well, we have a transparent LCD screen. Say you have a raccoon enclosure with like 10 raccoons running around; if one runs in front of that transparent LCD screen, you'll be like, zoop: this is Emily the raccoon, Emily was rescued at birth from a forest fire in Australia. So it's pretty fun to be able to pick up the nuances that the human eye can't; that's another plus for AI. Another thing, let's talk about more economic cases, in terms of industrial manufacturing, the Industry 4.0 movement. You may have heard about visual inspection. Once you manufacture a component, let's say we have a client who manufactures parts for Boeing, they must make sure that every single thing is accurate. You cannot afford mistakes, it's Boeing. Or even if it's electronics with 100 chips on a board, there's this poor lady in the factory just inspecting: oh,
is that soldered, okay, cool, and then it just keeps coming in real time. So there's pressure: if she zones out for a bit, what happens to the accuracy of that assurance? Just because you have a person there, it's not very assuring. With computer vision and deep learning we are able to actually pick these things up. So, very funny story: I was at the factory of one of the biggest silicon manufacturers in Singapore, they are listed. They walked me around the factory and they were like, okay Annabel, wherever you can see your AI technology being implemented, please stop me. And I was like, sure. Because right now there is already a machine that does that; for PCB inspection it's not new technology, honestly. But it costs $700,000, so it's usually reserved for the manufacturing line that gets the highest volume per day, or the highest value. So they were like, okay, wherever you see it can be implemented, just stop me, because my board only costs $2,000. $700,000 versus $2,000. So we stopped at one station, and the guy was like, very good, that means I don't need these two people. And the two of them, who were inspecting the PCB board, just looked up at us, and I was like, what?
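As an aside, the core idea of automated visual inspection can be sketched very simply. This is a hypothetical toy, not how the $700,000 machine or a trained deep learning model works (real systems learn defect patterns from labelled data); it only illustrates the "compare against a known-good reference" intuition:

```python
# Toy visual inspection: compare a captured board image against a
# known-good "golden" reference and flag pixels that deviate by more
# than a tolerance. All values here are made-up grayscale intensities.

def inspect(board, golden, tolerance=10):
    """Return (x, y) coordinates of pixels that deviate from the reference."""
    defects = []
    for y, (row, gold_row) in enumerate(zip(board, golden)):
        for x, (pixel, gold_pixel) in enumerate(zip(row, gold_row)):
            if abs(pixel - gold_pixel) > tolerance:
                defects.append((x, y))
    return defects

# Toy 3x3 "images": the centre pixel should be dark solder but is not.
golden = [[200, 200, 200],
          [200,  50, 200],
          [200, 200, 200]]
board  = [[198, 201, 200],
          [200, 180, 200],
          [202, 200, 199]]

print(inspect(board, golden))  # the centre pixel (1, 1) is flagged
```

A deep learning system replaces the pixel diff with a model that tolerates lighting changes and part-position jitter, which is exactly why the training and labelling effort described earlier is needed.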
and I was like, no lah, no lah, we have deployments elsewhere, not Singapore, there's the foreigner ratio and everything, and I was like, right. A lot of times people think AI is going to take over jobs, but I think it's going to complement us in some ways, because we are moving to a higher level of education. No kid is going to go, mommy, when I grow up one day I want to be a PCB inspector. They are facing a hiring crisis: even the young generation growing up today don't want to do these kinds of linear jobs, they want to do something more glamorous, so manufacturers are facing the crunch too. We also complement workers in terms of the accuracy they perform at, because the thing about AI is that it can tell you what's wrong, but to fix the issue you still need people on the ground to actually unsolder the component and put it back, or set it aside and do the fixes. Or, say in security, to detect that that's a terrorist and then send a man down, you still need a physical guy. Speaking of terrorists, something very interesting in terms of security and AI. We talked about models, so let's say the model is: I am able to detect that that is an object, that this man's pose is like that, and this is a pen and this is a mouse. Very often when you talk to the end client, they'll be like, okay, very good, put all these models together and we'll use it for terrorist detection. So, do you want us to do the whole thing for you? Because it feels like you want us to do the whole thing. And it comes to a point where they say, no, I don't want you to do the whole thing, because it's very controversial. How do you classify someone as a terrorist? Just because of skin colour, just because the guy is a darker shade, maybe because he has cloth all over his face? They're not going to tell you. Or even for consumer profiling: you have a thousand people walking in front of the billboard, how are you actually
able to say that he is the one most likely to purchase my items, or something like that? All this secret sauce for the business logic, they're not going to tell it to you. So even as an AI startup, one of the problems is that you don't know the problem statement of the industry. How much time do I have left? 15 minutes, okay, cool. Are you still okay? Can I go into a little bit more of the techie side of things? Okay? Yes, okay. Do you have internet here? Wireless@SG, okay. I'm just going to very quickly tell you how AI works on the tech side. I'm not going to delve very deep into the various neural net models, but at least you'll have an idea of how it works. Okay, my hotspot, my phone is on, okay, cool. Now, if my gestures haven't been exaggerated enough, I'm going to make more exaggerated gestures, okay? I'll do it facing you. So you have your input layer, and then you have your output layer. The idea is that I'm going to feed it some information and get something meaningful out of the whole thing. So what happens in between? That is what we call the hidden layer. So: input layer, hidden layer, output layer. What we are looking at right now is a single-layer neural network: you feed it with a layer of inputs, some processing happens, and then output. Why do I say layer and not thing? Because a layer can actually be several things. Let's say I want to make the perfect laksa. I run a laksa shop, and I'm like, okay, what are the ingredients for laksa? Coconut milk, and then you have curry, and then various other things. At the end of it, I want the recipe for the perfect laksa that everybody likes; that's my final output. The hidden layer does the processing. Now we'll go one step deeper, bear with me. You have weights assigned to the ingredients, and weights represent importance. So let's say for coconut milk, okay, we realise
that the more coconut milk we add, the happier people are, so maybe coconut milk has a weightage of, let's say, 10 points, plus a combination of, let's say, 2 units of curry sauce, 3 units of lime leaves, etc. You now have an aggregation: coconut milk plus lime leaves plus curry sauce. But that's only use case scenario number 1. What if I find this the most appealing, but somebody else likes something else, which is also a sum of products of the various ingredients, just with different emphasis on different units? So now in the hidden layer you have different possible outcomes. It's kind of like, okay, there's this scenario here, and that scenario there. From the hidden layer you then compare with the final one, the final laksa that actually comes out. So, okay, now I have the perfect business model for the best laksa recipe. Using this, I start selling my laksa, and then I realise, based on customer feedback, it's not a 100% hit: maybe I'm selling 70% of the laksa, not 100%. So I need to make some adjustments. There is a difference between the actual result, the one that actually happens, and my theoretical one. That difference is what we call the cost function. Taking this cost function, you can go back, trace it back, and adjust the importance, the weights of the various ingredients. You keep adjusting back and forth, and this is where the learning process happens. Taking it to the computer's context, we are not looking at just one single layer. For facial expression, maybe one layer is the eyes, one layer is the nose, or maybe one vaguely detects the edges of the face, and beyond the edges of the face you look at the edges of the eyes, features like that. So there are tons of layers, and by comparing against the actual data you are expecting to get, back and forth, you are able to
adjust this process automatically without somebody actually doing it. From there comes the idea of supervised learning and unsupervised learning; you can go and search up a little bit on it. Unsupervised learning is very interesting, but a lot of the time what gets deployed is what we call rule based. Just search up the different layers of artificial intelligence, as in the different kinds of neural networks. One thing I wanted to show you is the different models available out there. Neural networks can get very complicated: every day, or at least every other day, or every month, new kinds of neural network models are forming. What I showed you is a very linear one: input layer, hidden layer, maybe 2 hidden layers, maybe 10, maybe 100. Very recently there have been some very disgusting yet beautiful, in terms of artistic value, neural network models. There's this very famous thing, the Neural Network Zoo. Previously I was talking about this neural network that looks something like that: you have your input layer, your hidden layer, and then your output layer. This is like my ingredients for laksa, and with the arrows I am assigning weights. You see, pointing to this hidden layer, it's a combination of all 3; it might not necessarily be all 3, maybe just 2 for some of them. Based on all this you get output: it can be 1 output, it can be 2 outputs, it really depends. And of course you get more complicated ones; it depends on the kind of information you want to process. So this is a very linear one. The Zoo is pretty interesting: you get fun ones that look like that, and that. Depending on the kind of neural network model you are using, you can really push the boundaries. We have this thing called a Generative Adversarial Network, GAN. Okay, so GAN is something very interesting. A use case application for GAN: it's kind of like I am just going to
feed in the information: there is a girl in a black room holding a pen. And based on this text saying there is a girl in a black room holding a pen, it is able to come out with a visualisation of a girl in a black room holding a pen. What kind of pen? It will just pick and choose one. So depending on the kind of neural network you use, you are able to come out with, you know, historical analysis, various other things. GAN, I think, will be used very widely in VR and gaming initially, and then the joke we have in the AI scene is that GAN is only useful for one thing: the porn industry. But anyway, keep that one to yourself. Okay, I think I am running out of time, but I will very quickly summarise. AI is definitely a buzzword, but don't let it over-excite you. The hype still precedes the actual technological innovation that is happening. You hear a lot of talk about autonomous cars, a lot of talk about what it can do, but for the actual deployment there are a lot of barriers we need to overcome first. The most basic barrier is education, knowledge on the part of the people who are going to be implementing it. ("Your attention please: help us make the library a conducive place for learning for all our users.") So, yeah, there are a lot of barriers to entry in terms of deployment of AI. It's one thing to be an enthusiastic hobbyist who wants to learn about AI, but then you ask: what can I actually do with this technology? If you are getting started on AI, I would recommend the approach that burns your wallet the least; you want to invest as little money as possible until you are comfortable. Talking from a hobbyist level: go and search for things like the Google Vision API. These are ready-made services you can just integrate within your code, for things like text
processing, natural language processing, video analytics, object detection, all these kinds of things. It's really simple, it's just an API. Once you are comfortable playing with the API, how it works and how it doesn't work, then play with the cameras, because now that you are comfortable with the software, you want more control over the data you feed into it. I'm assuming that when you play with this software, the Google Vision API and various cloud API services, you are using off-the-web images that you find; you are not really generating your own data, and even if you do, it's at a very basic level. So I want you to explore collecting your own data. In this phase you'll ask yourself, if you are looking into vision analytics: do I need an IP camera, do I need a web camera, or do I need to spend $1,000 on a machine vision camera? How far do you want to push your limits? Once you are comfortable with that, I want you to look at building your own models. When you build your own models, you don't want to be so reliant on a second party like Google or Facebook; you actually want to try building your own neural net, your own model. That is a very challenging part, and it really helps if you have a strong foundation in C++ and Python. Then, once you are comfortable with that, you explore the hardware architectures for training. Initially, for all this training, I'm assuming you'll use Amazon Web Services and the like, but once you're comfortable you can look at building your own custom computer, or purchasing a device specifically for training, or maybe specifically for deployment. Edge computing devices are pretty much deployment devices; to train, you would actually buy an NVIDIA Titan X or some other graphics card and build your own computer. That's really a heavy investment, so don't go there yet if, let's say, you want to get
started on AI. When you build your own models you are pretty much quite up there already; you can actually consider doing a startup based on AI, a software startup. There are lots of possibilities. Right now you'll see that the security industry is very saturated, so I would urge you to be more creative and explore the various industries. For example, the retail industry: if everyone picks up this object and goes, oh, very nice, this mouse is very nice, and then they see the price tag and go, oh shoot, it's $2,000, I'm not going to buy it, all this analytics is not captured within the system. How do you program AI software to capture it? First I need to be comfortable with an object detection model, to actually identify that this is a mouse. The next model I need is to identify that this is a person approaching the mouse, and how many people. Once you are able to identify that that's a person, it's very easy to do people counting, to say that, okay, there are 3 people around this mouse at this certain time. So: object detection, people detection. If you want to push it one step further, you can do profiling of the person, their reaction as they actually look at the mouse. From there it's really just data extrapolation and basic data analytics, things like comparing this data with the inventory system to see whether it matches up: if, let's say, 100 people linger around this mouse in one hour but there's only one sale, is there something wrong? That's where the business logic comes into play. If you want, you can check out the many resources for deep learning online; the most famous would be the machine learning course by Andrew Ng. There are a lot of resources online, you can just do a quick Google search. I would advise you not to skip the basics; it does help if you have a mathematical
understanding of how it works, and from there it's going to help with your entire deep learning process, because if your basics are not strong, you're going to have a hard time keeping up with the developments in AI, and it's moving very, very fast. So a strong foundation will definitely help. So yeah, thank you.
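As a closing note, the laksa story earlier (weighted sum of ingredients, cost function, trace back and adjust the weights) can be sketched in a few lines. This is a minimal toy with made-up numbers, a single neuron trained by gradient descent, not any production system described in the talk:

```python
# Toy version of the laksa example: one "neuron" computes a weighted sum
# of ingredient amounts, we measure the cost (squared error against the
# target score for the perfect laksa), and we trace the error back to
# nudge the weights. All numbers are invented for illustration.

ingredients = [2.0, 3.0, 1.0]      # units of coconut milk, curry, lime leaf
weights     = [0.5, 0.5, 0.5]      # initial importance of each ingredient
target      = 10.0                 # the score the perfect laksa should get
learning_rate = 0.01

def predict(w, x):
    """Weighted sum of inputs: the 'hidden layer' processing."""
    return sum(wi * xi for wi, xi in zip(w, x))

for step in range(200):
    output = predict(weights, ingredients)
    error = output - target                      # basis of the cost function
    # Gradient of squared error w.r.t. each weight is 2 * error * input:
    weights = [w - learning_rate * 2 * error * x
               for w, x in zip(weights, ingredients)]

print(round(predict(weights, ingredients), 2))   # converges close to 10.0
```

A real network stacks many such layers (edges, eyes, nose in the facial expression example) and lets backpropagation carry the cost back through all of them, but the adjust-and-compare loop is the same idea.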