Okay, good. Hi, my name is Uli. Just in case, this is my Twitter handle; this is where you can reach me best. This is my first time talking at this event, and I'm very happy to be here. I know I have exactly five minutes; I promise it might even be less. I'm a geek. I've worked for a couple of companies in the past that most people know; you'll see them if you add me on LinkedIn. I've started an initiative called Soton Kitchen, where we facilitate the conversation about technology and the impact it has on the way we communicate, work, and learn. And I also run the Redis user group and meetup in Singapore. The hack I want to show you today is something I built a few months ago when I was working at Microsoft. Yes, I'm a geek. At the same time, I like food, I like architecture, I like taking photos, and I travel a lot for work. We talk a lot about neural networks and artificial intelligence, and most of the time, most of the time, it's about marketing things. It's similar to blockchain these days, where supposedly blockchain is doing everything. I wanted to be a little more practical. So I discovered this thing, and I wanted to know: what if I could have a neural network go through all of these? It's around a thousand photos now that I have on my Instagram feed. What if it could look at each individual photo and distill intelligence from it? When you go through my stream, you see, okay, this guy's a foodie. Would the neural network come to the same conclusion? And I'll give you the answer right away. This is not something I built with PowerPoint; this is something a neural network built for me: a word cloud. It's pretty obvious. Big words mean a large representation in the data set; small words, like "boat" for some reason, a small one. For some reason, the color red is very prominent, as you can see up here. But it's obviously about food, water, city, streets. Pretty incredible. You'll see. I mean, I know most people in the audience are technical.
So I work with a lot of people who are new to the concept of APIs; I'm sure you're not. So: computer vision, neural networks. Thirty years ago, chess computers were considered artificial intelligence; now we just use them, because we've figured out how they work. Neural networks, I can't say that I know how they work myself. But the good news is that we just need an API call, and then we can basically send a task to this thing. This is a cURL request, a simple example: you get a key, you upload the photo, and then the machine gives you something back. It's basically a reverse Google image search. You search for pizza, and Google shows you all kinds of photos of pizza. We do it the other way around: here's a photo, tell me what you see. So, pizza, right? You only send the photo. There is no EXIF data, no meta information, no file names; it's really just the photo itself. You send that to the API. "A close-up of a pizza": that's the text the machine gives me back. It also gives me tags: pizza, food, these kinds of things. But you also see there is a confidence score from 0 to 1, so you can say 0.92 is pretty confident. A cup of coffee, same thing: 85% confident. "A wooden table next to a window." This is awesome, right? How does it know that? I don't know. "A sunset over the ocean", 80% confident. But then again, look at this one. This is in the Chinese Garden, right? "A bunch of bananas", at 37%. I decided that's not good enough for me. So I draw the line: the machine has to be at least 80% confident. Then I take all that data (that's already the last slide) and do a bit of cleansing. I'm a Python guy, but in this case it was really just a bit of shell scripting and a bit of work in vi: grep, sed, awk. I put it into a word cloud composer, and that was the result. 27 seconds.
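The post-processing described here, keeping only labels the model is at least 80% confident about and then tallying them for the word cloud, can be sketched in Python. The response shape below mirrors what the talk describes (tags plus a 0-to-1 confidence score per photo); the exact field names are assumptions, not the real Microsoft schema, and the sample data is made up:

```python
from collections import Counter

# Example responses in the shape described in the talk: each photo gets
# tags, and each tag carries a confidence score between 0 and 1.
responses = [
    {"tags": [{"name": "pizza", "confidence": 0.92},
              {"name": "food", "confidence": 0.90}]},
    {"tags": [{"name": "coffee", "confidence": 0.85},
              {"name": "cup", "confidence": 0.81}]},
    {"tags": [{"name": "banana", "confidence": 0.37}]},  # too uncertain
]

THRESHOLD = 0.80  # "the machine has to be at least 80% confident"

def tally_tags(responses, threshold=THRESHOLD):
    """Count tag frequencies across photos, dropping low-confidence guesses."""
    counts = Counter()
    for photo in responses:
        for tag in photo["tags"]:
            if tag["confidence"] >= threshold:
                counts[tag["name"]] += 1
    return counts

counts = tally_tags(responses)
# The 37%-confidence "banana" is filtered out; the remaining
# frequencies are what a word cloud composer would consume.
```

The talk did this step with grep, sed, and awk instead; the filtering logic is the same either way.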
What I wanted to say is that it was really difficult for me to choose the hack I wanted to show you, because I do a lot of these kinds of things. Maybe you want to remember hacks.sortum.io, and you'll find some more there. And I'll give you back 30 seconds. Thank you. Any questions? Yes: what was the ratio of photos that scored 80-plus percent? Oh, wow. I've not thought about this; I haven't counted, so I don't know. But I'd guess it was maybe around 60% where it was at least 80% confident. And if you look at my stream, it's pretty accurate. I never have funny photos on my Instagram, and I never have party photos. It's typically a little more aesthetic: scenery, food, buildings, street life. So no real oddballs. Yes. That is correct, that is correct. And you have the same from Google, and the same from IBM Watson. I think if you sign up, first of all, you get something like $200 of credit for free, but they also have a free tier: if you stay below 100 requests per day, you don't have to pay even after the trial period. I think that's where they are now; I don't have the updated numbers. But in essence, you can use it for developer purposes for free, to a certain extent. And there is much more: there is language recognition, and you can now even do handwriting recognition. Somebody writes something by hand, and the service does OCR on it. It's pretty creepy sometimes what you can do with these things. Yes: how specific do the categories get? Does it recognize that it's, say, the Marina Bay Sands building, or does it just say "building over the water"? That's possible. You can see it when you look at the API request. And the interface changes; that's why, professionally speaking, I work for a company that does API management. The versions of the API have always changed. You see that visualFeatures equals Description. So I say: dear neural network, just give me the description and the tags.
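The request shape mentioned here, asking the service for just descriptions and tags, looks roughly like the sketch below. The endpoint URL, version path, and header names follow Microsoft Computer Vision conventions of the time, but treat them as assumptions, since, as the talk notes, the API versions keep changing. No network call is made; the pieces are just assembled:

```python
# Sketch of building the Computer Vision "analyze" request from the talk.
# Endpoint and key are placeholders / assumptions, not working credentials.
endpoint = "https://api.cognitive.microsoft.com/vision/v1.0/analyze"  # assumed
params = {"visualFeatures": "Description,Tags"}  # "just give me the description and the tags"
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",   # you get a key when you sign up
    "Content-Type": "application/octet-stream",  # raw photo bytes, no EXIF needed
}

# With the `requests` library you would then send the photo itself, e.g.:
#   resp = requests.post(endpoint, params=params, headers=headers,
#                        data=open("photo.jpg", "rb").read())
#   resp.json()  # description captions with confidence, plus tags
```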
If you want, the requests get a little more pricey, but then you can say: I want to identify landmarks, or celebrities. Or you can even have filters: I'm giving you a photo, tell me whether it's viewable for all audiences or adult only. So they become more and more specific. And now there is even a neural network that you can train yourself, in the browser; you don't have to be a data scientist. You say: I upload 50 photos of a specific butterfly, and I click "train the model". Then I can send API requests and say: here's a butterfly. I know it's a butterfly, don't tell me that. But is it that specific species of butterfly? And you can do that yourself, without coding. Pretty advanced. Yes? No, not if you use the public service, in this case from Microsoft. No, there is not. The interesting part is that I ran the same set of images about half a year later and got much better results. So you're basically agreeing to submit the photo to that gigantic pool of, it's not Siri, what's it called? Yeah, Cortana, right? And I think she learns by people using it. However, this photo, I mean the 37% one, will probably not contribute to the learning, because no information or confirmation was sent back to it. I think that's exactly the point: there is no feedback loop. So they keep adding more images and optimizing. Yeah, yeah. But there are ways for you to use the API to train it. Say I take photos of every individual one of you, maybe five different photos each, and I send the request to a private instance of the service and say: this is Seb. Then the next time I upload a photo, it can tell me: I'm 80% sure that this is the guy. But that doesn't mean I'm exposing him to the World Wide Web; it's just for my API key. So there are some ways of training the model. The new version will even tell you that those are cave and dish.
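The private-training idea described here, upload a few labeled photos per person, then ask the service who is in a new photo, boils down to a train-then-predict loop. A toy stand-in follows: real vision services classify learned image embeddings, whereas this sketch uses made-up two-dimensional feature vectors and a nearest-centroid rule, so every name and number in it is purely illustrative:

```python
# Toy nearest-centroid "model": train on a few labeled feature vectors,
# then predict the label of a new one. Real services work on learned
# image embeddings; the vectors here are invented for illustration.

def train(samples):
    """samples: {label: [feature vectors]} -> {label: centroid vector}."""
    return {
        label: [sum(dim) / len(vectors) for dim in zip(*vectors)]
        for label, vectors in samples.items()
    }

def predict(model, vector):
    """Return the label whose centroid is nearest to `vector`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(model, key=lambda label: dist(model[label], vector))

# "Five photos of Seb, sent to a private instance, labeled 'this is Seb'":
model = train({
    "seb":   [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]],
    "other": [[0.1, 0.9], [0.2, 0.8]],
})
predict(model, [0.82, 0.18])  # -> "seb"
```

The point of the private instance in the talk is the scoping, not the algorithm: the labels exist only under your own API key.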
I'm sorry? The new version will even tell you that those are cave and dish. Probably, probably, probably. It's not a new version; it's continuous. Yeah, yeah, it is. Yes. And you can even do motion detection in a video. I've built another demo that's basically a photo booth on my Mac: I look into the video stream, it takes snapshots every few milliseconds, renders green dots over my eyes and my mouth and a square over my face, and takes a guess at my gender, my age, and whether I wear glasses. So I take the glasses off, and it says: no glasses. It's almost real time. But that's for another hack. Thank you.