From the Fairmont Hotel in the heart of Silicon Valley, it's theCUBE, covering When IoT Met AI: The Intelligence of Things. Brought to you by Western Digital.

Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in San Jose, California at the Fairmont Hotel at the When IoT Met AI show. It's all about the intelligence of things. A lot of really interesting startups here. You know, we're still so early days in most of this technology, and facial recognition gets a lot of play. Iris recognition, gotta get rid of these stupid passwords. We're really excited to have our next guest. He's Modar Alawi, the CEO and founder of Iris. And it says here, Modar, that you guys are into face analytics and emotion recognition. First off, welcome.

Thank you, thank you so much for having me.

So, face analytics. I'm a Clear customer, I love going through Clear at the airport. I put my two fingers down, I think they have my iris, they have different things. But face, what's special about the face compared to some of these other kinds of biometric options that people have?

So yeah, we go beyond just the biometrics. We do pretty much the entire suite of face analytics, anything from eye openness, age, gender, emotion recognition, head pose, gaze estimation, et cetera. So it is pretty much anything and everything that you can derive from the face, including non-verbal cues: yawning, head nod, head shake, et cetera.

That is a huge range of things. So clearly, just the face recognition to know that I am me is probably relatively straightforward: a couple of anchor points, does everything measure up and match the prior? But emotion, that's a whole different thing. Not only are there lots of different emotions, but the way I express my emotion might be different from the way you express the very same emotion, right? Everybody has a different smile. So how do you start to figure out the algorithms to sort through this?

Right, so you're right, there are some nuances between cultures, ages, genders, ethnicities, and things like that. Generally, though, they've been universalized over the last three and a half decades by the scholars, the psychologists, et cetera. What they actually have had a consensus on is that there are only six universal emotions plus neutral: joy, surprise, anger, disgust, fear, sadness, and neutral.

Okay, and everything is some derivation of that. You can kind of put everything in a little bucket. So think of them as the seven primary colors.

And then everything else is a derivative of that, exactly. The other thing is, emotions are hardwired into our brain. They happen in a 15th to a 25th of a second, particularly micro expressions. And they can generally give away a lot of information as to whether a person has suppressed a certain emotion or not, or whether they are thinking about something negatively before they respond positively, et cetera.

Okay, so now you've got the data, you know how I'm feeling. What are you doing with it? It must tie back to all types of different applications, I would say.

That's right, there are a number of applications. Initially, when we created this, what we call enabling technology, we wanted to focus on two things: one, what type of application can have the biggest impact, but also the quickest adoption in terms of volume. Today we focus on driver monitoring AI as well as occupant monitoring AI, so we focus on autonomous and semi-autonomous vehicles. And the second application is social robotics.
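To make that taxonomy concrete: a seven-way classifier (six universal emotions plus neutral) emits per-frame scores, and a micro expression shows up as a very brief non-neutral run, on the order of a 15th to a 25th of a second. Here's a minimal sketch of that idea in Python; the frame rate, duration threshold, and the `per_frame_scores` input are illustrative assumptions, not Iris's actual API.

```python
import numpy as np

# The six universal emotions plus neutral, as discussed above.
EMOTIONS = ["joy", "surprise", "anger", "disgust", "fear", "sadness", "neutral"]

def dominant_emotion(scores):
    """Pick the highest-scoring class from a 7-way classifier output."""
    return EMOTIONS[int(np.argmax(scores))]

def find_micro_expressions(per_frame_scores, fps=30, max_duration_s=1 / 15):
    """Flag very brief non-neutral runs as candidate micro expressions.

    per_frame_scores: sequence of length-7 probability vectors, one per frame.
    A run of non-neutral frames lasting no longer than ~1/15 s (about one or
    two frames at 30 fps) is treated as a micro expression. Consecutive
    non-neutral labels are treated as a single run, which keeps the sketch simple.
    """
    labels = [dominant_emotion(s) for s in per_frame_scores]
    max_frames = max(1, round(max_duration_s * fps))
    events, start = [], None
    for i, label in enumerate(labels + ["neutral"]):  # sentinel closes any open run
        if label != "neutral" and start is None:
            start = i
        elif label == "neutral" and start is not None:
            if i - start <= max_frames:
                events.append((start, i, labels[start]))  # (first frame, last frame + 1, emotion)
            start = None
    return events
```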
But in essence, if you think of a car, it's also another robot, except that social robots are AI engines, sometimes in the form of an actual physical robot, that communicate with humans, hence the word social.

So I can see, in kind of a semi-autonomous vehicle or even a non-autonomous vehicle, you want to know if I'm dozing off, and I think some of those things have been around in a basic form for a little while. But in an autonomous vehicle, what is impacted by my emotion when I'm really a passenger, right? Not necessarily a driver, if it's a level five.

That's right. So when we talk about autonomous vehicles, I think what you're referring to is level five autonomy, where a vehicle does not actually have a steering wheel or a gas pedal or anything like that. And we don't foresee that those will be on the road for at least another 10 years or more. The focus today is on levels two, three, and four, and that's semi-autonomy. Even for fully autonomous vehicles, you will see them come out with vision sensors, or vision AI, inside the vehicle, so that these sensors, together with the software, can analyze everything that happens inside and cater services toward what is going to be the ridership economy. Once the car drives itself autonomously, the focus shifts from the driver to the occupants. As a matter of fact, it's the occupants that will be riding in these vehicles, or buying them, or sharing them, not the driver. And therefore all of these services will revolve around who is inside the vehicle: by age, gender, emotion, activity, et cetera.

Interesting. Of all these things, age, gender, emotion, activity, what is the most important, do you think, in terms of your business and kind of where, as you said, you can have a big impact?

We can group them into two categories. The first one is safety, obviously: eye openness, head pose, blinking, yawning. All of these things are of the utmost importance, especially focused on the driver at this point. But then there are a number of applications that relate to comfort and personalization, and those could potentially take advantage of the emotions and the rest of the analytics that we provide.

So where are you guys, Iris, as a company? Where do you have some installations, I assume, out there? Are you still early days? Where are you in terms of the development of the company?

Oh, we have quite a mature product. What I can disclose is we have plans to go into mass production starting in 2018. Some plans for Q4 2017 have been pushed out, so we'll probably start seeing some of those in Q1, Q2 2018. We made some announcements earlier this year at CES with Toyota and Honda, but then we'll be seeing some mass volume starting 2019 and beyond.

Okay. And I assume you're a cloud-based solution?

We do have that as well, but we are particularly a local processing solution.

Oh, you are?

It's offline AI, so think of it as an edge computing type of solution.

Okay, and then do you work with other people's sensors and existing systems? Are you more of a software component that plugs in, or do you provide the whole system, in terms of, I assume, cameras to watch the people?

So we're a software company only.

Okay.

However, we are hardware-, processor-, and camera-agnostic. And of course, for everything to succeed, there will have to be some component of sensor fusion, and therefore we can and do work with other sensor companies in order to provide a higher confidence level for all the analytics that we provide.
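The safety signals named above, eye openness, blinking, yawning, are typically rolled up into a drowsiness heuristic; PERCLOS (the fraction of time the eyes are mostly closed over a rolling window) is a common one. Below is a minimal sketch of that idea, assuming a hypothetical per-frame eye-openness score in [0, 1] from some face-analytics model; the thresholds are illustrative, not any vendor's actual implementation.

```python
from collections import deque

class DrowsinessMonitor:
    """Toy PERCLOS-style check: fraction of recent frames with eyes
    mostly closed. A common driver-monitoring heuristic, sketched here
    under assumed thresholds."""

    def __init__(self, fps=30, window_s=60, closed_threshold=0.2,
                 perclos_alarm=0.15):
        self.window = deque(maxlen=fps * window_s)  # rolling ~60 s of frames
        self.closed_threshold = closed_threshold    # openness below this counts as closed
        self.perclos_alarm = perclos_alarm          # alarm if >15% of frames are closed

    def update(self, eye_openness):
        """eye_openness: per-frame score in [0, 1]; returns True to alert the driver."""
        self.window.append(eye_openness < self.closed_threshold)
        if len(self.window) < self.window.maxlen:
            return False  # not enough history yet to judge
        perclos = sum(self.window) / len(self.window)
        return perclos > self.perclos_alarm
```

Running this on-device, frame by frame, with no round trip to a server, is the kind of local, edge-computing processing described in the interview.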
Pretty exciting. So is it commercially available? Are you GA now, or not quite yet?

We will be commercially available. You'll start seeing it on the road, or on the market, sometime early next year.

Sometime early next year. All right, well, we'll look forward to it. Thank you so much. Very, very exciting time.

Thank you.

All right, he's Modar Alawi, and he's going to be paying attention to you and making sure you're paying attention to the road so you don't doze off. I'm Jeff Frick, you're watching theCUBE. We're at When IoT Met AI, the Intelligence of Things, in San Jose, California. We'll be right back after this short break. Thanks for watching.