From the Fairmont Hotel in the heart of Silicon Valley, it's theCUBE, covering When IoT Met AI: The Intelligence of Things. Brought to you by Western Digital.

Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in downtown San Jose at the Fairmont Hotel at an interesting little show called When IoT Met AI: The Intelligence of Things. A lot of cool startups here along with some big companies. We're really excited to have our next guest, who's taking a little different angle. He's Scott Noteboom, the co-founder and CEO of a company called Litbit. First off, Scott, welcome.

Yeah, thank you very much.

Absolutely. So for folks that aren't familiar, what is Litbit? What's kind of your core mission?

Well, probably the simplest way to put it is this: we enable our users, who have a lot of experience in a lot of different areas but whose expertise may not be coding software or even being able to spell the word algorithm from a data science perspective, to use an easy interface to create their own Siri or Alexa. It's an AI based on their own subject matter expertise that they can put to work in a lot of different ways.

So there's often a lot of talk about tribal knowledge and how tribal knowledge gets passed down so people know how to do things, whether it's with new employees or, as you were talking about a little off camera, just remote locations. And there hasn't really been a great system to do that. So you're really attacking that not only with the documentation, but by making the AI an actionable piece of software that can then drive machines, using IoT to do things. Is that correct?

That's right. So, for example, an AI that I've been passionate about, because I ran data centers for a lot of years, is DAC.
So DAC's an AI that has a lot of expertise in how to run a data center, fueled and mentored by a lot of the experts in the industry. So how can you take DAC and put DAC to work in a lot of places? The people who need the best-trained DAC aren't people who are building apps; they're people who have their own area of subject matter expertise. We view these AI personas as something that can be put to work. It's kind of the apps of the future, where people can subscribe to personas that are built directly by the experts, which is a pretty pure way to connect AIs with the right people and put them to use.

So there's kind of two steps to the process. How does the information get from the experts into your system? How does that training happen?

So here's where we spend a lot of attention. A lot of people assume an AI lives in a virtual, logical world that's disconnected from the physical world. I always ask people to close their eyes and imagine their favorite person in the world, the one that loves them. When they picture that person, or hear that person's voice in their head, that's actually a very similar virtual world to the one an AI works in. It's not the physical world. What connects us as people to the physical world are our senses: our sight, our hearing, our touch. What we've done, using IoT sensors, is combine those sensors with AI to turn sensors into senses, which gives the AI the ability to connect in really meaningful ways to the physical world. Then the experts can teach the AI: this is what this looks like, this is what this sounds like, this is what it's supposed to feel like. If it's greater than 80 degrees in an office location, it's hot. We're really teaching the AI to form thoughts based on a specific expertise, and then to take the right actions when those thoughts occur.
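Litbit hasn't published its platform internals, but the "sensors into senses" idea described here can be sketched in a few lines: an expert teaches a persona a human-language rule tied to a sensor reading, and the persona turns raw readings into "thoughts." All names here (`Persona`, `SenseRule`, `office_temp_f`) are hypothetical, chosen to mirror the 80-degree example from the conversation.

```python
# Illustrative sketch only; not Litbit's actual API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SenseRule:
    """One expert-taught rule: a condition on a sensor reading and a 'thought'."""
    sensor: str                        # e.g. "office_temp_f"
    condition: Callable[[float], bool]
    thought: str                       # human-language label for the state

@dataclass
class Persona:
    name: str
    rules: list = field(default_factory=list)

    def teach(self, rule: SenseRule) -> None:
        """Supervised mentoring: an expert adds a rule in human terms."""
        self.rules.append(rule)

    def perceive(self, readings: dict) -> list:
        """Turn raw sensor readings into the thoughts the expert taught."""
        thoughts = []
        for rule in self.rules:
            value = readings.get(rule.sensor)
            if value is not None and rule.condition(value):
                thoughts.append(rule.thought)
        return thoughts

dac = Persona("DAC")
dac.teach(SenseRule("office_temp_f", lambda t: t > 80, "it's hot in the office"))
print(dac.perceive({"office_temp_f": 84}))  # ["it's hot in the office"]
```

In a real deployment the conditions would be learned models over sound, vision, and vibration rather than simple thresholds, but the expert-facing shape stays the same: teach in human language, perceive through sensors.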
How do you deal with nuance? Because I'm sure there are a lot of times where people, as you said, are sensing or smelling something, but they don't consciously know that it's an input into their decision process, even though it really is; they just haven't thought of it as a discrete input. How do you separate out all these discrete inputs so you get a great model that represents your best-of-breed technicians?

Well, first of all, the more training, the better. A good way to think of the AI is that, unlike a lot of technologies that typically age and go out of life over time, an AI continuously gets smarter: the more it's mentored by people, which would be supervised learning, and the more it can adjust and learn on its own from day-to-day activity, which is the unsupervised side. Combining those approaches enables it to continuously get better over time, but we've also figured out some ways that it can produce pretty meaningful results with a small amount of training.

Okay. And what are some of the applications in your initial go-to-market?

So we're a small startup, and what we've done is develop a platform whose goal is to be very horizontal in nature, while the applications, the AI personas, can be very vertical, built by subject matter experts across different silos.
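The training loop described, expert mentoring combined with the AI adjusting on its own as day-to-day data arrives, can be sketched with a toy learner. This is a hypothetical illustration, not Litbit's implementation: a nearest-centroid classifier that accepts supervised labels from a mentor and then folds its own confident predictions back into the model.

```python
# Hypothetical sketch of mentoring (supervised) plus self-adjustment.
class CentroidLearner:
    """Nearest-centroid classifier that keeps learning after deployment."""

    def __init__(self):
        self.sums = {}    # label -> running sum of readings
        self.counts = {}  # label -> number of examples seen

    def mentor(self, reading: float, label: str) -> None:
        """Supervised step: an expert tells the AI what this reading means."""
        self.sums[label] = self.sums.get(label, 0.0) + reading
        self.counts[label] = self.counts.get(label, 0) + 1

    def predict(self, reading: float) -> str:
        centroids = {lbl: self.sums[lbl] / self.counts[lbl] for lbl in self.sums}
        return min(centroids, key=lambda lbl: abs(centroids[lbl] - reading))

    def observe(self, reading: float) -> str:
        """Self-adjustment step: fold a prediction back into the model."""
        label = self.predict(reading)
        self.mentor(reading, label)  # self-training on its own prediction
        return label

ai = CentroidLearner()
ai.mentor(72.0, "comfortable")   # expert mentoring
ai.mentor(85.0, "hot")
print(ai.observe(83.0))          # "hot", and the centroid shifts with the data
```

The point of the sketch is the shape of the loop: a small amount of supervised training gets meaningful results early, and unattended observations keep refining the model afterward.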
So what we're doing is working with partners right now in different silos, developing AIs that have expertise in the oil and gas business, in the pharmaceutical space, in the data center space, and in the corporate facilities management space, and really making sure that people who aren't technologists in those spaces, whether it's a very specialized scientist running a lab or a facilities guy in a corporate building, can successfully make that kind of experiential connection between themselves and the AI and put it to practical use. And as we go, there are a lot of efforts that can be very specific to particular silos, whatever they may be.

So those personas are actually roles of individuals, if you will, performing certain tasks within those verticals?

Absolutely, and we call them coworkers. One of the things I think is really important in the AI world is that we approach everything from a human perspective, because it's a big disruptive shift and there's a lot of concern over it. So you get people to connect to it in a humanistic way: coworker Viv works alongside coworker Sophia, Viv has this expertise, Sophia has that expertise, and they have ever-improving ways to interface with people whose names and skill sets aren't a lot different from theirs. It's just that the AIs don't mind working longer hours. Let them work the weekends so I can spend hours with my family; let them work the crazy shifts. So things are different in that regard, but we're trying not to disrupt too much the relationship aspect of how the workplace works.

Right, right. And then on the consumption side, with the person, the human co-worker, that's working with the persona: how do they interact with it, how do they get the data out, and maybe even more importantly, how do they get new data back in to continue to train the model?
So the biggest thing you have to focus on, with a human and machine-learning interface that doesn't require a programmer or a data scientist, is that the language the AI is taught in is natural human language. We've developed a lot of natural-language files that are pretty neat, because a human co-worker here in California could be interfacing in English with their AI co-worker while, at the same time, someone in Shanghai could be interfacing with the same co-worker in Mandarin. So you get multilingual functionality. Right now, to answer your question, people are doing it in a text-based scenario. But the future vision, when the industry timing is right, is that every one of the co-workers we're developing will have a very distinct, unique fingerprint of a voice, so when you're engaging with your co-worker by voice, you'll begin to recognize, oh, that's DAC, or that's Viv, or that's Sophia, based on their voice. A lot of people have been very sure that voice is how we'll be communicating, and we believe the same thing will occur. A lot of that's in the timing, but that's the direction things are headed.

Interesting. The whole voice aspect is just a whole other interesting thing, in terms of what personality attributes are associated with a voice, and that's probably going to be a huge piece of the adoption, in terms of having a true co-worker experience, if you will.

Well, one of the things we haven't figured out, and these are important questions with so many unknowns, is this: we feel really confident that an AI persona should have a unique voice, because then I know who I'm engaging with and can connect by ear without them saying their name. But what does an AI persona look like? That's something we actually don't know. We explore different things, and it's "oh, that looks scary," or "oh, that doesn't make sense." Should it look like anything?
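Litbit hasn't published its language files, but the multilingual text interface described, English in California and Mandarin in Shanghai addressing the same co-worker, can be modeled as shared intents with phrase lists per language. Everything here (the `INTENTS` table, the phrases, `match_intent`) is a hypothetical sketch of that idea.

```python
# Hypothetical sketch: many languages, one shared intent per co-worker skill.
INTENTS = {
    "report_temperature": {
        "en": ["how hot is the office", "what is the temperature"],
        "zh": ["办公室有多热", "现在温度是多少"],
    },
}

def match_intent(utterance: str):
    """Map a text utterance in any supported language to a shared intent."""
    normalized = utterance.strip().lower()
    for intent, languages in INTENTS.items():
        for phrases in languages.values():
            if normalized in (p.lower() for p in phrases):
                return intent
    return None

print(match_intent("How hot is the office"))  # report_temperature
print(match_intent("办公室有多热"))            # report_temperature
```

A production system would use learned language understanding rather than exact phrase matching, but the design point survives: the intent layer is language-neutral, so the same persona serves every locale.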
Which has largely been the approach for what an Alexa or a Siri looks like. But as you continue to advance those engagements, particularly when augmented reality comes into play, if you're able to look over and say, oh, a co-worker's working over there, there's some value in that. But what is it going to look like? That's interesting, and we don't know.

Yeah, hopefully like those things at the San Jose airport that are running around.

Yeah, the classic robot.

All right, Scott, well, it's a very interesting story. I look forward to watching you grow and develop over time.

Awesome, it's good to talk.

Absolutely. All right, he's Scott Noteboom from Litbit. I'm Jeff Frick. You're watching theCUBE at When IoT Met AI: The Intelligence of Things, here in San Jose, California. We'll be right back after this short break. Thanks for watching.