Hello and welcome back to fabulous Las Vegas. My name is Savannah Peterson, here with special coverage from CES 2024 for SiliconANGLE Media and theCUBE. Joining me today is a fascinating panel of brilliant guests in the space who all happen to be collaborating, and who all made exciting announcements here on the show floor. Rather than bore you by reading them off, we're gonna learn all about them in the upcoming discussion, which is gonna touch a lot on how speed and inference are gonna change the real-time experience for users — and not just users at the enterprise, but also consumers, because that's what this week is all about. Gentlemen, thank you so much for being here on this fabulous '70s couch. Are you enjoying the show? Are you exhausted? Let's start with you. You enjoying it? It is exhausting too, yes, when you walk 20 miles a day running from one end of the show to the other. Would having a little robot on the floor help with that? Yeah, it does. Okay, tell us a little bit about Embodied and Moxie the robot. Yeah, so at Embodied, we are an AI robotics company developing AI companions to provide care to humans, for human development in general. What Moxie is focused on in particular is child development. So Moxie is this AI companion robot, a believable, lifelike character that can engage with you. Using generative AI, it has large language models to converse with the child, but it also has body language: it reads the body language of the child, makes eye contact, understands your emotions, and then helps you develop social-emotional skills as well as academic skills. I've been doing my research on you. I love that you're creating robots that care. Exactly, thank you. Yeah, absolutely. All right, you are seeing a lot of different applications. I can imagine your show has included a potpourri of things. Tell us, what's it been like for you and aiXplain this week? Oh, it's actually perfect.
I mean, so you can see some of the applications. Yeah, I love it. Love that. Yeah, compared to 30 years ago, when I started in AI, and you see what is happening now. I just want to pause there: 30 years ago. Yeah, when I was stalking you on LinkedIn, I actually had to keep hitting the tab to see how many different AI roles you had had. It's pretty impressive. You started when you were 10. Oh, thank you. Thank you, Paolo. Instead of music in the womb, you were getting AI textbooks. Yeah, so what we were dreaming of 30 years ago is happening now. You can see all these applications of AI in the different areas, from consumer to enterprise and so forth. So it is beautiful to see that. Yeah, and CES is perfect for this, because you see not only one vertical but so many verticals at once. Everything's here and everyone's here. Exactly. That's why I love this show so much. How many CESs have you been to? 10, maybe. Oh, nice. Okay, so not all of those 30 years. No, I came from Germany. So there was CeBIT — yes, not all of them were here — but I compare CES with CeBIT in Germany, and it's actually more fun to see what's happening these days than at my last CeBIT, which I visited in Germany like 10 years ago or so, yeah, for sure. Oh, yeah, that's one of the things we talked about in our preview shows, the relevance of CES. And I think sometimes there are rumors of it not being that big of a deal. I don't know about you guys, but for me, this year feels as big, if not bigger, than ever. It feels like the energy's back up. The meetings are happening, the flipped badges in the hallways, you know, secret things, lots of NDAs — it's been very, very exciting. Jonathan, you're on theCUBE as much as I am these days. How's the show been for you and Groq? It's been great. I mean, the previous year, 2023, I think everyone started to realize AI was gonna become real. And that was interesting, because people didn't know what to do with it.
But in 2024, I think that's the year that it really becomes real — not hypothetical, but real. What does that mean? So you've probably used one of these chatbots and you've interacted. How much do you use it every day? Never. Yeah, it's not engaging. No. It's slow. It's not like, I don't know, search. It's not like your news feed. If it was fast, it'd be engaging. So this year, things are gonna start to become interactive. And that's gonna make AI engaging. It's gonna increase the usage, and that's going to make it real. So to you, engaging is real? Yes. If you are using it many times per day, then it is real. Yeah. And I would say, to add to that, what we have experienced with AI to date is amazing potential, but with an interface which sort of reminds us of the early days of computing, where you had the flywheel or the hourglass turning and turning and turning and turning. You just kind of sent a shiver down my spine, remembering that. Yeah, and that does not make for great user experiences. And it's about removing the latency from these large language models and other generative models, because we are gonna move towards multimodal systems, which require even more compute, which will make it even slower if we continue down the current path. So making it respond immediately makes it a lot more real. For what we do at Embodied, it actually is super important, because it's an AI character. It's in physical form and it's interacting with you. And imagine if you and I were having a conversation, and every time you said something, you saw zero reaction from me for even two seconds. It's odd, right? I mean, that does... I just thought about seven jokes I shouldn't make. Yeah. I would not feel as welcomed as I did entering this room today. Actually, they use it in negotiation tactics: when you wanna make a really uncomfortable moment in a negotiation setting, you just say nothing. That void. Did you see, I just tried to hold it out there. Yeah. You did.
That void in interaction makes people uncomfortable, right? And when we're going towards these interactive AI companions that are gonna help people with development, companionship for loneliness and all these things, the interface has to be a lot more fluid than where it is right now. How important is that working with younger folks, with children? Even more important, right? Because we adults may be a bit more patient or understanding of what's happening under the hood. Kids, when they interact with Moxie, actually personify it. To the point where — and this is part of HRI as well, human-robot interaction; working in real-world environments is complex — let's say Moxie's having a conversation with a child. Something happens in the background that causes Moxie to think the child moved over there, and it turns away from the child. And this happened in the early days of our development. Children would get offended, literally, like, "Mom, Moxie's rude, I don't wanna talk to Moxie anymore." Just that subtle error or bug, right, caused that. And delays and hallucinations and so on do that today; kids are not patient enough for it. What an amazing user test, litmus test. I mean, kids always tell you the truth. It's one of the things I love about young people, working with teens. And I can't imagine having a product that tailors to them specifically, but like, wow, what voice of customer there. That's outstanding research and so important. Speed is important in every single thing that you do. And when I was stalking you before this thrilling panel, I sensed from some of your quotes that as you saw all the different exciting things happening across verticals, you were nervous that speed was gonna be the killer, that it was gonna be impossible to do this without the right amount of power and speed. That's so true. So like Jonathan said earlier, I think in 2023 we had the year of demos and prototypes.
Now it's time to prove that the demos and prototypes can actually do the thing. And speed is a critical component. I think speed is enabling even the prototyping and experimentation phase to be more efficient. You can run more experiments and so forth. So a person, or a company, can actually do more experimentation — more user experience tests, for example — to figure out what is best. And today, every iteration is still taking quite some time. It's gotten much easier than, again, 30 years ago or 10 years ago or even two years ago, but it's by far not where it needs to be. It needs to be 10 times, 20 times faster, so that we can actually innovate much more. And speed is also opening up a new way of thinking about AI. Some of our customers are not using one agent, one LLM at a time; they're using multiple LLMs, negotiating with each other, to solve a certain problem. It's the same as if you want to solve a problem and you start a team to solve it: five different people, five different capabilities, sitting together, negotiating and discussing things with each other. So you need five specialists to sit down together, monitor each other, and help each other do something better. If we don't have the speed, we can't afford to have multiple of these running at the same time. Now, with the speed there, the sky is the limit. You can do so much more than what you did before. And that's what we do. We have 43,000 models right now on the platform. We want to see them all on Groq, so that everyone can iterate so much faster with the experimentation each of these small and large companies runs on our platform. Wow, that's impressive. How does it make you feel hearing him say that, Jonathan? Oh, I'm excited. Shocked. Well, not overly shocked, but we've been talking for a while and we're developing a great relationship.
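The "team of specialists" pattern described here can be sketched in a few lines. This is a minimal illustration, not any vendor's actual system: the agents below are simple stand-in functions, where a real deployment would make one model call per proposal and per critique round — which is exactly why fast inference makes the pattern affordable.

```python
# Sketch of multiple "specialist" agents negotiating toward a consensus answer.
# Each agent is a hypothetical stand-in for an LLM call; `negotiate` runs a
# crude critique-and-revise loop where agents converge on the majority view.
from collections import Counter

def make_agent(name, answer):
    """Stand-in for an LLM specialist; a real agent would call a model API."""
    def propose(problem):
        return {"agent": name, "answer": answer(problem)}
    return propose

def negotiate(problem, agents, rounds=2):
    proposals = [agent(problem) for agent in agents]
    for _ in range(rounds):
        # Each round, every agent "sees" the others' answers and may adopt
        # the majority view -- a toy version of critique-and-revise.
        tally = Counter(p["answer"] for p in proposals)
        majority, _ = tally.most_common(1)[0]
        proposals = [{"agent": p["agent"], "answer": majority} for p in proposals]
    return proposals[0]["answer"]

agents = [
    make_agent("math", lambda p: "42"),
    make_agent("search", lambda p: "42"),
    make_agent("skeptic", lambda p: "unsure"),
]
print(negotiate("life, the universe, everything", agents))  # majority wins: "42"
```

Note that every negotiation round multiplies the number of inference calls, so latency per call directly bounds how many specialists and rounds you can afford in an interactive setting.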
And I actually think one thing that's really important to note here is that the three of us sitting next to each other, with you interviewing us, are each representing a different portion of the stack. We make models go fast; we provide compute capacity. aiXplain here provides the quality of the models, the selection of the models. And then with Embodied, it's more about bringing that magic to the end user and that experience. And so you're starting to see the maturity of AI now, right? A year ago, I don't think any of us would have known how to find each other. I mean, it was so new, it was the Wild West. I don't even think we knew what our interfaces were to each other. We might have even met and thought we were competitors — and it's like, we don't do anything that's the same at all. So wait, that's fantastic and news to me. How long have you all been collaborating? Only a couple of months. Really? Like, a couple of months, two, three months. Yeah. Oh. With Embodied, we have had a longer collaboration; we've been working together for a couple of years or so. But now that we have Groq underneath, this integration between the three platforms is very young. Yes. Yeah. Well, thanks for sharing a bit about it with us today. That's awesome. You won a CES Innovation Award. I'm actually one of the judges for the CES Innovation Awards. Oh, thank you. No, I'm assuming you won for robotics or in AI. It was not my category. But I'm curious, what does that award mean for you and the team? I think it's great to get recognition, especially in a noisy forum like this, where there are thousands and thousands of announcements happening. I think it's great for my team also, because they work hard every single day inventing new technologies, bringing them to the world. And it's always good to get feedback. As a matter of fact, as I said, the award is a great trophy.
But the thing that I really see in my team, when they're standing in front of the audience and seeing how people are reacting to the robotics, is that it's so invigorating and energizing for them, really. Like, some of the youngest team members, who literally are sitting at the computer all day long coding, now they're seeing how people are reacting to the result of their work. Really fun and energizing. I'm sure when we go back home, everyone is going to be super pumped about the reactions they've seen here. And you've made it real, right? Exactly. We were talking to your head of PR. She was saying people would ask, "When can I have this?" And you're like, no, we have it now. It's in the market already, yes. And so I think that's the theme for the year. The year 2024, it's going to be: what do you have? Yeah. And I think that's really cool, because there are so many hypothetical applications. CES is full of MVPs and show cars — like you said, the prototypes — and something that's actually shipping at scale, leveraging everything that you just described, is really compelling. And it's super exciting. This is my 12th CES. I started my career in Silicon Valley and New Zealand throwing parties at CES. This is my show. I love coming here. I love seeing everything. It keeps me relevant for the rest of the year. And it's really cool that we're at such a fun place, the physical manifestation of the cool things happening in software and in the hardware that makes it all happen. So I'm excited. So, Jonathan, I know that you had a really exciting announcement this week as well. You've opened up your API for developers and companies. When we're talking about making AI real, what do you think that's gonna do? What is that gonna enable these companies and folks to do, like aiXplain and like Embodied? Well, as we've announced, we're working with aiXplain and Embodied. We're gonna be helping to power the Moxie robot's interaction with children.
And over the holiday break, we actually took Llama 2 70 billion, the famous open-source model, and we put it on our hardware and made it available to the world. We hit number one on Hacker News. And it just kept humming. So as a result, we got a very large number of people reaching out to api@groq.com asking if they could get access. And we're letting people on a little bit at a time. But our goal is to make sure that this year, AI becomes interactive; it's real, it's usable, no more spinning beach ball. I think we're all here for that. I suspect there's... well, maybe you can tell us, what types of customers are the first folks knocking on the door? So typically it's someone who already has a product. For example, Moxie is real. aiXplain is real. And you have this product, you've got users, and what you're trying to do is increase the engagement, increase the user count. We're not so good at working with people who haven't figured out what they're doing in generative AI right now, which is a large number of people. But if you have something, if it exists — and we've got a couple of other soon-to-come announcements of customers who are going to be using us, some of them quite well known and some of them quite big. So yeah, go for it. So I am excited because, I mean, since we now have Groq on the platform, we have 5,000 users — developers — who are building existing applications. And now they can actually, with one click of a button, switch and compare, benchmark: how does it look with Groq? How does it look on other infrastructure? And then basically make a decision to switch, or whatever they want to do. This is actually an enabler for many of our users on the platform. Are there any applications that you've seen on the platform that you're particularly excited about, or very distinct trends you're seeing across those 5,000?
So I would say customer engagement with users — anything which is chatbot experiences, speech input, things where a user is talking to a machine or to a system — is one thing. And that's gonna benefit quite a bit from this, for the same reason Paolo was saying: the system is reactive, the response is fast. Other use cases: some companies have a lot of data and basically need to go through it in a very quick manner. They can do it now because of the speed; they don't need to run it and wait for days or weeks until it's done. It's getting faster because we now actually have that infrastructure underneath to go through the data faster. So this is... And we were talking about what it is that aiXplain does. And I think you had a really crisp way of explaining it at dinner the other night. Most people probably don't know, so do you wanna give that explanation? Sure, sure. I mean, at the end of the day, we have new technologies coming in every day. It gets confusing, even for us. I mean, I'm... For us. We're in this. We don't know what models to use. So that is basically the beauty: if you have it all in one platform, you can benchmark it. You can discover it first — no one knows what is out there to begin with. So you can discover it. You can benchmark it. You can fine-tune it. You can then deploy it, all in the platform, side by side with all the technologies, all the suppliers there are. That's a great thing. And then it's easy to switch forward and backward between different systems and pick what is best for you. And the more we have, the better it is. So are you saying... Jonathan, you mentioned things work best when people already have a product, let's put it that way. Are you seeing people at a little earlier stage in their journey? We have them too, yes. We have them too.
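The "discover, benchmark, switch" workflow described here can be sketched simply: run the same prompt against several interchangeable backends, time each, and rank the ones that clear a quality bar. The backend names and `run` callables below are hypothetical stand-ins for real inference endpoints, not aiXplain's actual API.

```python
# Toy benchmark harness: same prompt, multiple swappable backends, ranked by
# latency among those that meet a minimum quality score. In a real platform
# each `run` callable would hit a hosted model endpoint.
import time

def benchmark(prompt, backends, min_quality=0.5):
    results = []
    for name, run in backends.items():
        start = time.perf_counter()
        output, quality = run(prompt)          # (answer, quality score)
        latency = time.perf_counter() - start
        results.append({"backend": name, "latency": latency, "quality": quality})
    # Keep only backends meeting the quality bar, then sort fastest-first.
    acceptable = [r for r in results if r["quality"] >= min_quality]
    return sorted(acceptable, key=lambda r: r["latency"])

backends = {
    "backend_a": lambda p: (time.sleep(0.02), ("slow answer", 0.9))[1],  # simulated slow
    "backend_b": lambda p: ("fast answer", 0.8),
    "backend_c": lambda p: ("junk", 0.1),  # fails the quality bar
}
ranked = benchmark("Summarize this listing", backends)
print(ranked[0]["backend"])  # the fastest backend that passed the bar
```

The "one click of a button" switch then amounts to routing production traffic to `ranked[0]` instead of the previous choice.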
So we have, I would say, 30 to 40% who are experimenting and trying to build new experiences with AI. But then we also have the people who have been in this for a long time and are now building what you'd call pipelines, apps — basically AI apps on our platform. And then they see that certain components are super fast and other components are slowing the whole pipeline down, and they need to optimize. And the more choices they have to benchmark against each other and pick the right one for them, the better it is. So if you're confused about AI and what models to use, you go to aiXplain, and they help you find the right model. They reduce the confusion for you. And you have thousands of developers paying you for this already, and you're just taking off like a rocket. How is it that you are able to make aiXplain's engine 10x more powerful? We just make it faster. More — yeah, higher performance, excuse me. So the best way to think about it is: if you're building a million cars, you don't want to build them in your backyard. You want to build a factory. And if you need a million square feet of factory space to build the entire assembly line, but you only have 100,000, then you have to set up one tenth of the assembly line. Then you have to put a bunch of work product — partial vehicles — through, collect them up, and when they're all through, tear it down and set it up for the next tenth of the pipeline, and so on. And that's the way a GPU works. They have external memory, and you have to read from that external memory, and you have to batch the jobs through to speed them up. And what we've done at Groq with the LPU, we've actually made it possible to just build one big factory. So when we're running something on chat.groq.com with a queue, what's happening is that's running on 640 of our chips. And that's where we get the speed. That's where we get the cost benefits. It's the largest inference deployment that's ever been done, by about 20x, as far as we can tell.
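The factory analogy above can be put into a toy latency model: if the whole "assembly line" fits on-chip, one item flows straight through; if only a tenth fits, you run each tenth as a stage, paying a weight-reload (external-memory) cost between stages. The numbers below are purely illustrative, not measurements of any real GPU or LPU.

```python
# Toy model of segmented vs. streaming pipeline latency. `reload_time` stands
# in for the cost of swapping the next pipeline segment in from external
# memory; a fully resident pipeline pays no such cost.

def segmented_latency(stages, stage_time, segments, reload_time):
    # Run the pipeline one segment at a time: reload, then process the
    # segment's share of stages, repeat for every segment.
    per_segment = stages // segments
    return segments * (reload_time + per_segment * stage_time)

def streaming_latency(stages, stage_time):
    # Whole pipeline resident at once: pure compute, no reloads.
    return stages * stage_time

full = streaming_latency(stages=100, stage_time=1.0)
split = segmented_latency(stages=100, stage_time=1.0, segments=10, reload_time=50.0)
print(full, split)  # 100.0 vs 600.0 -- reloads dominate when memory is off-chip
```

This is also why batching helps GPUs: the reload cost is amortized over many jobs at once, improving throughput, but the end-to-end latency of any single job stays high.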
And it's growing. In fact, we now have about 2,500 accelerators in production, and we're growing capacity 15% per week, every week, compounding. So by the end of the year — casual — 1,000x. It's gonna be an interesting year. It's gonna be a really interesting year. What's next for you this year? Well, one of the things we are working on is adding academic development to Moxie. Because we wanna address the child holistically, we have been focused on social and emotional learning to date. Now we're expanding to academic development. We are adding memory to the conversations. By the way, something that no chatbot has right now: every time you start a conversation, it's like a new conversation. But with an AI companion, you're building a relationship that evolves over time, right? What the child likes and doesn't like, what they're good at, what they're not good at, what we talked about, what was their favorite toy, where they went on vacation. That has to evolve, because Moxie is evolving with the child, and that relationship is evolving. So we are adding memory, which is gonna require more compute power and all these things, which is why I'm excited about everything that Groq is doing — because it's not just the LLMs. We're gonna have a thousand more models that are gonna require more compute power, right? And then we are also releasing a digital experience, which is gonna allow families that may not be able to afford a $700 robot — although what that $700 robot provides typically ends up costing tens of thousands of dollars otherwise. We have made it affordable enough, but still, of course, we want every child to have access to this kind of technology. So we are releasing a digital app as well. So you will be FaceTiming with your Moxie. So there's some exciting stuff coming up this year. How'd you land on the name Moxie? Oh well, long process, but after many choices we went with Moxie, because Moxie stands for grit and perseverance.
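The persistent conversational memory described here — facts learned in one session carried into the next, so the relationship doesn't reset — can be sketched as a small store that gets injected into each prompt. This structure is illustrative only; Embodied's actual memory system is not public.

```python
# Minimal sketch of cross-session conversational memory: remembered facts
# about the child are prepended as context to every new prompt, so the
# companion "picks up where it left off" instead of starting cold.

class CompanionMemory:
    def __init__(self):
        self.facts = {}  # e.g. {"favorite_toy": "dinosaur"}

    def remember(self, key, value):
        self.facts[key] = value

    def build_prompt(self, user_message):
        # Prepend remembered facts as context for the language model.
        context = "; ".join(f"{k}: {v}" for k, v in sorted(self.facts.items()))
        return f"[Known about child: {context}]\nChild says: {user_message}"

memory = CompanionMemory()
memory.remember("favorite_toy", "dinosaur")
memory.remember("vacation", "beach trip")
print(memory.build_prompt("Can we play?"))
```

Note the compute implication raised in the conversation: as memory grows, every turn carries more context into the model, which is part of why faster and cheaper inference matters for companions like this.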
And that's a very important trait for any human being to have throughout their life. Yes, snaps to that. Totally, totally agree with that. We've talked about names on the show a bit, so I wanna know — I was curious about yours, and, Jonathan, the Grok battle that you've been facing. When I was doing my research on Moxie, one thing struck a chord for me personally: I am mega dyslexic. And Moxie seems to be going down a path of different types of learning models for different types of learners. Is that a part of your approach, or was I just reading into that with my bias? You're absolutely right. It's not that easy to do, but one of the things we are releasing later this year — we call it the recommendation engine — is a different model which, based on the memory and everything you're learning about the child, is constantly adapting the activities that are served to that child based on the needs of that child. So it's sort of personalizing the experience to the needs of the child, because every child is different, every human is different. Even to the point where we have a companion parent app, and in the parent app, the parent can actually provide some context that Moxie may not have. For instance, "we are gonna be moving to a different state or city" — and those transitions are really difficult for children to process. And Moxie will have that as context. Now Moxie can bring up a conversation about that and then start using some strategies to figure out how to cope with that kind of situation, so they don't get anxiety about it and all these things. So definitely personalization — I call it hyper-personalization: every child's experience is gonna be different. It's really cool. I was on a panel on the CES main stage on Tuesday talking about the future of AI in healthcare. And one of the things I brought up was collaborative care and holistic wellness across things.
And I think it's so cool that you're leveraging these solutions to make something that can integrate with educators, with parents, and with therapists. And I thought that was a really important differentiator that pushes it past novelty. And I know that's something that you're also very passionate about: getting us past that novelty phase and, to your point, making it real. What are your biggest — ooh, this is a fun one I hadn't thought about — what are your biggest fears for this coming year? We talk a lot about excitement. Is there anything that could stand in the way of all the greatness we just talked about, or, yeah, be a bit of a roadblock? So I think instead of talking about fears that we have, it's better to talk about the fears that everyone else has that could become a roadblock. So whenever I'm in — That in itself could be a fear, if we're really getting meta about it. Whenever I'm in a Lyft or an Uber, I try and talk and ask, what have you been hearing about AI? And people generally have this sense of dread and concern. And it's so different from us who are in this, because we're actually seeing a lot of the positives. I think the first time you had me on your show, I mentioned that large language models were going to provide subtlety and nuance and help people understand the world in a better way. Not look at things so black and white, but actually, maybe there's more depth here. And if you think about it, by giving a child access to something like this, you can help them understand the world a little bit better, have a broader perspective, rather than this sort of linear programming from TV, where it's all about drama and just dragging people into things. Think about the bad behavior that your kids are learning, because the goal of TV is to hold your attention. It's not to expand your understanding.
By having this in a very engaging format like the Moxie robot, you're actually going to be helping your children understand the world better. And I think we've got to educate people on what's possible with AI, not just the sort of doomerism that, interestingly enough, is coming from some of these AI companies, which I just don't understand. Yeah, I think there's also a lot of stigma against AI and robotics and all these things. And Hollywood has not really helped us in that area, when you see movies like M3GAN, the robotic companion that ends up basically slaughtering entire family members and all that. But there are very powerful possibilities here — actually, necessary solutions that we need here. As an example, we were talking about this earlier: if you have a child that may be on the spectrum, you probably will find out from the school; the teacher will come to you and say, we feel you may want to look at getting your child diagnosed. It's devastating news for any parent to hear. You will have to work really fast to educate yourself: what do I do? And if you are a family that has access to resources, it will take you at least six months to get your child diagnosed. Yeah. At least six months, right? And after the diagnosis, it's going to take you another six months to find the right care providers, if you can afford it. So there's a huge gap of providers to cater to the need there. The same thing applies in providing care for our elderly if they are living lonely and so on — because they have lost their spouse, family members are living too far away, they have maybe developed some physical health condition that doesn't allow them to drive anymore. Even in their own home, they're entirely socially isolated. My mom went through that herself. How are we going to provide care for these people? You want to? Oh yeah, go for it, Dylan. You were telling me an amazing stat about divorces in families; can you share this one?
Yeah, I mean, to summarize the challenge for families who have kids on the spectrum: one of the stats that stands out is the rate of divorce and financial strain. Wow. Yeah. So, and we just don't have enough — Wow, that's so sad. We just don't have enough providers to cater to these families, right? The same thing applies to other neurodevelopmental challenges. And it applies also to neurodegenerative challenges, which come in the aging phase of your life. And also everyone in between: if you look at the data on mental health and all these things, especially after COVID, there are a lot of challenges with mental health. And we don't have the human resources; we're not trained, we don't know how to deal with this stuff. And in education, just to take another example, the challenge is not so much about children not being able to learn; it has to do with motivating them to want to learn. And if you have a child that falls behind in a classroom — for whatever reason it may be, and there could be a billion different reasons, one of which could be the situation in their household; 50% of families in the U.S. go through divorce, and that's traumatic for children — that may cause them to get off their path and fall behind. And when they fall behind, it's a vicious cycle they get into, because now, in a classroom, they feel embarrassed; they don't want to raise their hand to ask a question, because they don't want to come across as not being the smartest kid in the class. With something like Moxie, it creates a safe and non-judgmental space for the child to ask as many questions as they want, go as deep as they want to explore any topic they want, to find their curiosity and motivation and so on, in a non-judgmental way, with a character that's gonna have all the patience in the world. And, by the way, the smartest teacher you can ever find, because it has all the knowledge; it's a subject matter expert in every subject you can imagine.
But the way I understand you, we are also augmenting the educator with the AI. At the end of the day, there is no way we can have an educator with the child 24/7 — there's no way. So Moxie is filling that gap and enabling more. And that is true actually for many of the places where we use AI: we should not consider AI as necessarily a replacement for the human, although there are some applications where AI is gonna replace a human. I'll tell you a story from my past. When I started the AI team at eBay, the reason was that we had 80 million used listings every day, and someone needed to translate them in real time. We had only 200 milliseconds. Translation is a huge thing. Exactly. And then what kind of army would you need to hire? A human who can translate the listing of an item in 200 milliseconds? There's no way to do it. When I started my path in AI, it was in translation; that was my focus in the '90s. Friends of mine in the translation industry were saying, you're gonna steal our jobs, you're gonna reduce our jobs, et cetera. Ten years forward, we actually, through machine translation, brought the world closer together. You can use translation capabilities to get the first gist of what you want to deal with, and then basically have the human do the final steps, so to say. So, I mean, we need to see how we can augment our powers with AI, and that's basically the power. That's a very good point, I 100% agree. All right, final question, because we could talk all day. What superpower do you hope AI gives you, or humanity, since that's what we were just kind of talking about? Jonathan, you're the only one making eye contact with me, so I'm gonna go to you. Okay, my hope is that AI gives us the gift of more personalization in our interactions with other human beings.
So, for example, I was talking to an education professional this week here at CES, and they said, of course, teachers are worried that they're gonna lose their jobs. They're gonna lose the job that they have; they're gonna get a different job. Instead of having a lecture or a teaching plan for 30 students, their job is going to be to tailor that for each student, and they're gonna have the ability to do that because generative AI is gonna help them. So you're gonna be able to do more, you're gonna be able to personalize more. Warms my heart to think about that. Yeah, that's awesome. I am passionate about using AI for accelerating research, whether it is in the medical space, in healthcare, stuff like that. So this is gonna be really good. I'm very, very passionate about this one. I mean, maybe we can live longer, maybe we can get everyone to a better level of education, which is so important. I mean, these are things which I'm looking forward to in the next 10 years or so. I love it, beautifully optimistic, all right. I think Jonathan stole my line, so I... Sorry. I think, I mean, obviously I'm passionate about helping children become their ultimate best, because I think that really can change the world. I know this is a typical headline in Silicon Valley — "we are changing the world" by way of doing search engines and that — but this is really changing the world, one child at a time, because they can be much more balanced and positive citizens of this, our shared world and future. Yeah, these little... I'm a millennial; they call us digital natives. These AI natives will really have a chance to save the world for us. Gen Gen. Yeah. The generative AI generation. Gen Gen. On that note — Jonathan is on it. And fellas, thank you so much for joining me here on this fabulous special edition coverage from Las Vegas, Nevada, here at CES 2024. My name's Savannah Peterson, and thank you for tuning in to theCUBE, the leading source for emerging tech news.