Hi, please introduce yourself. Hi, my name is Shay Kamin Braun, I'm Director of Product Marketing for Low-Power Edge AI at Synaptics. And what are you showing here? We're showing various applications, primarily using our DBM10L low-power audio AI chip. One of the key applications we're showing is sound event detection. I can play various sounds using this speaker over here, and you'll see various devices detecting them, all happening at extremely low power. We're talking about between 1 and 3 or 4 milliwatts while detecting the sounds. And this is thanks to the low-power capabilities of our chip, which, as I said, is a low-power audio AI chip that includes both a DSP and a neural network processor.

So I see the chip is right there. Is that the one in the middle? Yeah, this one over here is the chip, the DBM10L. This is one of the packages, a QFN package. There's also a CSP package, which is much smaller.

So, how does it work? Let me walk through it. The first thing I want to show is glass break detection. I'm going to play a glass break sound on this phone. It goes to this speaker over Bluetooth, the speaker plays out the sound, and it's picked up by this white device over here, which includes our low-power AI chip with a single microphone. This LED will turn on. The device also has ULE, a long-range, low-power wireless technology, and it will send a message to this device over here, a commercial FRITZ!Box, which will then send a ULE message over to this device, which will turn on. So let's see all of this happening in real time. I'm playing the sound here. It's picked up by this device: detected glass break. And the light turns on, all happening wirelessly.

The next thing I want to play is baby cry. This device will play the baby cry sound, and this device, which is connected to one of our evaluation boards, will pick up the sound using one microphone, running a different neural network on the same chip, and play back a confirmation that the baby cry was detected. So I'm playing it now. Detected baby cry. Hopefully the volume was high enough that you could hear the device saying "detected baby cry." It's like a trigger. Yeah, it's a trigger word. Well, it's not a trigger word, it's a trigger sound. And it could be any baby? It's not the same baby every time. Yes, of course, any baby, just like any glass break, any gunshot; we even detect toilet flushes for some customers of ours.

Next we have a really nice industrial application, recently made by our partner Imagimob. I'm going to play a welding sound; this is what a good weld sounds like. It's playing out of this speaker, picked up by a microphone over here, and you can see on the screen that we're detecting this is a good weld. Now I can also play a bad welding sound, picked up by the same device. And now you're going to see that it's supposed to say bad weld; maybe we can edit this later. Let me try again. So you see, now it says bad weld. It says bad weld because this weld is not good. Okay, there's a difference when you play the sound from a speaker versus it actually happening, different echoes and everything. Yeah, real time; all these demos of course have some limitations to them.

How do you program this? How do you program something you want to detect? The way these things are constructed is using multiple samples.
We're talking about hundreds, sometimes multiple thousands of samples that are provided. Either we collect them or the customer collects them. Then we have ways of training an AI model. This is done offline, and then that model is loaded onto the chip, which detects these specific sounds in real time.

Now, a similar technology can be used for detecting keywords. In this case, this is a commercial remote control from Hisense that's used with this premium Hisense TV. It has the same chip, the DBM10L, with a couple of microphones, a keyword detection algorithm from Amazon, and the microphones run our own noise suppression algorithm and beamforming. So I can say: Alexa, go home. This is your standard Alexa experience on a remote control, and what's special here is that because of the low-power capabilities of our chip, you can design devices that last, in this case, maybe something like a year on a couple of AA batteries. Really? And it's listening all the time. Yes, it's listening all the time. In the case of these devices, this device for instance could last five years on a couple of CR123 batteries, or something like three and a half years on three AA batteries. It's a very, very long life, which allows our customers to deploy these devices to places where installation is not easy. You don't have to run any power lines; it's basically peel and stick. Put it on a wall somewhere, and it lasts five years.

The last thing I want to show is biometric voice authentication. This is the same chip, the DBM10L, running software from a partner of ours called MyVoice AI. Earlier I enrolled my voice, and it's a very simple enrollment, just saying a certain command five or six times. And then when I'm talking, if you turn the camera here, you can see it says "authenticated." That's the device authenticating my voice. If somebody else talks, it's not going to authenticate them. You can enroll multiple people, and you can use this for various security applications, parental controls, and things like that. And this, just like all the other demos, is done in sub-five milliwatts while it's detecting, so you can last a very long time on batteries.

How does that work? How can you make it such low power? Well, first of all, we've been designing chips for many years, so we know how to design them for very low power, and we have tricks in our software for that as well. But the main thing here is that we have a neural network accelerator, basically an NPU, integrated into the chip. When you design a neural network model to run on that core, it can run at extremely low power. That's how we do it. So the AI, the neural network, works at, what do you call it, one to five milliwatts? Between one and five milliwatts, yeah. That's extremely low power, correct? Yes, yes. That's in contrast to, for instance, chips from NVIDIA and the like, which go for very high performance. For us, the trick here and the focus is low power.
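To make that offline-train-then-deploy flow concrete, here is a minimal, generic sketch: a small convolutional classifier trained on log-mel spectrograms of the collected recordings, then shrunk by the kind of int8 quantization a milliwatt-class NPU typically requires. This is illustrative only, not Synaptics' toolchain or the DBM10L's model format; the shapes, labels, and stand-in data are all hypothetical.

```python
import numpy as np
import tensorflow as tf

# Stand-in data: in a real pipeline, X would hold log-mel spectrograms computed
# from the hundreds or thousands of labeled recordings mentioned above, and y
# their labels (here 0 = background, 1 = glass break).
X = np.random.rand(512, 64, 128, 1).astype("float32")
y = np.random.randint(0, 2, 512)

# A deliberately tiny CNN: parameter count matters more than accuracy tricks
# when the target is an always-on, milliwatt-class NPU.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 128, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32)          # the offline training step

# Post-training quantization to shrink the model for on-chip deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("sound_event_model.tflite", "wb") as f:
    f.write(converter.convert())
```

In a deployment like the one described above, the audio front end would plausibly run on the chip's DSP with the classifier on the NPU, but that split is an assumption here, not a documented detail.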
One thing I've always wanted to do with my "hey Google" and so on is to give it my own name, to customize the trigger words. Is that something that's potentially possible with these kinds of devices? Yeah, it's not just potentially possible, it's totally possible. So I could say "hey John"? If the people who sell the device, with your chip in it, let me do that, in theory they could let me train any trigger word I want? Pretty much, yeah. I mean, Google and Amazon will never do that, but other companies will. But why wouldn't they? Well, that's a branding exercise. They want people to say "Google" a thousand times at home every week. Exactly, so they remember the Google brand. But it's also for quality. When you train a wake word extensively, like Google and Amazon have, and like we have in this case, you can get very high quality. Models where you train your own wake word are never as high quality; their performance is never as good as something that's trained offline through a very, very long process. But the more you use it, the better it gets? No, that's not how it works; that would need to be developed. It might be good enough for... Yeah, it's good enough for some applications. It always depends on what distance you want to cover, what noise you have in the environment, how important the performance is; those kinds of things need to be considered.

And what do these "hey Google" kinds of devices have compared to what you do? Do they have a dedicated chip that does the wake word detection, or do they do it differently? It depends. We have devices that do "hey Google" detection, just like I showed you Alexa detection here. What we do that's special compared to, for instance, your standard Amazon Echo Dot or Google Home is that we do this at very low power. You've actually seen very, very few "OK Google"-enabled and Alexa-enabled devices running on battery power. Yeah, it's amazing to have such low power. Is this the lowest power in the world for this kind of application? It probably is, yes. And it's always ready, always listening; you don't need to push a button to start it. Always listening, yes, no push-to-talk. Well, this device also has a push-to-talk option, but the point is, if you have to hold the device to activate it, you might as well press the buttons. The idea is that it's on the coffee table, I'm not touching it, I'm just talking to it from far away. I don't have to do anything, just sit back on my sofa and talk. And it's good to have it closer to you; the TV is far away. Yeah, proximity always helps with performance. Like I said, we have two microphones here and we run our own algorithm, a very high-performance algorithm for noise suppression, and we actually use beamforming here. So even when it's pretty far from the user, on the coffee table and not in close proximity, and even when the TV is playing at full volume, it'll still pick up your wake word. Actually, this environment here at Embedded World is extremely noisy, which is why I'm talking so loud; it's hard to even hear ourselves talk. And this device can still pick up my keyword. Nice. And maybe we can speak with your colleagues also. Yes, please go ahead. All right, thanks a lot.

Hi. Hi. So please introduce yourself. What are you showing here? My name is François Béchon, I'm an FAE for Synaptics, and today we are going to present the TDI technology from Synaptics. What we are doing is selling TDI, touch and display driver ICs, to display manufacturers.
The purpose of this device is to drive the display, to display the content, and also to deal with the touch aspect. So we are selling one component to the display manufacturer. Here we are showing what is currently sold in the Lucid, the EV car in the US. It's showing the display technology. Here it's an active part, and this is an active part as well. We have freeform displays, so it's not only rectangular; we can deal with freeform on both sides, here and there. And it's curved as well, meaning the silicon that we sell can also bend. So this is part of what we can deal with.

The silicon that you sell: so there's a chip, and there's the display and the touch, but everything is in one chip? Everything is in one chip, but we sell the silicon only to the display manufacturer. They put it on the glass, and it's done. We provide the solution for the display and the touch. What is this silicon you're talking about? This device, the TDI, the touch and display driver IC. And the chip is somewhere? The chip is glued onto the display, right here somewhere, just beneath there, and then there is a glass which is assembled over it. Is that how every touch display is made? Right now, yes, because it's a new technology. The advantage of this technology, which is called in-cell, is that it's cheaper. It also has better optical performance, because you have fewer layers, so you have less reflectance from the light.

So there's been a lot of development over the last decade in capacitive touch. Is this the best, most advanced implementation of capacitive touch? Yes, it's capacitive touch. Another big advantage is that it's naturally immune to humidity and moisture. The performance is very good, and customers are satisfied; the touch happens efficiently and precisely. It's really precise. We have a lot of advantages with Synaptics, especially on edge handling; it's a really good solution.

On top of what we are showing in this example, we are also promoting knob on display. It's a rotary wheel. So you have a wheel right on the display? Yes, exactly, it's glued onto the display, and it's using the touch electrodes that are on the display. There is no additional component needed. And the beauty of this solution is that it does not rely on the coupling of your hand to the display, because it's using the touch electrodes that already exist on the display. So basically, I could use chopsticks to do the rotation and it would detect it? Chopsticks? Yeah. Or maybe something like my... But it's nice to have something physical, but still touch screen; everything is integrated from one to the other. Yeah. And the beauty of this solution is that, because of the in-cell technology used, I can use, as I'm doing here, a tissue. Which is not conductive. Yeah, it's not conductive, and I can still detect the rotation. So you see, I'm rotating, and it's detected. That's the beauty of the technology.

Can you say if this chipset is ARM-based, or what architecture is inside? It's our own processor, our own design. Do you sell millions and millions of these? Yes, around the world. One of the market leaders? The competition is tough, but yeah, we are among the best sellers in the world at the moment. Nice.
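As a rough illustration of the knob-on-display idea, here is a minimal sketch of recovering a rotation angle from the touch controller's capacitance image. It assumes the knob presents a trackable capacitance peak to the sensing grid; the grid layout, threshold, and API are hypothetical, not Synaptics' detection pipeline.

```python
import numpy as np

def knob_angle(cap_grid: np.ndarray, center: tuple) -> float:
    """Estimate the knob pointer angle (radians) from a capacitance image.

    cap_grid: per-node capacitance deltas reported by the touch controller.
    center:   (row, col) of the knob's fixed center on the electrode grid.
    """
    rows, cols = np.indices(cap_grid.shape)
    w = np.clip(cap_grid, 0, None)              # keep positive deltas only
    w = np.where(w > 0.5 * w.max(), w, 0.0)     # crude threshold around the peak
    if w.sum() == 0:
        raise ValueError("no signal above threshold")
    dr = (rows * w).sum() / w.sum() - center[0]  # signal centroid, knob-relative
    dc = (cols * w).sum() / w.sum() - center[1]
    return float(np.arctan2(dr, dc))

def rotation_delta(prev: float, cur: float) -> float:
    """Smallest signed angle between two readings (handles wrap-around)."""
    return (cur - prev + np.pi) % (2 * np.pi) - np.pi
```

Tracking rotation_delta over successive frames turns the knob into a relative encoder; and if it is the knob's own rotor that the electrodes sense, as the demo suggests, a non-conductive tissue or chopstick on top changes nothing.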
And what we are also presenting here is local dimming. This is the new device that we are promoting at the moment, the SP7800. Its purpose is local dimming: on top of driving the touch and the display, it can also drive the backlight behind the display. Traditionally, with this kind of display, all the backlight LEDs are always lit; that is what we call global dimming. When you move to local dimming, you have plenty of LEDs behind the display, but you only light up the ones that are needed. For instance, in this picture here, you see the black content. Because it's black, you basically don't need to light those LEDs; you only need to light the LEDs over there. And this is what you can achieve here. By doing this, you get much better contrast.

Is this what they call a mini-LED backlight? Yeah, kind of, because it's based on driving LEDs, but they are not the LEDs responsible for the pixels; they are backlight LEDs. So you run the backlight too? We drive the backlight. So you drive the backlight, the touch, the display... what's left? Audio.

And what do we see here? Here we see the demonstration of the local dimming. On this particular display, local dimming is not on, so you see here on the side there is black, but it's not real black. If I enable the local dimming now, you will see that the black becomes a very deep black. You see? Now it's enabled. I will disable it again: it's black, but it's not black. Here, it's black black. And this is how we achieve a very high contrast ratio. It looks like OLED. Yeah, but it's LCD. But it's LCD, and the real advantage with respect to OLED is that it's TFT, a well-known technology. It's also less expensive, and in terms of aging it's much more reliable compared to OLED. The factory base is very big for LCD. Yes. Maybe easier to make millions and millions. Exactly, it's a known technology, although you deal with many more LEDs to drive. You can also go much brighter. Potentially. Right, when you're in a car, you want to be able to see the screen. That's why local dimming in the car has a lot of advantages. It's basically HDR. It's 8-bit content. We have a lot of image processing inside our device to improve the final picture quality and to drive the backlight accordingly.
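A minimal sketch of the local-dimming idea just demonstrated: drive each backlight zone only as bright as its brightest pixel requires, then rescale the LCD pixel values so the transmitted light still matches the target image. The zone grid is an illustrative choice, and a real controller like the one described would add far more processing (halo suppression between zones, temporal filtering).

```python
import numpy as np

def local_dimming(frame: np.ndarray, zones=(8, 16)):
    """Split a grayscale frame (H x W, values 0..1) into backlight zones.

    Returns (backlight, compensated): per-zone backlight levels, and the
    pixel values rescaled so backlight * pixel still matches the target.
    """
    h, w = frame.shape
    zh, zw = zones
    backlight = np.zeros(zones)
    compensated = np.zeros_like(frame)
    for i in range(zh):
        for j in range(zw):
            ys = slice(i * h // zh, (i + 1) * h // zh)
            xs = slice(j * w // zw, (j + 1) * w // zw)
            level = frame[ys, xs].max()      # zone driven to its brightest pixel
            backlight[i, j] = level
            if level > 0:                    # all-black zone: LEDs stay off
                compensated[ys, xs] = np.clip(frame[ys, xs] / level, 0, 1)
    return backlight, compensated
```

The contrast win comes from the all-black zones: their LEDs are fully off, so no stray backlight leaks through the panel there, which is what the enabled/disabled comparison in the demo shows.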
Cool. All right, thanks a lot. You're welcome. Let's speak with some of your colleagues. David? All right, thanks a lot. Thank you.

Hi. Hello. Hi, please introduce yourself. Hello, my name is David Armour. I'm head of wireless connectivity sales in Europe for Synaptics. So what's the latest in wireless connectivity? Well, in addition to a large range of low-power, high-performance Wi-Fi and Bluetooth solutions, we've just recently brought out a new family of products, the 4381 and 4382, one of which won an award in the last couple of days. So here it says: winner, in the wireless connectivity category? That's correct. So what is the product? The product is a single-chip solution that has a Wi-Fi radio, in this case two Wi-Fi radios, plus a Bluetooth 5.2 core, and also support for 802.15.4, Thread, and Matter, all in a single chip. One chip for all these things. Yep. That's special. It is. It's unique in a number of ways. The single chip allows you to connect your products with all those different protocols. In addition, the Wi-Fi radio can operate on the 2.4, 5, and 6 GHz bands, and this particular chip can run on two frequency bands at the same time. This enables lots of really interesting applications, particularly heavy media applications.

As I understand it, Bluetooth and Wi-Fi are in the same spectrum, and does Matter also run in the same one? Yes, it does. And part of the challenge with radios that share a frequency band is that there can be interference, and potentially jamming, between them. So if you're doing this on a single chip, you need some really advanced coexistence mechanisms on the radios so they work together well, and we've been doing this for a very long time now, to get really great performance while using the Wi-Fi and also streaming over Bluetooth for the audio. It also enables us to reduce the number of antennas needed.

So this is on there, right there? Yes. Where is your chip, somewhere inside it? The chip is on this small reference design here, on the backside of it. So it's right there. And this is a demo that's running? It is. The demo is showing a feature that we call real simultaneous dual band. One radio is connected to the 2.4 GHz network for infrastructure, so we're streaming content in from the internet, and the other radio is set up as a local access point, and we're streaming that out on the 6 GHz band to a number of small displays. From one to a number of displays. And what is the protocol for that? It's a feature we call RSDB, real simultaneous dual band. It's quite a unique feature, because it enables us to coordinate two radios working at the same time on the chip, coordinating the traffic on the Wi-Fi and the Bluetooth, plus the other protocols, while also enabling a reduced number of antennas. So it makes for a very small solution and reduces the cost. And what do you need to run these devices? Just Wi-Fi support, and that's it. These are just standard little tablets connected to the 5 or 6 GHz network. You connect to the Wi-Fi network, the app supports your protocol, and boom, you receive the video. Any standard Wi-Fi device can connect to our radio: fully standards-compliant, interoperable, and it streams and connects the data. Is this app special? No, it works with the Synaptics software that's running on the chip.

All right. Can you describe a little bit what happens with Matter? Why are so many people talking about Matter here at Embedded World, and why is it great to have everything, including Matter, on the chip? I think the thing with Matter is that it's where the industry comes together; there are 500 or 600 companies already. It enables devices developed by different companies to share data with each other in a way that they couldn't before. A lot of home automation systems used to be siloed: for example, a garage door opener wouldn't talk to your alarm system, which wouldn't talk to your heating system. Now, with Matter, that data can be exchanged. And with our devices here, which support it, you can make a small gateway where you aggregate all of that together within the smart home, or a smart factory, in a very, very efficient way.
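To illustrate that aggregation point, and only the point, here is a toy sketch of why a shared data model matters: once a garage door and an alarm publish and consume the same event schema, a gateway can route between vendors without bespoke integrations. This is a conceptual toy, not the Matter protocol or any real Matter SDK; all names below are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    device: str      # e.g. "garage_door" (vendor A)
    attribute: str   # standardized attribute name, e.g. "open"
    value: object

class Gateway:
    """Toy hub: routes standardized events to any subscribed automation rule."""
    def __init__(self):
        self.rules: list[Callable[[Event], None]] = []

    def subscribe(self, rule: Callable[[Event], None]) -> None:
        self.rules.append(rule)

    def publish(self, event: Event) -> None:
        for rule in self.rules:
            rule(event)

gw = Gateway()
# Vendor B's alarm reacts to vendor A's garage door via the shared data model,
# with no vendor-specific integration code in between.
gw.subscribe(lambda e: print("alarm: arm entry zone")
             if e.device == "garage_door" and e.value else None)
gw.publish(Event("garage_door", "open", True))
```

In real Matter, the "shared schema" role is played by standardized device types and clusters, and the certification program mentioned below is what guarantees vendors implement them consistently.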
People have been talking about some of the other standards, ZigBee and stuff like that, because there was a challenge in range. How is Matter's range? Does it... matter? Oh. Does it matter, okay; you've been waiting all day to say that. That's all right. From a protocol point of view, ZigBee had a number of different profiles, and it was challenging to get them to interoperate in some ways. With Matter, the wider industry has come together to define a way for these devices to talk in a certified and interoperable way, which is a great step forward. It's similar to the Wi-Fi Alliance: anyone whose product is Wi-Fi certified will talk to other devices which are Wi-Fi certified.

All right, so when we look here at these very famous devices, do they have some of your chips in there? Yes. The products on the table here are already shipping, based on earlier versions of the chips we've spoken about: the DJI drone, the Google Nest Doorbell, some tablets, and a wide range of other devices, particularly in IoT, where the very low power consumption of the Synaptics radios is so important.

All right, and there are more demos over there. What's happening there? So we mentioned BLE. In Bluetooth 5.2, which is in our latest chips, we're adding support for LE Audio streaming. In the demo we have here, the volume's down because it's noisy here, but we're taking content from a video that has multi-language soundtracks, and we're streaming that audio out to BLE speakers, to two different sets of stereo speakers, to demonstrate multi-language streaming, which is a feature of the multi-channel streaming in LE Audio. The great thing as well is that LE is extremely low power compared to Bluetooth Classic, enabling battery life to be way, way longer than before. Yeah, people have been talking about Bluetooth LE, a fantastic generation that just saves a lot of power, and you have it implemented in the best possible way. Our radios actually support Bluetooth dual mode: they have Classic Bluetooth with all the profiles that have been used for many, many years, and we also have the BLE radio in there as well, so we can support legacy as well as all the new applications on BLE. All right, cool. So this is, I guess, in millions of devices out there? A very large number of devices, yes. All right, cool. Okay, thank you. Okay, thanks a lot.

And let's go around here. Hi. Hi. Please introduce yourself; what are you going to talk about? Hi, I'm Vineet Ganju. I run the audio business unit at Synaptics. What we're showing here today is a new technology we call Resonate. Resonate is a new audio amplifier technology that drives a piezo transducer, which basically allows you to use the display, or any surface, as a speaker. This allows you to eliminate traditional dynamic speakers and replace them with something that's already part of the product, like the display in a phone. You need to put something behind it that vibrates the whole thing? Yeah, it's a piezoelectric material, a transducer, that's attached to the back of the display, and then there's a special amplifier technology that drives that transducer.
And in this case we're showing it attached to a phone panel, but it could be attached to plastic or wood or any other type of surface. Have there been phones already that do that, that instead of having a little speaker play it through the display? Yeah, there have been some phones that have done it, but they don't use a piezo material; they use an LRA motor. An LRA motor is typically used for haptics, to vibrate a display to get that haptic feedback, but an LRA motor doesn't give you very good audio quality. The piezo lets you get the same audio quality, or actually better audio quality, than you get with traditional speakers, but with the benefits of a slimmer form factor. You can reduce the thickness because you eliminate the speaker; it's lower power consumption; you can get haptic feedback through the same material, so you can eliminate the LRA motor; and you don't need holes in the industrial design to let the audio out, so it's naturally waterproof and dustproof as well.

There was a phone a few years ago where they were talking about getting rid of the bezel, so they wouldn't even need the speaker grille, and they would just play it through the phone, but it wasn't as high quality. You're going to match the quality of a real speaker? Right, exactly. It was done in the past; you can eliminate the bezel and eliminate the speaker grille. But as I said, that used LRA motor technology, so it didn't get the same audio quality. With this, we can match or actually exceed it. Match, or exceed, you say? How do you exceed it? Because it's a bigger surface? You can exceed it because, yeah, it's a bigger surface, which gives you a flatter frequency response across more frequencies. And it gives you better perceived loudness, because the sound is coming directly at the user instead of going out the side or the back.

Does it feel like your phone is a 7.1 system? Does it feel like all the different areas of the phone make different sounds, or is it just one sound source? It depends on how many transducers you use. In this case we're using two transducers, so it'll basically sound like a stereo speaker. Yeah, it's just a question of... but for the music, that's why I asked. LG has been shipping some TVs, or Sony and LG, that have it inside the display, and you're bringing it down to any device? Yeah, LG and Sony have it in TVs, as you mentioned. What our technology allows you to do is to do it in smaller form factors, running off batteries, because in this case we're operating at very low noise and very low power consumption. So you can do it in devices like phones, tablets, PCs. Do you always need glass? Do you always need a display, or can you do it on plastic? Yeah, exactly; it's a good point. What material do you need? Any material that can vibrate, basically. It can be glass, it can be plastic, it can be wood. You can even imagine, in a car, attaching it to a door panel or a dashboard. Metals? Metals also, if the metal is flexible enough to vibrate. So it's not completely limited? Yeah, like thin aluminum, for example. Can you also mix and match? Like, you have a little bit of glass, a little bit of metal, a little bit of plastic, and somehow they all resonate at the same time?
Or do you have to pick where you put the resonating surface? Yeah, it's theoretically possible to mix, but it depends on the properties of each particular material and how it resonates.

Do you have other demos? So that was the sound portion of Resonate. What this one shows is that the technology can also be used for haptic feedback. It allows you to vibrate the screen without the audio, so you get that vibration, that touch haptics response. It also allows you to do force sensing, so it can tell how hard a person is pressing on the display. So if you want to do a force-touch type application, whether you're touching quickly or holding for a couple of seconds, you can get a different type of response. Here it also says: winner in the sensors category. Is this in mass production, or is that in the future? It's sampling today; it'll be in mass production later this year.

And when you talk about haptics in phones, they have vibrations and such when you use the keyboard. How does the feeling of the haptics compare? Yeah, so today they use a motor, an LRA motor, to get that haptic feedback. If you use a piezo material instead, you can get a much sharper, crisper haptic feedback, so you get an even better experience than what you get in a phone today. Is it possible to even get the feeling of touching, for example, plastic or wood or glass, like it vibrates in a different way, so it gives you the feeling of touching something different? You mean, if you're touching glass, make it feel like you're touching plastic, for example? Or wood or something; does that make sense? To be honest, I don't know; I'm not sure if that's an application. Yeah, it's a good idea; we've never tried that. But you can shape the haptic feedback response however you want; you're basically just playing out an audio file, vibrating at certain frequencies. People who have problems seeing also need special haptic devices to get a sense of their surroundings; maybe that could be one of the applications too, if you have more granularity in haptic feedback. Yeah, you can have more granularity in both respects: how hard you're touching, or how long you're touching, as well as the feedback, which, like you said, could be a short pulse or a long pulse.

And it's the same device that does the sound and the haptic feedback? Correct, the same device; it's just a different frequency range being played back. Do you get better bass? No, you don't get better bass with piezo. You get a flatter frequency response across the frequency band, but not necessarily louder bass; the bass depends on the size of the material itself. Could you also combine it with regular speakers? Yeah, absolutely. If you want better or louder bass, you could combine it with a subwoofer or a low-frequency traditional dynamic speaker. Mix the two, and get kind of a home theater experience from a little device. Yeah, exactly, a 5.1 or 7.1 type of experience.
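On that subwoofer pairing, this is a minimal sketch of how such hybrid setups are usually wired: a crossover filter sends the low band to the dynamic woofer and the high band to the panel transducer. The 250 Hz corner and the fourth-order filters are illustrative choices, not figures from the demo.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_for_hybrid_speaker(audio: np.ndarray, fs: int, fc: float = 250.0):
    """Split audio at fc: lows to a conventional woofer, highs to the panel.

    audio: mono samples; fs: sample rate in Hz; fc: crossover frequency.
    """
    lo = butter(4, fc, btype="low", fs=fs, output="sos")
    hi = butter(4, fc, btype="high", fs=fs, output="sos")
    return sosfilt(lo, audio), sosfilt(hi, audio)

# Example: split a one-second test sweep at 250 Hz.
fs = 48_000
t = np.linspace(0, 1, fs, endpoint=False)
sweep = np.sin(2 * np.pi * (40 + 4000 * t) * t)   # 40 Hz to roughly 4 kHz chirp
woofer_band, panel_band = split_for_hybrid_speaker(sweep, fs)
```

This matches the remark above that the panel's weakness is bass: the vibrating surface handles the mids and highs it reproduces flatly, while the woofer covers the low band that depends on physical size.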
Cool, all right, thanks a lot. Thank you. Thank you, cool. Let's go around right here. All right, hey, how are you? Good. Please introduce yourself. Hi, I'm Siddharth Chandrashaker, Senior Director of Marketing for our multimedia SoCs. We're showcasing a bunch of solutions around the SoCs here. Actually, let's start over here.

So, the SoCs: is it the big one there? Yes, that's the evaluation kit. It's there? Yes. So you make the big chip? No, no, that's the heat sink. It's actually something like this. There, that's the chip. So can you explain what we're seeing here? This is actually a system on module: the chip plus the memory and a few other things on a small module. This is the kind of form factor you fit it into. This one is similar to a Raspberry Pi; we call it the Banana Pi. It's based on the 680, and it's fully functional. Is it ARM? Yes, this is a quad Cortex-A73, with a GPU and a powerful NPU, nearly seven TOPS. That's powerful. That's very powerful, and that's why you can do all these complex AI use cases. Here it's doing full-body pose estimation of all the people. That's a different use case based on depth: this camera is just a single camera, it's not stereoscopic, but based on estimated depth it decides what's foreground and what's background, and blurs the background out. Nice. There are all sorts of use cases.

Has Synaptics been doing SoCs for a long time? Yes; this actually dates back to the acquisition of Marvell's multimedia SoCs. They'd been doing it since the early 2010s, and we've continued; we acquired it in 2017. We're now on our sixth generation, which is the 680 and the 640 that you're seeing. Our main differentiator, besides the full video, decode, and display pipeline, is the AI capabilities.

And with this Banana Pi right there, is there a big community? If you can grab it... is it this one? Is there a big community? This one has just been launched, so it's still in the ramp phase, but there is a different version of the Banana Pi which is more or less the same thing. This one is actually a Baidu board. A Baidu board? Yes, Baidu. It's the same chip, the 680, but all of Baidu's AI models have been ported to it, and it's being distributed across all sorts of universities in China. Is it 96Boards, or is this its own form factor? It's just similar to a Raspberry Pi. Similar to a Raspberry Pi, exactly; it's slightly different because it's designed by Baidu. And it's got the same ARM Cortex-A73 and everything? That's correct. And the A55? The A55 is the 640. On the big one, do you also have small cores? No, only big. But the big one is scalable, so it's still not using too much power? Scalable; everything can be turned on and off individually.

The other advantage of our AI engine is that it runs fully within the trusted execution environment, so it's fully secure. Nobody can ever touch the data or the model, and you can run models on any type of content. Think of the set-top-box world: an operator could launch a service where you run object detection on any content, whether it's broadcast, YouTube, Netflix, Amazon; it doesn't matter, because you're never breaking any of the DRM or security protocols.

He's asking: what's the platform called? This board is called Banana Pi. Our actual SoC is the VS680, which is the quad A73, and the lower-end device is the VS640, which is a quad A55. Both have NPU engines: the NPU on the 680 is seven TOPS, and the NPU on the 640 is 1.5 TOPS.
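To make the background-blur demo concrete, here is a minimal sketch of the compositing step, assuming the per-pixel depth map comes from a monocular depth network of the kind the NPU would run; the blur radii and the foreground cutoff are illustrative, and none of this is Synaptics' actual pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_background(frame: np.ndarray, depth: np.ndarray, near: float):
    """Blur pixels whose estimated depth is beyond `near`.

    frame: H x W grayscale image (values 0..1).
    depth: H x W per-pixel depth from a monocular depth model (larger = farther).
    near:  depth cutoff separating foreground from background.
    """
    blurred = gaussian_filter(frame, sigma=6)    # heavily blurred copy
    fg = (depth <= near).astype(float)           # 1 where the subject is
    fg = gaussian_filter(fg, sigma=3)            # feather the matte edges
    return fg * frame + (1.0 - fg) * blurred     # sharp subject, soft background
```

The interesting part is upstream of this function: a single camera has no stereo disparity, so the depth map itself has to be inferred by a neural network, which is exactly the kind of per-frame workload a multi-TOPS NPU makes feasible.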
Are you in many devices, like, when I see this kind of device, will it be inside? Yep, this is a set-top box. This one is actually a splitter, a video splitter or video-wall device: it takes six inputs, with one 680 powering it. This is an enterprise video conferencing device; it might even be 4K. We are in smart displays, we are in appliances. Again, it's a combination of using the display and video, but also the AI, to make decisions about temperature, ingredients, the camera, and things like that. So it comes from the Marvell ARMADA line? That is exactly right; it's an evolution of that. And the part that goes into a different market stays with Marvell? That's correct; we took the video side of it. Basically, the multimedia processors came over to Synaptics.

And how good are your video processors? Can you do 8K, 4K? The 680 can decode two 4Kp60 streams simultaneously, or many more at lower resolutions. The 640 can do one 4Kp60 plus one 1080p60. Do you have a lot of partners working on different projects with this? Yes, both products are already in mass production, shipping in millions of units today. Millions?

And what do we see here? Can you hold this? What are these different boards? These are just different boards with different connectivity options. And they're all using the same chip? All using the same chip. And there's a bigger one there? Correct. And this one is more for development? This one, I think, is meant more for the kind of video soundbar that you see, supporting dual cameras and even more interfaces. Do you support Linux? Linux in two flavors, both Ubuntu and Yocto, as well as Android AOSP, but also the Android TV flavor. What kinds of customers use the Ubuntu and the Yocto, and what for? Appliances, or security? In the industrial space it's mainly Yocto; in the consumer space it's primarily Android; and in this kind of enterprise space it's a mix, some Android, some not. You also have smaller ones? We also have video phones, IP phones that support video instead of just audio. Those are also Android-based, but the simpler ones can be Yocto.

And here, when I look here, is it this one? This is just used as one of the inputs for the splitter, because it can take six inputs, so if I show this... you can see I can show six inputs at one time. We've got six inputs coming in: two from that camera you see there, one is the AI demo running off the Baidu board, and three are just set-top boxes that have been plugged in. What do we see here, is that a Raspberry Pi? Yeah, it's not one of ours; it's just used as part of this demo. So, lots of projects happening in the embedded world, lots of advancements. Yes, and part of what we're doing here is that we've got some of these SoMs, but we're looking for more partners to build other variants and extend our ecosystem of partners. Cool. All right, thanks a lot. Thank you very much.

All right. And we go around right here to finish the booth tour. Hi. Hello, hi. Please introduce yourself. So, I'm Elad Baram, and I run marketing here at Synaptics for low-power Vision AI. So, I see this stuff there; are you in there? This here is an edge SoC named Katana. The defining factor here is power: it's a very, very low-power processor. It has an Arm Cortex-M33 core, a HiFi DSP, and a tiny NPU.
And it's actually a microcontroller that is capable of both vision AI and audio AI. Vision and audio AI; what audio AI does it do? Well, it can do sound event detection and things like that. Here in this demo it's showing the vision capabilities: human detection models. This is targeting something like a smart human sensor, for applications like battery-operated devices. Today those are all activated by a motion sensor, but now we want to augment that capability while keeping them battery-operated: saying, hey, this is not just motion; we are seeing a face, we are seeing a person, we are seeing a car, we are seeing a dog. So it's something smarter. And it's very low power: this one works on just a few tens of milliwatts when it's active, and on microwatts when it's not. That sounds crazy. It's much lower than a security camera's power consumption; a regular camera takes about one watt when it's active. And this uses a thousand times less? We are about one-hundredth, so 100 times less, yeah, something like that. So it could be just a small part of a security camera? Yeah, you can use it as a small part of a security camera if you want the camera to be always on. Or you can make it a standalone sensor, a smart sensor, something that senses the environment and sends metadata to the control panel: hey, there was some motion here, we are seeing a person, you should take care of that. It doesn't necessarily send the image out, because sometimes privacy is a concern. One of the advantages of doing the analysis on the edge is that you don't need to send the image out in order to run the inference; everything happens on-device. All right. And is this available in mass production? Yeah, this is available in mass production; it has been for a while, and we have a few designs going on. Yeah, it's pretty exciting. Cool.

All right. And here, the other one is showing face detection. This is a slightly different use case. There is a new category, which we call HPD, human presence detection; sometimes it's actually called user presence detection. The idea here is not the security market, but making consumer electronics smarter: understanding whether there is a user or there is no user. For example, in the laptop case, this is used so the system knows whether the user is engaged with the content or not. If the user is not engaged, it dims the screen, which can save a lot of power. When you walk away, it automatically turns off and locks, and so on. And we believe the laptop is only the first instance of this type of context awareness; it's starting to go to other consumer devices that can get the same benefits. Cool. All right. Thanks a lot. Thank you. Thank you.
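As a closing illustration of the laptop behavior just described, here is a toy sketch of the presence policy: the sensor's person detection drives a dim-then-lock state machine. The thresholds and state names are hypothetical, not the actual product logic.

```python
import time

DIM_AFTER, LOCK_AFTER = 10.0, 30.0   # illustrative thresholds, in seconds

class PresenceMonitor:
    """Toy HPD policy loop: dim, then lock, when no user is detected."""

    def __init__(self):
        self.last_seen = time.monotonic()

    def update(self, user_detected: bool) -> str:
        """Feed one detection result per frame; returns the desired state."""
        now = time.monotonic()
        if user_detected:
            self.last_seen = now
            return "active"            # user engaged: full brightness, unlocked
        away = now - self.last_seen
        if away >= LOCK_AFTER:
            return "locked"            # walk-away: turn off the display and lock
        if away >= DIM_AFTER:
            return "dimmed"            # briefly absent: dim to save power
        return "active"
```

The privacy property claimed above carries over naturally: only the boolean detection result leaves the sensor, never the image, so the policy layer has nothing sensitive to leak.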