So we're here at Embedded World 2019 and, hi, who are you? Hi, my name's Graham Clarke. Welcome to the Renesas booth at Embedded World. Here we're introducing a wealth of new Arm solutions for various applications. So first of all, let me introduce you to my colleague Stefan, who will talk a little bit about our RZ family.

Hi. Yeah, hello. My name is Stefan, and I'm responsible for business development for RZ, the MPUs at Renesas, and we have really great technology. So this is the RZ/A2 MPU, basically. It covers class 2 of our e-AI roadmap.

What is the e-AI roadmap? Yeah, so e-AI stands for embedded artificial intelligence. This means AI on the node, as opposed to AI in the cloud. Running it all on the node gives you a lot of advantages: no delay, you keep all the data on the node, it's very reliable. And this is actually our differentiation, because we are also very power efficient here.

So you're doing AI on the microcontroller? Yeah, on the MPU, on the node itself. We don't have to send the data or the information to the cloud, so we're very fast, and our special differentiation is that we have the lowest power consumption doing embedded AI on the node itself.

So this is right here, I look and it says... Is it a small nanometer process, or how do you get the low power consumption? Yeah, very good question. It's actually our differentiation: we have the DRP on board, the dynamically reconfigurable processor. We have a separate image here. Basically, the DRP gives you hardware performance, but you configure it in software. So you have software flexibility, you program in C code, but you get hardware performance. And what this allows you to do is implement high-performance libraries in software for things like image preprocessing. So you can use the DRP in e-AI nodes to do image preprocessing.

We have an example here. In this version of the DRP we have six tiles. These tiles work somewhat like a GPU: you can do parallel processing on image data. So in this case you have six tiles working on a median filter, six tiles working on Canny edge detection, six tiles working on hysteresis. Each of these functions runs 20, 30, 40 times faster on the DRP tiles compared to software on the CPU. And the second advantage is that you can reload very, very fast. So on the one hand the library function is very fast, and on the other you can reload fast. Together that gives you very powerful image preprocessing functionality.

So is the DRP your invention, or is it similar to an FPGA or a DSP or something? Yeah, it has a taste of FPGA, but it's not exactly an FPGA, because if you have an FPGA and you bring in multiple applications, the FPGA quickly gets very, very expensive. We can actually reprogram the DRP on the fly, in nanoseconds, and this makes it much more cost efficient, but also more efficient in terms of power.

So it's a small block on the SoC? It's very small, very powerful, and very efficient, and it's Renesas proprietary technology. It was brought over from NEC, so it has already been proven in the market with a big Japanese customer over many years. This is a very powerful technology, and it's actually the central technology of our full embedded AI roadmap.

So you do neural networks on this? At this point we are at class 2. That means we do image preprocessing on the DRP, then hand the preprocessed image over to the Cortex-A9, where we run pre-trained AI models, from TensorFlow Lite for instance. So at this point it's a hybrid architecture, a hybrid strategy: preprocessing on the DRP, and the AI framework running on the CPU.
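For readers who want to see the shape of that preprocessing chain in code, here is a minimal sketch in plain Python and OpenCV. This is not the Renesas DRP library; the file names, kernel size, and thresholds are illustrative assumptions. It only shows the median filter and Canny edge detection (whose final stage is hysteresis thresholding) that the DRP tiles run in hardware.

    # Illustrative only: the same preprocessing chain the DRP tiles
    # accelerate, written against plain OpenCV on a CPU. File names,
    # kernel size, and thresholds are placeholder assumptions.
    import cv2

    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # one camera frame

    # Median filter: removes salt-and-pepper noise before edge detection.
    denoised = cv2.medianBlur(frame, 5)

    # Canny edge detection: gradients, non-maximum suppression, and
    # finally hysteresis thresholding between the low and high values.
    edges = cv2.Canny(denoised, threshold1=50, threshold2=150)

    cv2.imwrite("edges.png", edges)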
Going forward to class 4, which we will present next year, we then also have the AI framework itself running on a different version of the DRP. That is the next device in our roadmap, and then the AI framework also runs 20 to 30 times more efficiently and faster.

So the next one will have a different Cortex-A, something else? Yeah, a different core, higher speed; that will be class 3. Today we have class 2, and here the DRP focuses mainly on image preprocessing.

Is it here? Yeah, I can show you the demo. So where is the chip? The chip is actually here: this is the RZ/A2M chip sitting on the MPU board. Here we have a MIPI camera connected, the MPU board sits on a base board, and here we have a normal TFT display. What happens is that we can see the six tiles of the DRP, and in this case each DRP tile runs a different function, a different library. What this allows you to do is compare the CPU against the DRP. So in this case we run on the CPU, and the Sobel filter here requires 8,000 microseconds. Then we can switch to DRP mode, and now we are much, much faster, at 900 microseconds. So the CPU takes roughly 10 times longer for the same function. As I said, the DRP is very powerful; it can do parallel processing. And we can also switch through multiple example functions here.

And this is a really nice example too. What you can see here in CPU mode is not very smooth, because the Sobel filter on the CPU is actually too slow. It's a little bit blurry, the lines are not really fine. If you switch the same exact demo to DRP mode, the lines become very, very smooth. So the DRP can give you very power-efficient image preprocessing for things like edge detection and corner detection, all the stuff that you need in embedded artificial intelligence frameworks.
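As a rough sense of what the CPU side of that comparison looks like in code, here is a small timing sketch, assuming Python with OpenCV and a synthetic 640x480 frame. It measures a software Sobel filter the way the demo reports roughly 8,000 microseconds on the CPU; the DRP path requires Renesas' own tooling and is not reproduced here.

    # Timing a software Sobel filter on the CPU, mirroring the CPU half
    # of the demo's CPU-versus-DRP comparison. Frame size is a stand-in.
    import time

    import cv2
    import numpy as np

    frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

    start = time.perf_counter()
    gx = cv2.Sobel(frame, cv2.CV_16S, 1, 0, ksize=3)  # horizontal gradients
    gy = cv2.Sobel(frame, cv2.CV_16S, 0, 1, ksize=3)  # vertical gradients
    edges = cv2.addWeighted(cv2.convertScaleAbs(gx), 0.5,
                            cv2.convertScaleAbs(gy), 0.5, 0)
    elapsed_us = (time.perf_counter() - start) * 1e6

    print(f"Sobel on CPU: {elapsed_us:.0f} microseconds")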
So is this a big launch, an important launch here at Embedded World? The RZ/A2M was actually available before, but we are improving the ecosystem, so the ecosystem becomes ever more complete. We have more demos for things like iris detection and fingerprint detection, and in that sense we are launching the ecosystem, so to speak.

And RZ is Renesas for Arm? Yes, RZ is the family of Renesas Arm MPUs. Renesas was in the MPU business before with the SH family, which was based on proprietary cores. But if you want a high-performance MPU with an Arm core, that's the RZ family. Here we actually have four sub-families: RZ/A, which is sort of a mid-range application family; RZ/G for high-end industrial; RZ/T for motor control; and RZ/N for networking.

Did you show everything here at the booth? Yes, we have RZ/G MPUs on the other side; I would hand over to Christoph Adam for that one. But also very interestingly, we actually have five partners at the show running this exact demo. So many of our partners are showing it, which means our MPU and our solution are already very successful and highly visible at the show.

And A2M stands for? The R stands for Renesas and the Z for zenith, as in the peak of a mountain, so it means peak performance. A means application, and the 2 is simply the second generation of the family. So you get peak performance in an application processor, and now we have the second generation, where we included the DRP. And the M stands for? M is mid-range, so to speak, within that family, because we may add more members to the family.

And the target markets are something like this? It depends. For the RZ/A1, the target market was mainly industrial HMIs in the mid-range. We've been massively successful in mid-range HMI and the white goods arena: ovens and other kitchen appliances, things like kitchen robots. That was the RZ/A1 family, and for the last couple of years we've been very successful in that market. Pardon? Is it Arm? Yeah, yeah, a Cortex-A9. The RZ/A2M family goes more into that e-AI space: taking in images from cameras, performing image preprocessing, and then running TensorFlow Lite networks, so AI, on the CPU. It's currently a hybrid AI approach, which is very versatile and actually very efficient.

So AI is a big deal and everybody's talking about it, but it's a big deal that is coming to the embedded world, right? Right. So this is the way you bring it in? Yeah, of course everybody talks about artificial intelligence today. As said, our specific strength and positioning is embedded AI, very much focused on the node, and our very own strength is really to deliver very high AI performance at the lowest power consumption. Many of our competitors in the AI space, once they go to high performance on the AI side, end up at something like 10 watts or 50 watts, so very high power consumption. We really stay in the low single digits, so 2 watts, 3 watts, even 1 watt, while running all the AI frameworks. We are very high performance in terms of AI, but very low on power consumption, and that combination is great for us and for our customers.

And having the AI at the edge is important because you have less latency or something? Yeah, exactly: less latency, and you can keep the data on the node, so it also relates to security topics. Bandwidth? Bandwidth, exactly. So there are many, many topics that give you an edge if you run it on the node.
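To make the hybrid flow concrete, here is a minimal sketch of the CPU half: running a pre-trained TensorFlow Lite model on an already-preprocessed frame, the part the RZ/A2M executes on the Cortex-A9 after the DRP has done the image preprocessing. The model file and the zeroed input frame are placeholder assumptions, not a Renesas deliverable.

    # Minimal TensorFlow Lite inference on the application CPU, using a
    # pre-trained model. On the RZ/A2M the input frame would come from
    # the DRP's image preprocessing; here a zeroed array stands in.
    import numpy as np
    import tflite_runtime.interpreter as tflite

    interpreter = tflite.Interpreter(model_path="model.tflite")  # placeholder
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Stand-in for a DRP-preprocessed frame, shaped for the model input.
    frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

    interpreter.set_tensor(input_details[0]["index"], frame)
    interpreter.invoke()
    prediction = interpreter.get_tensor(output_details[0]["index"])
    print(prediction)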