And we're live. Hi, welcome to Embedded World 2021. And yes, as you can see, it's just me here, no people around me, more or less, but at least some in front of the camera, I hope. So yeah, welcome. Maybe you've seen our video from last year, which was recorded right at Embedded World, and today we're just broadcasting from home. But nevertheless, we have some exciting news, of course. Last year, we talked about a possibly upcoming new chip from NXP, the i.MX 8M Plus. And we actually have it, because we have been partners of NXP for a long time, so we had early access to it, and we already created the first SOM. I'll just quickly show you a picture of it. Here you go. So here we have the first dev kit using, here on the other side, the i.MX 8M Plus. And the beauty of it is not only that it has a lot of features, of course, NXP always brings a lot of features, but that it has an NPU, a neural processing unit. That's dedicated hardware for artificial intelligence, and more specifically for deep neural networks. So a really nice chip. It does about 2.3 TOPS, tera operations per second. What does that mean? Maybe we can talk about that in a minute. Roughly, it describes how fast the chip can run inference. There are other chips which are slightly faster, but maybe we can also talk about which one is better, or when to use which one. But I really like this one. I fell in love with this chip because NXP did a tremendous job in integrating the software side as well. So it's not only the hardware, it's also the software. And I think for development it's important to have a well-rounded package, not just super-boosted hardware, but something that's actually feasible to work with. So this is a really important new chip for the embedded world, you would say, because it's really adding hardware AI to a long-term-support platform, maybe ten years of support. Exactly. That is a huge difference.
So there are of course some other chips out there already. Maybe the best known would be the Google Coral board, which actually has four TOPS if I'm not mistaken. That was the first Google dev board, it came out in the beginning of 2019. And it is a really good board, to be honest. But it was more like a desktop solution; it was not really meant for industrial purposes. And of course there are also companies building dedicated chips, for example for the automotive industry, just for Tesla or just for VW. Nvidia is doing big chips also. Yeah, Nvidia has mainly the Jetson series, which is also a really nice desktop series, to be honest. But again, there's a difference between a desktop series and the one we are offering. What most often happens is that people start off development with a Jetson board, because it's really intuitive: it just uses the GPU. And if you work with TensorFlow, for example, back in the day that was TensorFlow-GPU, nowadays it's just TensorFlow 2, it can use Nvidia's CUDA drivers. And if you're familiar with that, then you can just continue as usual and you directly have an embedded board. But there are some disadvantages: very high power consumption, and it's not really an industrial board. It's actually better than my laptop, to be honest, so it's really nice to work on. But again, for an industrial application setting it's not optimal. So a lot of people start off with that, and then they choose a targeted industrial version, as for example, pointing in this direction, the Pollux kit. And that is a nice quad-core Cortex-A53 plus the NPU, something like that. And it's also manufactured at a process node with good low power consumption, I think 14 nanometers. Exactly. And for the dev kit, of course, we put it on a carrier board which has all the interfaces, whatever you need there.
I should know that by heart, but to be honest, I don't. But anything you need for a dev board is on there. And of course, what we at Phytec normally do is, you can come as a customer and say: this kit is great, but to be honest, I only need I²C or something, and I need a camera input or something like that, but I also need the NPU, and maybe I also use an ISP. And that's perfect, no problem. There's a question: how much clock speed does this board have? Maybe without having to answer it precisely, it would be interesting to know what kind of performance we are talking about. Is it more than enough for, what would you say, 90% of embedded scenarios, or what kind of performance is there? So if we're talking about the AI part, the NPU, it's 2.3 TOPS. And what does 2.3 TOPS mean in practice? What we tested, for example, was a truncated ResNet, and that ran at 17 milliseconds per inference, so you get roughly 60 frames per second, which is really nice. And MobileNet runs even faster, I think it does three milliseconds. That was tested by NXP, and I do believe they did the testing correctly. And the point is, of course, for some customers that is not enough. We have customers saying: we have high-speed cameras, we need much more. But for a general application, any vision application where you have a high input density, you can use this chip. It's fine if your results are there within, say, under a second, right? Of course, there's a difference if you want to do, for example, autonomous driving. For autonomous driving you need a high-resolution image, most likely 360 degrees, or several camera inputs. And you don't only need object detection, you need pixel-exact observations, right? And that takes a lot more computational power.
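To put the quoted latencies in perspective, the frame-rate math is simple. A minimal sketch, using the 17 ms and roughly 3 ms figures mentioned above:

```python
# Convert a measured per-frame inference latency into throughput (frames/second).
def fps_from_latency(latency_ms: float) -> float:
    return 1000.0 / latency_ms

print(round(fps_from_latency(17.0), 1))  # truncated ResNet at 17 ms → 58.8 FPS
print(round(fps_from_latency(3.0), 1))   # MobileNet at ~3 ms → 333.3 FPS
```

So 17 ms per frame is indeed just under the 60 FPS mark quoted in the conversation.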
And therefore I would say, definitely, this chip is maybe not in the range where you want to go for that. Of course, the automotive industry has its own chips, like Mobileye and so on; for Tesla, I think they split off again. Anyhow, there are dedicated chips for the automotive industry with a lot of different targeted functions. But the 2.3 TOPS is enough if you have a general task like facial recognition, if you just want to watch a machine, or especially if you look at machinery data. If you have a time-series data stream, that is easy; that's not much input. Think of it like this: the ResNet-50 I was talking about, running at 17 milliseconds, has an input of around 224 by 224 pixels. Those are just data points, right? And you can imagine how many sensors you would have to hook up to a machine, for predictive maintenance for example, to produce the same amount of data points. So depending on where you go, as always, it's a question of: what is your problem, what is your target, and how fast does it have to be? For example, let's say you want face recognition: you want to detect whether a person is allowed to go into a room, so you link a camera and a board to your door-opening mechanism. And the question is: how fast do you need a response? As I said, it's under a second. The person just stands there and has to wait, and it's not only the recognition; there's also the logic behind it: is that the right person, do some other math as well. So let's say you need 70, 80, 90 milliseconds, up to roughly a second. Is that too long? You have to answer that for your application. If you say: no, no, the door has to open within 50 milliseconds, okay, then you have to tune something, either the algorithm, or you have to take a different chip.
In this case, mostly, you can tune your algorithm as well: faster input, maybe a lower resolution of the image, and so on. So you can play with a lot of variables here. But that is the point: it's not one-size-fits-all. You have a problem, you build a solution, and then you have to check: is it fast enough or not? And for time-critical things, you may need to go faster. As I said, for autonomous driving you probably need a result in under a millisecond, right? Then it's really time-critical, and then, of course, I would say: either use a lot of those chips, or don't use that chip at all and use a dedicated one. But I talk to a lot of customers in a wide range of fields, and what I see is that in 90% of the cases this chip is just perfectly suited for their solutions. And then there are, of course, customers who tell me: sorry, we have high-speed cameras. For example, I had one customer with a fast-rotating part where they had to read barcodes, and that needed, I don't know, 200 frames per second minimum or something like that. And then, of course, you run into another problem, because you need a dedicated camera, right? Normal cameras, if you take just a webcam, or actually Phytec also sells cameras, so I should know how many FPS they can do, but let's say they normally do around 60 FPS and then they're kind of capped; some Phytec cameras probably do more FPS, I should check that. So there's another question: how many images can your camera deliver? You have an algorithm that might be a bottleneck, you have a camera that might be a bottleneck, the chip might be the bottleneck, and then the memory might also be a bottleneck: how large can an image be and still fit into your memory, for example?
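The bottleneck reasoning above can be put in a back-of-the-envelope calculation. The numbers below (a 60 FPS camera, the 17 ms and 3 ms latencies, the 200 FPS barcode requirement) are the ones mentioned in the discussion:

```python
# The end-to-end frame rate is capped by the slowest stage of the pipeline.
def pipeline_fps(camera_fps: float, inference_ms: float, postprocess_ms: float = 0.0) -> float:
    compute_fps = 1000.0 / (inference_ms + postprocess_ms)
    return min(camera_fps, compute_fps)

print(pipeline_fps(60.0, 3.0))             # → 60.0: camera-limited, MobileNet easily keeps up
print(round(pipeline_fps(200.0, 17.0), 1)) # → 58.8: NPU-limited, the 200 FPS barcode case needs tuning
```

The same kind of estimate applies to memory: the largest image you can process is bounded by what fits in RAM alongside the model, which is a separate check from the FPS budget.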
So it doesn't only depend on the chip; there are a lot of things you have to think about. In one case you can make it work, in another case you can't, and there are always different ways to tackle it. And this chip from NXP has some clear advantages, like long-term availability, for example. We had a nice example: we talked to another vendor from Silicon Valley roughly half a year ago. And it was a really, really good chip, to be honest. And then we asked: okay, how long do you support this chip? And they said: we cannot guarantee more than a year; we don't know if we'll still exist in a year, if Google buys us, if we go bankrupt. They just couldn't tell us. So we said: thanks a lot, it's a great chip, but our customers need us long-term. They don't always need the latest and fastest product; they need a long-lasting product. Think of Deutsche Bahn, for example. Let's say they want to introduce a deep learning algorithm which monitors the train engine and tracks how long it's been running. At the time being, let's say today, they develop that. So today they need a chip that matches this requirement. They don't need a faster and better chip every year; they want this solution implemented now, fitting the purpose, and then it should run for the next, I don't know, 10, 20, preferably 50 years without being touched, right? So that's a huge difference to the consumer market, where you have a turnover of at least every two years: you need to get a new product out, a new chip out, to be faster, and so on. But we work a lot with industrial customers, so mostly that is not the requirement.
The requirement is: we need a chip fitting the solution right now, and this chip should last forever. And there's a question right here: you were talking a little bit about self-driving cars, and somebody is asking whether we can use this in rockets or satellites. What's the limit? Does it tolerate very cold, very hot? This kind of board, the i.MX, is designed for a lot of conditions, no? Yeah, the thing is, NXP and we produce electronics for industrial usage, right? So there's an industrial temperature range as well. And I can tell you, we work together with one of the space agencies, and there actually is Phytec electronics in space, just to say; I think I'm not allowed to say anything more. But yes, we can make it work. Actually, in the background of the video we did last year, people can find it on YouTube, there was a rocket. I think it was an Ariane 5 kind of rocket. It's not SpaceX; unfortunately we're not working with SpaceX, but that much I can say, it's more European. That's too much, I'm telling too much. So the thing is, Phytec is a company whose work you can find in a lot of products, but you will not see us upfront. A lot of, let's say, coffee makers, I don't know; the brand name on the product is someone else's, and we're mostly one layer behind, doing the electronics. And you can find us from A to Z, from really low-level, easily applicable electronics to really specialized electronics. And the key point about Phytec: yes, we are developer and producer, and both, by the way, in Germany, for viewers who don't know. So we develop and produce here in Germany. Exactly. It's a beautiful city.
You should come around when COVID is over and take a look; it's a wonderful city. And exactly, we're an international company, we have bases all over the world, but production and the main development are in Germany. What we're mostly not doing is that somebody comes to us and says: I have this finished design, please just produce it for us. We're more like: okay, you have a problem and some idea of a solution, and we help you create the complete solution from A to Z. Either you go to our website and say: this is a nice product already, I see a finished off-the-shelf solution, and we just help you find the right one; you can ask our FAEs, for example: okay, I have this problem, what would fit best? Or we find that no off-the-shelf solution fits, and then we can create a solution together with you. Clock speed, temperature range, dimensions, any specialization, like, as you say, we need to send this to space, right? Yeah, of course, we can work on this. I know there are magnetic field problems in space, and so on, so we can work with that. That's another question, sorry to interrupt: how many GPIO pins does the board have? And is that relevant, in terms of what people can do with this? You have, what do you call it, a big carrier board that you can plug into, with all the different ports, and then people can make custom end implementations, right? Yes, exactly. And, for example, I'm very sorry to disappoint you, because I'm mainly focused on the AI part, so I'm not too deep into the electronics. But maybe if you just open the Phytec website. Let's have a look. I'll try. Yeah, if you want to follow along, I'll give you directions: you go to phytec.de.
And there you can also switch the translation to English. Just a second. True. In the meantime, I can tell you: of course, if you need any number of I/Os, we can do that for you. This kit is just a development board where you can try out your solution; every option is available on it. But most likely you'll say: I don't need all options, right, I need just a few of them. So we can find this on the website. Yes, maybe I can change it to English. Yes, exactly, there should be a translation to English. Yeah, exactly. And then you go to the i.MX 8, the latest processor technology. You should be able to, yes. Exactly, that should be the Plus there, if you go up there; I'm pointing to the screen, sorry, you probably can't see that. Yeah, exactly. And that should be the i.MX 8M Plus, at the top left. Yes, exactly. And there should be more information; there are a lot of boards, and the technical details are below. It has the Cortex-M7 together with the Cortex-A53. Exactly. And then the NPU, right? And here's all the information. Exactly. You can get development kits, accessories, a lot of stuff. And when you click on software, you can of course get a lot of the latest BSPs. And talking about AI, maybe I can talk about the NXP software a bit, because that's actually what I really like about this. A lot of companies, some do it really well, some not so well, create their own software to bring your models, AI models you've written in TensorFlow or whatever, onto the NPU, or however you want to call it, TPU, MPU, and so on; there are plenty of names, mostly for the same thing.
At some point, you have to get your model onto the board, right? And the board should recognize: hey, I have a TensorFlow model here, I have to do something with that. There are two approaches to this. The, let's say, more classic approach is that companies write their own software, where you then have an SDK or MDK to convert your model into the right format. What happens then is that each layer is converted, for example to 8-bit integer, most likely, plus whatever else is needed so the board can run the model. The problem here is that, except for some companies like NVIDIA, who do a great job here, NVIDIA has been in the AI business for a long time now, they have a lot of money and a lot of people dedicated to that, so their software is relatively up to date. The problem with other companies, other vendors, I don't want to name names, which create their own conversion software, is that they start off with nice software that just works, but they don't attend to it. They don't update it regularly. Maybe they have two or three people working on it, and they have little or no connection to the community, so they don't listen to what developers need. And then suddenly there comes a new layer type you would really like to use, wow, this is the layer I want in my latest model, and most likely it will not be implemented in this MDK, and most likely it will take quite a long time until they update it. I've actually seen MDKs which haven't been updated for a really, really long time. So this is a problem. And also, if something goes wrong with the conversion, there is no community to ask, right? There's only the company to ask.
And then you have to be lucky that on the other side there's a competent person who has time for you, who doesn't have another project at the moment, and so on. You're really bound to this one person on the other side, and to be honest, I don't like this approach. NXP went a different road, which I really like, because they took essentially the native TensorFlow Lite and integrated it into their so-called eIQ library. Of course, in the eIQ library there are some adaptations, but this is based on the Google NN API, the Android Neural Networks API. The NN API is actually used to create an execution pipeline for a model on a mobile phone, on an Android phone, and of course that is always quite up to date. I don't know the details, but most likely NXP changed a few lines there and adapted it to their hardware. But that means: the NN API is always up to date, and TensorFlow Lite is always up to date, which actually has nothing to do with NXP; they only have to change a little bit to track the latest version. So they don't have to worry about model conversion. If you have a problem converting your model to TensorFlow Lite, that is a plain TensorFlow-to-TensorFlow-Lite problem, so you can ask the community: what am I doing wrong here? You don't have to rely on NXP for that. And I think this is good, because NXP is a great silicon producer, but they're not, let's say, the masters of AI, at least not at the moment. They're working on it, they just started that business, so you can't expect them to be the gurus of AI with all the best people. So I think that's a really positive choice: push all the conversion problems to the community, just take the original libraries and support them directly. And they do the same with PyTorch, with ONNX, and so on.
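The workflow described here, a standard TensorFlow-to-TensorFlow-Lite conversion with full-integer quantization so an NPU can run the model, can be sketched roughly like this. This is an illustrative sketch, not Phytec's or NXP's official code; `saved_model_dir` and `rep_data` are placeholder names:

```python
def convert_to_tflite(saved_model_dir, out_path="model_int8.tflite", rep_data=None):
    """Convert a TensorFlow SavedModel to a fully int8-quantized .tflite file."""
    import tensorflow as tf  # imported lazily so the sketch stays self-contained

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    if rep_data is not None:
        # A representative dataset lets the converter calibrate int8 value ranges,
        # which NPUs typically require for full acceleration.
        converter.representative_dataset = rep_data
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        converter.inference_input_type = tf.int8
        converter.inference_output_type = tf.int8
    with open(out_path, "wb") as f:
        f.write(converter.convert())
    return out_path
```

On the board, the resulting `.tflite` file is then loaded with the stock TensorFlow Lite interpreter, and, as described above, any conversion problem is a plain TensorFlow Lite question you can take to the community.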
To be honest, at the moment it's still the alpha BSP. There are some problems with PyTorch, but I know that they're working on this and they're in really close communication with them. So in the next BSP update, around April I think, most likely everything will be running flawlessly. TensorFlow is already running perfectly flawlessly; we tried it and it's really nice. And the thing is, most likely a lot of embedded developers are listening, so you might not know the struggle, but when I came to Phytec, for example: I am a trained electrical engineer, but to be honest, the last time I did anything with driver development was quite some time ago. I was focused on software development, on TensorFlow, on artificial intelligence, and so on. So when I got the NPU and it just said: convert your model to TensorFlow Lite, copy it onto the board and it runs, and it actually worked directly, flawlessly, perfectly, that cut down my development time drastically. Because as a, let's say, non-native Yocto Linux speaker, I was able to get my things running right away without a lot of development effort, and also without needing another person to do the porting. Of course, now that we're working on integrating this further into the Yocto layers, and so on, you do need other people after all. But just for the first start of development, it's perfect. And that's what I really, really like about this: if you have a data science department, or just somebody who knows about machine learning, he doesn't also need to know about embedded Yocto Linux and so on. He just has to know his own field, and he can directly deploy his model. And if you want to adjust something on your side, that's also perfect, because you just get a model file; you don't have to know anything about the model, you don't have to know anything about the training.
Just take the model file, put it on the embedded device, and done. And then you can also put it into your Yocto build or your Dockerfile, or however you want to run it. So it's a really low-effort integration. Yeah. Last year, I did a video at your booth where we also talked about updates and device management and the software lifecycle; there's another video about that. Yeah. And one question here is: which software do we have to use on this board for an autonomous driving car? That's a nice question. To be honest, I don't know exactly what is meant, but there's not just one piece of software for an autonomous car. I'm not an expert there; I'm not working at BMW and not at Tesla, unfortunately, but Phytec is just fine, I'm perfectly happy. I don't know, of course, the detailed steps needed to make your car drive autonomously, but there are several stages to it, right? You probably have a lidar system or a camera system, so you have to get some input and transform it so it can be read by, for example, a neural network. Then you have a neural network, most likely an object detection network, which segments your whole image into pixels and determines what is seen: is there a person, a car, some object, whatever. Then you get an output from your model: a prediction confidence, like 0.9-something, that's a person. Then you have a threshold saying: it's over 0.9, so I should start braking, or something like that. Or you're checking: okay, do I see road markings I can follow, and so on. So it's just not one piece of software.
Most likely, you start developing that in Python, writing a program against a video file on your PC, and see: hey, can I detect the road, the left and right lane, for example, and can I also detect whether there are cars and other things in my video? And when you have that, at a much later stage, you go live to try it out: you put it maybe on an RC car and start driving, or on a really slowly driving car, and see what is detected, but of course with your hands on the steering wheel. So it's probably not like one piece of software where you click 'autonomous driving, please'. It's, of course, a huge setup and development process, and along the way you will decide which software to run. Most likely, at the end, you will code it in C or C++ to make it optimized so it runs really fast and smooth, but along the way, most likely, you will start coding in Python. So, but yeah, before, you were talking about this Google NN API. Does that mean that you have to run Android on the board? Or are you able to just take this and run it on any kind of Linux, whatever you're running on the board? Exactly. I don't know the details here, to which extent NXP adapted the NN API, because it's running on an Arm board but not on Android. But I'm quite certain that it's not a huge change, to be honest, because phones also use Arm processors, and so on. So it will be interesting whether you still need all the security updates and everything that goes into a Linux board, right? But then, on one side, you get all the latest updates that Google would put in there, for example. Exactly. But yeah, of course, you always have to be up to date. That's a good point.
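The thresholding step described earlier, where a detection confidence above 0.9 triggers an action like braking, is plain application logic on top of the model output. A minimal sketch; the label names and the 0.9 threshold are just the example values from the conversation:

```python
# Decide whether to brake based on object-detection output:
# a list of (label, confidence) pairs produced by the model.
def should_brake(detections, threshold=0.9):
    return any(label == "person" and score >= threshold
               for label, score in detections)

print(should_brake([("car", 0.95), ("person", 0.92)]))  # → True
print(should_brake([("car", 0.95), ("person", 0.40)]))  # → False
```

In a real system this decision layer would of course combine many more signals, but the pattern, model output in, thresholded action out, is the same.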
And you would get that with the NXP BSPs, or with the Phytec BSPs, which are based on the NXP BSPs, if you're looking into using the eIQ library, for example. Of course, we can also write a dedicated BSP for you from our side, and then you get our security updates. But yes, you will get the updates. And of course, if you're capable of building your own Linux, you can add your own layers and see what you can do yourself; you get the sources, so you can implement it yourself. Yeah. So yesterday you were in a panel, that's great, about AI at Embedded World. How is this year's Embedded World going, and what was the discussion in the panel? Yeah, that was interesting. It was more about the two directions, right: where does it start, where does it end, where are we going? And actually also a bit philosophical: how can we control it? I don't know if the question came up, but you said before, maybe I'll ask you a bit about the ethics of AI, and so on. These are really interesting points, just looking at the history. We talked about the fact that it's all actually relatively new, while at the same time not that new anymore, right? TensorFlow and Keras came out in 2015. They're both Python libraries for writing artificial neural networks, and they've only been around since 2015. That's six years, right? That's not long; it's a tremendously short time. And the first, let's say, the famous cat detector was created, I think, in 2012. So that's not long ago either, right? Or 2014? I hope I'm not talking nonsense, but anyway, it's easy to Google. It was not really recent, and I think it could actually have been before TensorFlow, so it could be 2014 or even 2012.
Anyhow, when we think about artificial intelligence, we think about cat detection, and so on; it hasn't been around that long. And then look at what has happened since, for example, last year when we talked at Embedded World: there was just the idea that there might be an i.MX 8M Plus with an NPU on it. At that time, you only had, I think, the Movidius sticks, Gyrfalcon was just developing another chip, and you had the Google Coral board; probably a handful of other chips as well, if I'm not mistaken. And the main thing is that most of the NPU chips then, there were more of them, were created for data centers. The data centers, like Azure and AWS, were interested in using those things, also FPGAs, we talked about FPGAs a bit yesterday, to reduce their power consumption and at the same time boost their computational power. So: boost calculation power, reduce electrical power consumption. The first wave of NPUs was all built for data centers. And then, around 2017, the first startups appeared, and in 2018 and 2019 we saw the first actual releases. Suddenly there were NPU chips for developers, the first developer boards. So you and me, we could both, let's say, start doing NPU-boosted AI on an embedded platform. And that was actually quite novel. And from that moment, from 2019 to now, we have, as I said, over 100 vendors; of course, not all are producing for developers, it's again split between data centers and development platforms. But we have much more competition here, and that in roughly two years, right? And of course, probably not all will survive.
We have already seen some companies going down and others starting up, and so on. But we see some consistency here, and the performance is getting better and better every time. And it's not stopping, it's just gaining traction, and I'm really looking forward to what's coming in the next years. In the beginning, the focus was only on CNNs, so only vision applications. I have seen a lot of research on RNNs and transformer networks. RNNs are for time-series analysis, and so far I haven't seen any commercial chip out there supporting RNNs; correct me if I'm wrong. And also transformers, I haven't seen that yet. Transformers are really interesting for natural language processing. So that will definitely be a huge topic in the future, because, I mean, we all want the Babel fish, right? I don't know if it's the same in English: don't panic. You want a direct translation in your ear, and for that you want an embedded device which runs autonomously and does real-time translation. At the moment, there are some devices already online; what I think most of them do is record an input sentence and send it somewhere, like Alexa does as well: they send it into the cloud, the analysis happens there, and the results are sent back. And of course, that takes time, right? And it's always a question: how long can the sentence be that you record? How much data can you send up? Can you already pre-process something? But if you can put all of this, the whole natural language processing, but also speech synthesis, the creation of speech, into the embedded hardware, right there where you need it, you will be much faster. You remove a huge bottleneck of sending things somewhere and back. So many things can happen while sending it to a cloud; there are benefits, of course, but also server-side issues, right?
There are definitely some bottlenecks there, so if you can get rid of them, perfect. Therefore we will definitely see more action, or let's say more research, and also chips dedicated to transformer networks. And as I said yesterday... sorry, you wanted to say something, your microphone might be off.

Sorry, I muted myself. The panelists yesterday were one person from Microsoft and one from NVIDIA, right? So maybe an optimal solution could also be hybrid: part on the chip, part in the cloud, depending on what you need.

That is done at the moment. For example, Alexa listens on-device for the wake word (sorry to everybody whose Alexa just turned on); that keyword detection is of course on-board processing. You can already do that on embedded hardware; the problem is that you don't have any dedicated hardware at the moment to boost it. You could of course implement it on an FPGA or something like that. Amazon has the capabilities for that; they have the manpower and the budget. But if you get that into an embedded chip that is as easily accessible as, for example, the one we are using, the Vivante NPU in the NXP board (I think it's based on the Vivante VIP8000, an updated version of it), if you have the same easy access for language models, where you just take your model, put it there, and it's recognized and accelerated, that's a different story. Because then suddenly we as developers can build this too, even if our name is not Amazon or Google. Those companies can do it, of course; they have huge budgets, huge manpower, and brilliant people working there, so they will find a solution. But it's not always worth our time; let's say we put much more time into it.
We would lose a lot of other development. You have other customers you have to work for, and only limited time, effort, and money. As soon as we get a chip dedicated to that, reducing development time, that would be awesome, and you will see so much more interesting speech recognition, speech synthesis, et cetera. I'm really looking forward to that.

In my comments I have, I think, the Indian Elon Musk asking: he wants to start a private rocket company in India; will you make the flight computer for his rocket? Just to get an idea, what are the requirements to work with Phytec? Is there a minimum order? How does it work to get the kind of support that might be needed for more advanced stuff?

Yeah, it's a good question. We earn our money with series production, so what interests us most is something like 1,000 pieces per year, up to 100K or 200K per year; that would be perfect. But for development it depends. If you say you only want one piece, then it has to be a really interesting development, because we have to put a lot of time into it and will not make money later with a series, since there might not be a series, right? But if the development can be reused, or if your company says, I'll do this for a rocket but could also use it in a different environment and sell many more pieces, that is also really interesting. And sometimes we talk about shared development: if we think, this is a really interesting topic and knowledge we need as well, then we do a shared development and can of course share the cost, et cetera. So there are different solutions.
And maybe we will also notice: hey, Phytec already has something off the shelf that is suitable for you. Perfect, that might be much cheaper for you, right? Maybe you just need an adapted baseboard and say, I don't need all this kit stuff, I just need, I don't know, hundreds of I/Os, that's it. Then we just create your baseboard with hundreds of I/Os, and that might actually be really cheap, and we help you adapt whatever you need. We also help on the software side. Of course, you know your problem best: if you have problem X, you probably know how to solve it, but we know how to get your solution onto the embedded hardware. We can help you with the Yocto Linux part, help you get the right libraries onto the board, all that kind of stuff. Most likely we will not write the whole program for you, but we can help. And we also have partners, of course, if you are new to artificial intelligence but you know you have a problem and need to solve it. There are two ways. One way: I help you, I enable you to understand how artificial intelligence works. I give workshops, and I've had really nice feedback; people said, we should have listened to you half a year earlier, we would have saved so much time. So that is one way: I help you along and tell you what steps to take. The other way, if you tell me, no, we don't have time for that, we have a problem and we need a solution now: we also have partners, at the moment three companies, startups and not-quite-startups-anymore, spread over Germany, among them Numitri and Neuroforge, and they are actually capable of solving any AI problem we throw their way.
So you could just say, I need an AI-guided rocket system, and together we could present it to our partners and find the right one for you. Then you are set as well, right? Or just for a part of your problem: say you know all the rest, but you don't know how to solve the AI part, detecting some objects, for instance. Then either we help you with that, or we bring our partners in and we do it together. So there are many options.

So your partner companies could do something like the whole rocket flight software, or, the other example, a self-driving car? If it's a more specific need, that's possible?

Exactly, for small or large parts. We can help ourselves; we have knowledge in this area too, but let's say our data science department is smaller, so I mostly help customers understand the problem and tell them what to do. A lot of companies have very good engineers who are just a bit afraid to take the step into the unknown: oh, AI, it sounds difficult. But to be honest, if you're knowledgeable and a good engineer, I can definitely teach you how to work with artificial intelligence; it will of course take some time. And if you say you don't have that time, maybe take both routes: your engineers attend the workshop, and in parallel one of our partners boosts the process so you get on the road a bit faster. That might be a solution as well, right? It really depends on how fast you want to be and how sophisticated your problem is. There are easy problems and there are very sophisticated ones.
One thing I'm wondering: the whole concept of AI in the embedded world is, as you were saying, a fairly modern thing, 2019-ish, and the trend has been growing a lot. Over this past strange year, have you noticed even more growth in this hype? Is it getting bigger even faster?

You say hype; I'm quite certain it's not a hype anymore. There was an analysis from Boston Consulting or somewhere, and apparently embedded AI alone is growing 18% to 20% per year at the moment. Those are solid numbers. General AI you cannot think away anymore; you use it constantly. If you use your phone, any service, Facebook, whatever, you are monitored and analyzed by AI; every ad, everything you get presented, is recommended by AI. That is done. And if you are in development and take the step to AI, you will notice how much effort falls away. I remember at the beginning people would call it cheating, because before, you had to write all the if-else and switch cases: if this happens, then that, and sometimes it gets really complicated, right? An image detection system written by hand? Oh my goodness, that can take ages. And suddenly there was something that does it in a snap, and does it really well. Of course it took time to develop; it's not literally a snap. But nowadays it's very easy and very stable, so AI is definitely here to stay, that's for sure. And then the next step: at least in the AI community, we now understand AI really well. On the x86 basis, on your desktop PC, a lot of things can be done.
So the natural next step is: okay, we understand our baseline, what is the next frontier? Let's take this really complicated, sophisticated algorithm and try to get it onto an embedded device. That is the next evolutionary step: take this massive, heavy network, make it smaller and slimmer, get it onto a device, and run it really fast despite limited computation capabilities. That is one of the next goals; there are several roads, of course, but this is one of them. So it's a natural development, to be honest. And notice that in the last years other ideas have come up too. Suddenly people talk about the power consumption of training a network. At the beginning nobody asked, how much power do I need? How much power does a data center actually consume? Nowadays we know: wait a second, this is a lot of energy; think about the ecological footprint of AI. Two years back nobody thought about this, because there were still more basic problems to solve: how do I get this object detection more accurate, more stable? We have that now. Of course there are still a lot of unsolved problems, but we are in a fairly stable situation, so now we look further. How can we reduce the carbon footprint of AI? How can we get a model small and slim enough for an embedded device? This is a completely different, more advanced way of thinking about AI. It just keeps moving forward, and I'm really looking forward to the next few years of development. And as I also said in the panel, humans tend to think linearly: whatever happened in the last two years, we expect the same for the next two. And that is of course wrong.
It's not a linear trend; it's, I don't know, maybe not exponential, but at least quadratic or some other nonlinear function. So it's really difficult to say how fast and where it's going.

Nice. This video is already longer than our video from last year; people can check that one out if they want that kind of introduction from the show floor. This is a different kind of style, right? And there is one question here asking: are you doing some videos, are you helping teach people, is there something happening remotely in this strange time, content like this?

Interesting. Yeah, sure, I have a Medium page; it's mostly on Towards Data Science. So check out Jan Weert on Medium. There are not many articles out there yet; I'm actually writing one right now, I was typing just before we started this interview. I only have seven followers, so let's see. Yes, here it should be, exactly, you can see it: my Medium page, where I've posted some articles on how to do certain things. I also saw somebody was interested in how to get this software running on a Windows PC. Mostly, of course, my tip for you is: try to learn Linux. It's not that difficult, actually; to be honest, I also started working in Linux not that long ago. But if you pursue AI, it's so much simpler, because everything is geared towards Linux. Of course it's also possible with Windows. Maybe I can give you a hint: first go and install Anaconda. Look for Anaconda; it's an environment manager, kind of a front end for all this stuff. With it you can install Python, and then you can create environments.
Let's say you call it my-ai-environment. In there, via pip or conda, you can install all the libraries you need, and then you can work on your Windows PC. And for the board, for the Pollux kit (I'll show it again because it's so nice), you create your model and then you just have to convert it to TensorFlow Lite. On my Medium page you'll actually find a really detailed explanation of how to do that, and people liked it a lot; it's very simple. As soon as you have it converted, it's just a file. You copy that to your embedded hardware and it runs there. So you can train your model on Windows, or for example on Azure, AWS, or Kaggle, whatever platform you prefer. Once you have the trained model, it's just a model.tflite file that you copy over. You do have to load it in a specific way; on the Medium page there are other articles showing that as well. For example, I describe a celebrity face match demo, a fun thing where you can see which celebrity you look like, and there you see how the model is loaded. You can just script-kiddie copy-paste that, easy going, and start learning from there. So yes, there is a split between the embedded side and the Windows side; they are actually completely separate.

One of the great things about the NXP solutions, and some others like the STM32MP1, is that the idea is also to offer good value for money, right? When people go to a big production run, these boards can be a very affordable solution that doesn't use too much power and has long-term support. That's a bit the idea, no?

I mean, we produce for professionals, as I said before; our boards are in space.
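As an aside for readers, the train-convert-copy workflow described a moment ago can be sketched roughly like this in Python. This is a minimal sketch under assumptions, not code from the talk or from the Medium articles: the file names, the preprocess helper, and the function names are illustrative, using the standard TensorFlow 2 APIs (tf.lite.TFLiteConverter, tf.lite.Interpreter).

```python
# Minimal sketch: convert a trained Keras model to TensorFlow Lite and
# run the resulting .tflite file with the TFLite interpreter.
# File names and helper functions are illustrative, not from the interview.
import numpy as np


def preprocess(image, input_shape):
    """Scale 8-bit pixel values to [0, 1] and add a batch dimension."""
    img = np.asarray(image, dtype=np.float32) / 255.0
    return img.reshape(1, *input_shape)


def convert_to_tflite(keras_model, out_path="model.tflite"):
    """Convert a trained tf.keras model to a .tflite flatbuffer file."""
    import tensorflow as tf  # imported lazily; preprocess() works without TF
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    with open(out_path, "wb") as f:
        f.write(converter.convert())


def run_tflite(model_path, input_tensor):
    """Load a .tflite file and run one inference, as you would on the board."""
    import tensorflow as tf
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], input_tensor.astype(inp["dtype"]))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])
```

The plain CPU interpreter shown here is the portable baseline; on the embedded side, the same model.tflite file would typically be opened through whatever NPU delegate the board support package provides.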
Our boards are in the coffee machines of huge coffee vendors. Our boards are in so many products from companies you might not expect; just look around. We build highly professional embedded hardware for mass production, to industrial standards, and security standards too, which is actually quite important if you think about AI. We can use dedicated TPM chips, for example, and burn your keys in there so that nobody else can access them. And as I said before, it's development and production in Germany. To be honest, there is an advantage to that. First of all, we use different processes than many Chinese factories do, so our hardware actually lasts longer; that's just a fact. But it's not only that; it's also the close communication. Of course, producing in Germany means the cost of labor is a bit higher than in China, "a bit" in quotation marks, but it's not only about cost. With overseas production you have very long delivery times and often difficult communication; if you don't have a Chinese native speaker, it can be quite hard to communicate with a Chinese silicon vendor.

And this product starts at 44 euros, it says.

Yes, exactly, the i.MX 8M module, for example. And it depends, of course, on how many you order: if you take a series of, let's say, a thousand per year, the price varies again. So please contact us about that; you don't have to buy anything, you can just ask how much it is and we'll give you an answer, don't worry. It really depends on how many you need, whether you need a baseboard with it, and so on; prices vary as well, right? Cool. But the thing is, a lot of customers really like working with us; I think we have something like a 98 percent retention rate, or whatever it's called.
When people come to us and talk to us, they stick with us, because they notice the communication is easy and fast. We can do rapid prototyping: after a week, and I don't want to promise too much, a week, maybe two, they get their first build, at least much faster than from China. They send it back quickly, we have a quick chat, they say, ah, we need it slightly different. Communication is much easier and development time is drastically shorter than if you work with a Chinese vendor, for example. So there are benefits. Just chat us up and we can have an informal talk about whatever you need, your problems; I can bring other experts into the discussion. There's no obligation to buy anything first, of course. So just hit us up.

Your mic is off.

Sorry, I muted myself. As I remember, you also partner with Chinese manufacturers that could produce high quantities at even lower cost if somebody needs that?

Yes, of course, that happens as well. We have an office in China, and they are closely connected to Chinese production facilities. That's possible too.

Cool, so thanks a lot. Thanks for doing this embedded world interview.

Yes, that was great; digital, online.

Right, it was a pleasure, really a pleasure, from the Phytec headquarters.

Exactly, nice. Wonderful, see you around.

I hope to see you next time face to face. That would be great; it might be a bit louder in the background, a bit more hectic, but also enjoyable, right? Cool, thanks a lot. And thanks to the people watching live and asking questions in the chat, and to the people watching on demand; it's on demand now.

Perfect, all right. You can check me out on LinkedIn and get connected, no problem.
Yeah, I'll put all the links you give me under the video. Wonderful.