Hi, my name is Jan Werth - Dr. Jan Werth. I actually started as a carpenter and made my way to becoming an electrical engineer. I studied in Germany and then moved to the Netherlands for my Promotion, my PhD in electrical engineering. During that time I worked a lot with signal processing at first - that's where I started from - but then moved slowly towards machine learning and then deep learning, all of that at Philips Research in Eindhoven. I worked mainly with medical data; preterm infant data was mostly my topic, but other data as well. And I learned that the core of it all is data. If you know signal processing, you can dive into machine learning and deep learning, and it's a very, very strong tool. If you understand machine learning and deep learning, you can achieve so much more than with basic, classic signal processing. So we're heading into interesting times. Because the whole industry is focusing a lot on AI, right? Everybody's talking about AI. What's happening with AI in the embedded world? Exactly, that's interesting. That's actually one reason why I moved to PHYTEC: there are many companies out there that do software, right? If you want some facial recognition or whatever, you can find hundreds of companies doing that. But beginning in 2019, around that time, edge processing came into play. The idea of edge processing is that you don't use your cloud computers to do the calculations - we can go into detail if you want to - but that, let's say, the finished algorithm can now run on embedded systems. That really started to get a boost in 2019, because Google and Microsoft brought AI accelerator chips onto the market. It was possible before, of course, and with it came the idea: hey, why not put our algorithms on the embedded hardware, then we can use them where the data is created, and we don't have to send a bunch of video streams somewhere. So it was a question of bandwidth and also of security - whatever you send can be intercepted, right? These questions came up at the beginning of 2019, and I thought: okay, this is interesting. I want to work at a company which produces such hardware, so that I can bring in the understanding of AI, they bring in the understanding of embedded hardware, and we can create synergy and say: okay, let's work on that topic together. So far it's actually been great, because I can also teach our customers what AI is really all about. There are a lot of misconceptions about what can and can't be done. A lot of people talk about it, it's inflated, et cetera. I'm not the person to say none of it is true, but the expectations of AI are sometimes a bit off, let's say. I like to talk to customers and say: okay, this part of your problem we can solve easily with AI, this part is a bit more problematic. It's not a completely self-learning system, for example - that's an idea that always comes up, like, oh, I just put my camera there and it learns by itself. Not the case. Well, you can build a system that does that, but in many cases it's not like that. Those are the things I like to clear up. I like to give workshops, and I like to enable our customers to do it themselves.
Of course I could go to them and say: this is how you do it, and I'll do it for you. But then we couldn't reach that many people, and I want to boost, let's say, the German market and the idea of AI in general. So I can tell a customer: it's actually quite simple. You're an engineer, you understand signal processing. Now I teach you what to do with your signals, which steps to take, et cetera. And then, let's say, the whole company is elevated to the next level. They can do it on their own, and I just help them along the way and provide the guidance. And of course we at PHYTEC hope that if they now have a new tool - a really strong and powerful tool to solve a problem they couldn't solve before - they'll need new hardware, right? And then it's like: by the way, we also do hardware, so we can combine that. You now have your algorithms, and you actually want them running on your embedded system, so let's work together; I can help you actually get it onto the embedded hardware. So as far as I understand - and you mentioned 2019 - all this is very new, even though it was possible before. Yeah, exactly. As far as I understand, this requires new SoCs with AI accelerators on them, and that's kind of new. Exactly. But the embedded world is a serious business, and often the chip is not the latest. Interesting that you say that. First of all, just to put it out there: you don't need dedicated AI hardware. That's also a misconception; a lot of people think you have to have it. You only need it if you really have to get down to milliseconds and you have a specific problem. The AI chips you can get at the moment only focus on video analysis, and only CNNs - convolutional neural networks - are supported. So whenever you do anything with time series analysis, for example, it's actually not possible. So just to put that aside: these AI chips serve a specific niche, and you often don't actually need them, because you can run your model on CPU and GPU alone. But it's also a bit about power consumption. Exactly, you could say that, because there are some really nice chips out there. We're working now with a partner from Silicon Valley; they have a chip out with a power draw of only about 700 milliwatts. Wow. With a huge performance output, but only on the AI side. So for specific tasks it's interesting to use that. But the important thing to know is that most problems can be solved with general embedded hardware. It's always a question of how fast you really need to solve it. The thing is, when people think about AI, they see autonomous driving, for example. And of course, there you need milliseconds: a kid runs onto the street, you want the answer now, not 500 milliseconds later. So you need it super fast, with very high FPS. But let's say you have a door opener with facial recognition: does this person belong to the company? Then you open the door. If the person stands there for 500 or 700 milliseconds waiting for the result, does that actually matter? For many questions like that, a response time of a few hundred milliseconds doesn't matter, and then in most cases you don't need AI chips. We do have customers with high-speed analysis tasks, for example - yes, they need those chips. But in general, you can use classic embedded hardware.
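To make that latency question concrete, here is a minimal timing sketch. The matrix multiplication is only a stand-in for a real inference call (for example a TensorFlow Lite interpreter's invoke()), so treat the numbers as illustrative, not as a benchmark of any particular chip:

```python
import time
import numpy as np

def measure_latency(run_inference, n_runs=100):
    """Time a single-inference callable; return mean and 95th-percentile latency in ms."""
    for _ in range(5):                    # warm-up so caches and lazy init don't skew results
        run_inference()
    timings_ms = []
    for _ in range(n_runs):
        start = time.perf_counter()
        run_inference()
        timings_ms.append((time.perf_counter() - start) * 1000.0)
    return np.mean(timings_ms), np.percentile(timings_ms, 95)

# Stand-in workload; swap in your real model call here.
workload = lambda: np.dot(np.random.rand(256, 256), np.random.rand(256, 256))

mean_ms, p95_ms = measure_latency(workload)
print(f"mean: {mean_ms:.2f} ms, p95: {p95_ms:.2f} ms")
# A door opener with a ~500-700 ms budget is usually fine on a plain CPU;
# hard millisecond deadlines are where dedicated AI accelerators pay off.
```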
The idea came up because of the accelerator chips. People suddenly went: wait a second - oh, now I can put my AI on embedded hardware. It was possible before, but a lot of companies never made the link. In their minds, AI meant supercomputers and compute clusters, and that didn't square with embedded hardware, right? Because it's super low in performance. But again, there are a lot of misconceptions there. It's about training: yes, training needs a lot of computation power, but you do that in the cloud or on a PC anyhow. Your finished model needs almost no computation power, so it's easy to port to embedded hardware. And for general-purpose AI, I would say - like facial recognition with a one-second time frame or whatever - easy; classic embedded hardware is perfectly sufficient. So, as I said, it was possible before, but now it's got another boost from these AI chips. And the interesting part is that a lot of our customers, at least here in Germany, are also looking for long-term sustainability. They say: I want something that lasts the next 10, 15 years, because if I put effort in, it should last. The problem is that these chips are quite new and you don't know what's happening there, right? Any given day a new chip could come out which is a bit better - for example, they're working now on chips supporting RNNs, recurrent neural networks; that hasn't been done yet, so it might be quite interesting in a couple of years, but it's not possible today. And the companies are new, too. There are a lot of new chips coming onto the market, really good chips, but do the companies survive? Will they be bought by other companies? Can you still buy this chip in a month, in a year? In 10 years? Yeah, I don't even want to talk about 10 years - I'm talking about one year, right? The company we're working with at the moment is, I don't know, a startup - not anymore, but kind of a startup. They have huge partners, they make a lot of money, I think they're sustainable. But on the other hand, if Google says: you know what, we like your company, we'll just buy it - then all the contracts we have with them might be void. So that's an interesting field. Customers should get used to the fact that maybe you have an AI chip today, but it may not be there in 10 years, right? So it's good to work with open-source software that's easily portable to another chip if you need it. Exactly. In most cases it's done like this: you create your model - you use TensorFlow or whatever - and then you have an SDK or MDK, a kind of model conversion kit, which is mostly supplied by the chip company. You have your base model, you convert it to that specific chip, and then you integrate it. So you're right: even if your chip failed, say it didn't make the 10-year mark, you could probably exchange chips quite easily at some point.
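As a sketch of that base-model-then-convert workflow: with TensorFlow, the step down to an edge-friendly format looks roughly like this. The tiny two-class model is purely illustrative; a vendor SDK/MDK would then take the converted model the last step to its specific accelerator chip:

```python
import tensorflow as tf

# Train (or load) a normal Keras model on the desktop or in the cloud.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. hand open / hand closed
])

# Convert the finished model for the edge. Post-training quantization
# shrinks the file and speeds up inference on ARM CPUs.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```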
So one of the established ones, one that everybody can rely on to be there forever - or at least for a long time - would be someone such as NXP, right? Yeah, interestingly, NXP. They're probably fine. For example, right here you have something running on the i.MX 8M, and that's going to be around for 10, 15 years. That's right. But this i.MX 8M doesn't have any AI accelerator. I've heard rumors there will be a new chip - probably by the time you post this video it will already be official that they have a new board with AI acceleration on it. So that should be fine. With them you should be set for the next 10 years, because NXP in general aims for long-term availability. It might even be compatible with this one. Yes, and I heard it's even a bit faster, so I'm really looking forward to that one. But exactly - there's also Google and NVIDIA, and the big players will mostly stay in the game because there's a lot of money to be made. With the smaller companies, we just have to wait. What are the other solutions you're working with at the moment? Some examples? We're working at the moment with Gyrfalcon. Gyrfalcon is a Silicon Valley company, a smaller one, but they have huge customers, let's say; they mainly target the mobile market. And the good part about the Gyrfalcon chip is, again, the super low power consumption. The problem with the other chips - the NVIDIA Jetson, for example, or the Google TPU - is that they have a huge power consumption. They get really hot; you mostly need active cooling or something like that. And many of our customers are looking for a low-power solution. So NVIDIA and the Google TPUs are really interesting for a first check, a proof of concept: okay, now I have an ARM system, I'll just try whether my algorithms work there, I make them ready for ARM, I see how well the solution works - can I solve my problem? And a lot of customers then come to us and say: okay, now we need something less desktop-like, more professional, more industrial. And there's also the support, of course. The difference is that with us you can call and say: you know what, I have a specific sensor, I can't get it into my BSP, can you help me? Yeah, of course we can do that. Good luck calling NVIDIA and saying: I need some help. So those are the differences. Many people start off with something like NVIDIA or Google Coral, but they come to us because, again, most of them notice you don't need that super high performance for your solution. NVIDIA is very good at high performance, but on the other hand, as I said, it has a lot of power consumption, so if you want to get rid of that power consumption, you go to a different system. And then you notice: well, what do you know - my solution works perfectly on an i.MX 8 from NXP, a quad-core ARM Cortex-A53. Do you also work with NVIDIA solutions? I've tried them, of course, but I don't really work with them. For all our customer projects I always work with something we make, because mostly the customers want a specific solution for a specific problem, right? They don't need one-size-fits-all; they say: we have that device, it has to be managed in a specific way, we need facial recognition here, and it has that sensor. So the NVIDIA would be too general-purpose, right? Exactly. We can break it down to exactly what you need, and we're happy to put TensorFlow, TensorFlow Lite, whichever libraries you need, onto the BSP. And so far all the customers are quite happy. All right. So what's the demo here? Yeah, we have two demos. This one is hand gesture recognition - can you see it?
So it's very simple: two hand gestures, hand open and hand closed, and it was actually made with Microsoft Azure - or "AH-zure", I never know how to pronounce it. Any hand is good? Exactly, any hand should be good. You have to hold it into the camera, like this. Hand closed - oh, 80%, can it get better? Yeah. And, yeah, 100%, perfect. And the open-hand gesture. Of course. The idea is to show that with Azure you don't have to have any knowledge about AI. You just upload your data, click and play, and then download the model onto your device again. For customers this shows: it's very simple. You can train it on different hand gestures or whatever else, but of course you can also do it all yourself, right? So this one is for people who are really just starting with AI - just a glimpse of it. And then we have a second demo, which in this case is a very simple facial recognition. First - I have to go closer - I get a face detection, and if you then press a button, it computes your facial embeddings. And then I have, I think, 10,000 pictures of celebrities, and it finds out which celebrity you look like, based on the Euclidean distance between your face and the face of the celebrity. That one is completely from scratch, so we can show it running natively as well: the first demo uses Azure and the Azure IoT Edge service, while this one is natively incorporated into the BSP. Those are the two ways. So we can say: hey, we can help you build it from scratch, and we can also help you get a first start with Azure or something like that. The second demo is also interesting for another reason - just some information for you guys. You may have noticed that the calculation of the celebrity face was actually relatively slow, to be honest, right? What happens is this: the face detection is very fast, it takes about 0.2 seconds. The creation of the embeddings is also very fast, also around 0.2 seconds. So altogether that should take about 0.4 seconds. But then we have a NumPy array - for all the people who know that - and we're just calculating the Euclidean distance between your face and all the embeddings we have there. That takes time. And interestingly, if you noticed, that part has nothing to do with AI; that's general programming, right? It was done a bit hastily before the fair here, so now I'd have to put my mind to how to speed up the general algorithm. The facial recognition and the embedding creation, which is the deep learning, is super fast even on embedded hardware like this. The problem is just the classic crawling through a NumPy array, and there are solutions for doing that better. So, just for you at home: the problem is not always the AI, it's your program in general. And AI, artificial intelligence, doesn't only mean deep learning. It means the whole pipeline: getting the data from somewhere, transforming your data, getting something out of your AI model, and then also using that output for something. And that whole thing can become quite bulky, depending on your problem. Then you have to see: does it still run fast enough on my embedded system, right?
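The "crawling" he describes is the classic Python-level loop over a NumPy array; the usual fix is to let NumPy broadcast the whole distance computation at once. A minimal sketch with random stand-in embeddings (the 128-dimensional embedding size is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
celebrity_embeddings = rng.standard_normal((10_000, 128))  # precomputed database
your_embedding = rng.standard_normal(128)                  # output of the face model

# Slow version: a Python-level loop over the array ("crawling").
def nearest_loop(query, database):
    best_i, best_d = -1, np.inf
    for i, emb in enumerate(database):
        d = np.sqrt(np.sum((query - emb) ** 2))
        if d < best_d:
            best_i, best_d = i, d
    return best_i

# Fast version: one vectorized broadcast, no Python loop.
def nearest_vectorized(query, database):
    distances = np.linalg.norm(database - query, axis=1)
    return int(np.argmin(distances))

# Both find the same nearest celebrity; the vectorized one is orders of magnitude faster.
assert nearest_loop(your_embedding, celebrity_embeddings) == \
       nearest_vectorized(your_embedding, celebrity_embeddings)
```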
So how big is this AI thing going to be? Do you think it's going to be revolutionary for the whole embedded world? Yes, I think so. I don't know - yes, yes, for sure. The thing is, especially with the topic of AI, I would never say never, because it's moving very fast and it's really difficult to see in which direction. A lot of people are afraid it's all hype, because of the history, right? Back in the 70s and 80s we already had these AI winters, and people are a bit afraid: oh, you're putting too many expectations into the topic again, and it will collapse again. But seeing what's happening right now, I don't think that will happen. I think it's actually great and it will move forward very fast. And yes, of course it's also revolutionizing the embedded industry. We've seen that already over the last couple of years - I think Intel actually started in 2014 with their Movidius, if I'm correct; correct me if I'm wrong. So it started a while back, but now we have a boost: a lot of companies are creating these AI chips. Suddenly there's a completely new topic on the market, and let's see where it goes. In general I think AI is on an exponential growth path, also in terms of inventions and where we're going, especially because it's open source and everybody can participate. Everybody who has a bit of coding knowledge can dive into the topic. Creating a whole system and building the embedded hardware is a bit more difficult than just writing Python code, I would say, but let's say everybody can do the Python part. So we have a huge mass of people working on AI, privately and in business, and that pushes it forward very fast. And we humans always think in linear terms, right? Okay, this happened in the last five years, so I can estimate what happens in the next five. But with AI, I would say it's more of an exponential thing, so it's really hard to say how long anything will take. I remember talks with people two years back saying autonomous cars will never come. And sorry, but looking at Silicon Valley, autonomous driving is already there, right? That was within a span of two years. So it's really difficult to estimate anything at the moment - almost anything seems possible. I'm really looking forward to it. In terms of priorities for society in general - being greener, let's say - is edge AI better than more and more cloud AI in terms of saving energy? That might be another way to make it a big priority, because if you get a lot of small ARM chips doing a lot of smart stuff at the edge, you could really solve a lot of problems. That sounds really nice. To be honest, I don't believe so, because I think it will only generate more devices. People will not step back from the cloud, because so many applications, everything that goes over the internet, live in the cloud. They will put their AI algorithms in the cloud, which needs more calculation power, and the training for the embedded devices will still be done in the cloud or on a desktop PC, which again takes a lot of energy. In general we will just produce more of these, let's call them smart algorithms, which overall will need more power.
It would be nice if it went in that direction, but honestly I think we'll just have more solutions: things which maybe used to be done with very low-power computers will suddenly need somewhat more powerful computers, and overall that will create more power consumption. I hope I'm wrong on this point, to be honest. But edge AI means more devices, because there will be more demand, more stuff out there, more things everywhere. And to be honest, there's another effect with embedded devices. Again, back to the idea of the solution: I have a problem, say facial recognition. I could take a really low-power i.MX 6 or something to get my solution done, but a lot of people say: no, no - what else is there? Oh, there's an i.MX 8 as well, a bit more powerful. You know what, I'll stay on the safe side - it's AI, and maybe I'm not too familiar with the topic, so I'd rather go with the stronger one. I don't know whether that happens all the time, but it's something to keep in mind, and it again pushes power consumption up in general. So I hope people will realize after a while that you should take the CPU you need for the problem and not overpower it. In industry, people do tend to go that way, because to be honest it's always about cost and efficiency: okay, the slower one is a bit cheaper - can I solve my problem with the cheaper device? Yes, I can. So they go in that direction. But in general, also privately, or just when solving problems in general: go for the solution that fits and don't overpower. That would be one way to save a bit on power consumption. Do you have two, three, four examples of projects you're excited about, working with customers on implementations that PHYTEC is perfect for? Interesting. Oh yeah, I have to be careful there - maybe there's an NDA. Yeah, there are several NDAs, to be honest. There's a bunch of them? Yes, exactly. So let me answer more broadly: I have a range of customers, and all of them are interesting because they start from different levels. We have, let's say, small companies with no clue about AI. I love working with them, because I give workshops. I go to them, and I have, for example, one general-purpose workshop where I explain what AI is in general, but also go into detail: how would you program that, what is a layer, what are the parameters, what is overfitting and how do you solve it. So it really covers the full range. And I notice, during and after those workshops, those people say: oh wow, this is great, now I'm really lifted to the next level and I can actually start working on my own. That's one type of customer I look forward to working with. The next ones say: okay, we know AI, we have our own department, or at least a few people working on it. So I don't have to explain AI to them, and then it's more about: okay, how can we transfer our solution onto the embedded hardware device? That's also interesting: to look at the problem - okay, what solution do we have, how do we get it onto the embedded hardware, and what hardware do we take? And then, of course, there's the last type: you know, we know it all, we just want to buy. And that's also nice, because it's quick and easy. Yeah.
And how easy or hard is it for an embedded software or embedded hardware developer to get into this field? Some people would say difficult. I would say medium, because they are bright, smart people who mostly know how to program. Especially if you know how to program Python - yes, then it should be manageable. I mean, it's easy for me to say, because I come from a signal processing background; that was my study, so my path into AI was a bit easier, because actually 90% of your work, if you go into AI, is signal processing - until you have your data nice and clean, and then you start with the nice part, creating a model, right? And that part is actually not too difficult. The whole thing becomes difficult if you have no clue about signal processing. What you really have to think about is identifying the problem, identifying the solution to the problem, and the road from one to the other. Programming the deep learning model itself - you'll find hundreds of tutorials, and if you're already an embedded programmer, you're a smart person, you'll figure it out. That's not the problem. The problem is the whole picture: you have the problem, you need to find the solution, and there are many steps along the way which can become difficult, right? How to actually collect your data. How to clean your data and get it right. How to synchronize your data - people rarely think about that, and it's a real step. Okay, a tip from me now, here at the fair: if you're thinking about AI and you have multiple sensors - a camera and maybe sound, or an accelerometer or something - before you start anything, think about how you will synchronize your signals. If you've done that, you save yourself a lot of work in the long run. So, yeah: do your signal processing well and you will have fun doing the AI part. And how do you process signals well? That's a very broad question; it depends on what the problem is, of course. But let's say you have a problem you want to solve with video analysis, for example. Then: how many frames per second do I need? Which resolution do I need? Do I need color or grayscale? Is the information in the details - then I need a higher resolution and a more detailed model - or is it just person recognition or something? Then you can go really small with your data. What about NaNs, not-a-number values? You have to kick them out. Again, synchronizing your data, normalizing your data - multiple small steps you have to go through. But the difficult parts are mostly getting rid of noise and synchronizing your data; those are actually the big problems. And then, of course, don't forget the annotation. For supervised learning - and in most cases you will go in the direction of supervised learning - you need annotations, meaning you have to tell your computer: this is a picture of an apple, this is one of a banana. Banana and apple might actually be easy at this point, but if you go for image segmentation, for example, suddenly you have to take every image by itself and segment it. That takes a lot of time and effort, and you have to do it properly.
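A minimal sketch of the small preprocessing steps he lists - grayscale, downscaling, NaN removal, normalization - using plain NumPy. The target size and the crude stride-based resize are illustrative stand-ins; in practice you would reach for something like cv2.resize:

```python
import numpy as np

def preprocess_frame(frame, target_size=(96, 96)):
    """Grayscale, downscale, and normalize one video frame (H x W x 3, uint8)."""
    gray = frame.mean(axis=2)            # drop color if the information isn't in the color
    # Crude stride-based "resize" to keep the sketch dependency-free;
    # assumes the frame is larger than target_size.
    sy = gray.shape[0] // target_size[0]
    sx = gray.shape[1] // target_size[1]
    small = gray[::sy, ::sx][:target_size[0], :target_size[1]]
    return small / 255.0                 # normalize pixel values to [0, 1]

def clean_series(x):
    """Drop NaN samples and z-score normalize a 1-D sensor series."""
    x = np.asarray(x, dtype=float)
    x = x[~np.isnan(x)]                  # kick out the not-a-number values
    return (x - x.mean()) / (x.std() + 1e-8)

# Example: a 480x640 RGB frame and a sensor trace with a dropout.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
print(preprocess_frame(frame).shape)     # (96, 96)
print(clean_series([1.0, np.nan, 2.0, 3.0]))
```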
The thing is, if you have a general problem, that's easy, because there's probably already a dataset online which is more or less the same as your problem. But as soon as you go special - let's say, I worked with preterm infants, right? My PhD was on preterm infant sleep analysis. The problem there is that there is no dataset out there on preterm infant sleep analysis which is annotated for sleep states. You will not find one. So you'll have to bring professionals in, you have to build tools for those professionals, you have to check inter-rater reliability, you have to synchronize all the annotations with your data, et cetera, et cetera. That's where the problem will be. So again, as you see, every problem brings its own difficulties. But it's always good to keep in mind - especially for people in management positions who say, oh wow, we want to do AI now - give your employees a chance to do the preprocessing properly. An AI project is roughly 90% preprocessing your data, collecting data, annotation, et cetera, and then 10% creating your model and getting your inference model, right? So if after six months you ask, where's my model, where's my model? - and they still tell you, oh, we're still processing data - don't push them too hard, and don't be afraid that the next part will also take six months. No, no, the last part will be much, much faster. Just be patient. Are there - what do you call them - libraries of data that are already annotated, provided by companies like Google or something like that? Exactly. It's mostly all open source, and thanks to the community for that. There's ImageNet, there's the COCO dataset, the MNIST dataset - there are hundreds of them. Actually, if you google "open source datasets", you will find hundreds, even for really specific problems. It's great. People sacrifice their time to do that for the open-source community, and I'm really grateful for that. And what's really interesting: GitHub is working on a project at the moment where you can upload your data, but it gets anonymized on the way up, so you don't lose your company secrets. So if you take from the community, I hope you also give something back to the community, in the form of tutorials on how you did it, algorithms, maybe your data - that would actually be really great. I understand that not every company can just give its data away, but if you can give something back, that would be awesome. Because we have to say: AI wouldn't be possible without open source - TensorFlow is open source, Caffe, all the frameworks, Python in general, everything's open source. If you use that, you build on a lot of work and a lot of energy from private people, so try to give something back.
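As a concrete taste of those open annotated datasets: MNIST, one of the sets he names, ships directly with TensorFlow, so loading 60,000 labeled images is a one-liner. The prints are just to show that the annotations - the digit labels - come bundled with the images:

```python
import tensorflow as tf

# MNIST: 60,000 training images of handwritten digits, already annotated.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

print(x_train.shape)  # (60000, 28, 28) - the images
print(y_train[:5])    # e.g. [5 0 4 1 9] - the labels, i.e. the annotation
```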
Are the winners in AI going to be open source or proprietary? Open source, yeah, because everything is open source. So it's not like a big company like Google is going to own the whole AI future? No. The thing is, how do they earn money? Microsoft, for example, was very clear about that. They say: program your solution however you want, we don't care, as long as you come to our servers, train your models on our servers, and pay for the computation power, right? That's how the companies make money now: they rent you their servers and you pay for the computation. So actually, for them it's really good if things stay open source, because the more solutions can be solved with AI, the more computation power is needed, and that's what they sell you. So the big companies have an interest in it staying open source. There are some companies - I don't want to name names, I don't want any bad blood - the classic software development companies that were closed source, very good ones, which are now getting into AI because they panicked a bit, noticing: wait a second, we're falling behind a bit on the AI libraries. I have no experience with them, so I don't want to say anything false, but I can imagine they'll always be a step behind, because everything you find that's really up to date is open source, right? Okay, of course there's Audi, BMW, Mercedes, Tesla and so on working on autonomous driving software - that's of course closed source. But in general, the libraries, the way you would do, say, the latest facial recognition model - that's always open source. You mentioned that synchronizing the signals, or the data, is an important part. How do you do that in a good way? What's the strategy for synchronizing perfectly? Is there a simple - or not simple, but standard - way to do it? Unfortunately, I would say no. Of course, people always say: clocks, right? Every signal you produce should have an internal clock. But the idea is to get your mind into your problem beforehand and think ahead of time: how will I synchronize my data? Because once you have recorded your data, it becomes really difficult to synchronize afterwards. So either you make sure your clocks are synchronized beforehand - okay, perfect, I'm sure all my signals are synchronized to each other, then it's fine, you just use the timestamps, wonderful - or, if that's not the case, you need another way. I can give you an example from the preterm infant case, then you can understand it better. We had video analysis, but we also had ECG and EOG and a lot of other signals, and we had to synchronize them. And we thought: oh, we are super clever. The question was how to synchronize the video with the rest - the rest was going into one box and recorded simultaneously, but the video was separate, for example. So how do we connect the video to the data? And we said: okay, we just unplug an electrode and plug it back in, that gives us a noise peak, and we just find that noise peak in all our signals - because there was a bit of a time shift in the box - and then we can synchronize. Very nice idea. The problem was that the signals themselves are so noisy that it was very, very difficult to find the artificial noise peak. And these are the things to think about, okay? How do I create something in my signals where I can identify that this is the same time point? And again, this is not a one-size-fits-all solution. Maybe your problem is very simple and you don't have to do it at all. Maybe you only have a video stream of screws and you want to determine whether a screw is good or not - then you don't need to synchronize anything; you only have to synchronize the annotation to your data, and that you can do with a tool, right? You build a tool where you annotate; that shouldn't be a problem. But as soon as you have multiple sensors, the problem arises, and then you have to think about how to synchronize before you actually start recording. Do some test runs, trial runs.
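A common way to recover such an offset after the fact - assuming the recordings share some visible common event, like the unplug artifact he describes - is to cross-correlate the two signals and take the lag with the highest correlation. A minimal sketch with synthetic signals (the sampling rate and pulse shape are made up for illustration):

```python
import numpy as np

fs = 100  # Hz, assumed common sampling rate

# Two recordings of the same event; the second is delayed by 150 samples (1.5 s).
rng = np.random.default_rng(1)
event = np.concatenate([np.zeros(400), np.ones(50) * 5.0, np.zeros(550)])
sig_a = event + rng.standard_normal(1000) * 0.5
sig_b = np.roll(event, 150) + rng.standard_normal(1000) * 0.5

# Cross-correlate (mean-removed) to find the lag that best aligns the two signals.
corr = np.correlate(sig_a - sig_a.mean(), sig_b - sig_b.mean(), mode="full")
lag = np.argmax(corr) - (len(sig_b) - 1)   # negative lag: sig_b lags behind sig_a
print(f"estimated offset: {-lag} samples = {-lag / fs:.2f} s")
```

With heavy noise, the correlation peak gets harder to find - which is exactly the problem he ran into with the real recordings.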
I come from the medical field, where it's very difficult to just set up a new trial to create new data, so you have to think a lot beforehand - but that saves you a lot later. And even in fields where you can easily create new data, it's always good to have your approach worked out beforehand and then start recording. It's also human nature: we don't like to go back. If I've created my data and already started on my deep learning work, and then I notice - oh, my data isn't synchronized - I don't like to go back, because in my mind I've already put a check mark on the data: yeah, my data is fine, fine, fine. I've already told my boss I'm working on the model now, I'll give you an update in a month. And then you have to go back to data synchronization, and a month later you tell your boss: sorry, I'm back at data synchronization. That's never good, right? So front-load the work: think about the problem, think about the solution beforehand, and take your time with it. Tell your boss it will take time - it will take the most time, and don't worry: the AI part, the deep learning part, will be faster afterwards. So, without trying to pry out any secrets or NDAs: let's say PHYTEC were developing a solution with a customer that has to do with sleep tracking or something like that. What would be the perfect implementation? What's a great use of AI in sleep tracking or something like this? Yeah, that would be great, actually - that's my favorite topic, preterm infant sleep. It's very difficult, unfortunately, to solve that problem. Please continue to work on it. In general, edge processing is the right idea for anything that should be separate from the cloud, let's say: a device you don't want to have to plug in all the time, where you don't have an easy connection, where you don't have the means to send a data stream somewhere. And also, of course, security: whatever you send can be intercepted. If you only do it on the edge, somebody has to physically go to your device and swap an SD card or tamper with the device itself. As long as you don't send anything, you're fine. So if you're afraid of losing your company's data, do it on the edge - it's really difficult to intercept there. Running on batteries, for example - that will of course be an edge solution. And everything in a difficult location, right? Deep down in a tunnel, underground mining, for example - how would you send data anywhere? There it could maybe be a kind of edge-fog solution, where you have a mainframe somewhere in the underground of the mine and you send some data to that mainframe. That would be possible. But it's even better if you just have the edge device on the operational machine itself: oh, you're mining here, we want to identify what you're mining - I found coal, just give a signal to the operator that we found coal. Perfect. You don't need any cloud, you don't need to send anything; you get the information right where you need it. So perhaps it could be a sensor on the bed, with an i.MX 8M, that can sense there are two different people sleeping in the bed, maybe identify who's who, and also how long you sleep. Sleep is really important for recovery.
There are a few talks here at this conference about the coronavirus. I heard that sleeping well is the best protection against any kind of flu or whatever, simply because the body repairs itself at night. So if you could do that on the edge and then only share with the cloud what's relevant... Exactly. You could maybe develop something there. That would actually be quite interesting. And about sleep in general, you're right: your immune system regenerates during a good night's sleep. If you don't sleep, your immune system gets overloaded and you're really prone to disease. So sleep tight, sleep well. But it's a good point: recording somebody in the bedroom, for example - nobody wants a video stream of themselves in the bedroom sitting somewhere at Amazon or Microsoft. That's also an edge solution. Yes, maybe you send some information to your private cloud - your sleep pattern, say - but no video stream is sent anywhere. You process the video stream on premise and just analyze it there; no video goes anywhere, because only the results may be sent to a server. And especially here in Germany, with our rules - the DSGVO and so on - it's interesting to not record people and to not identify people, to keep those separate. And sleeping patterns in general would be interesting to look at. I'm also always dreaming of the robot at home that combines several AIs: identifying whether I'm sick, seeing what's in the fridge, going to the shop and buying things for me. I'm really looking forward to that - please keep working on it. But that is far out - well, maybe not that far out. 2021? Yes, exactly, that should be done. But the same idea applies here. I wouldn't want my robot to have all that information and send it to Microsoft or Google or whoever owns the robot company, right? They already know everything about me as it is, but then they'd have a video stream and everything else about me on top. That could actually hamper the sales of such robots, right? I don't want a robot in my bedroom that sends all that information everywhere. But if you can say: no, no, nothing is sent anywhere - maybe the models get updated when the robot is in the shop once in a while, but no data is sent anywhere. The robot has the model in itself, it computes a solution and acts on that solution, but no data leaves the robot. I think that's a key point for that direction of robots in our homes, because I know that at least in Germany a lot of people are kind of afraid - take Alexa, right? I know a lot of people who don't like Alexa. They might buy it at first, but after a while they realize: it's really listening to everything I say here - and they get rid of it, right?
So that is happening, and the acceptance of robots, at least in Europe - I know Asia is a bit different - really depends on where my data is sent and where my data is stored. The people working on this probably know that already. But that's the solution: an autonomous robot in your home. And that might also be the future for embedded systems: instead of everybody having a car, everybody has a robot, so there will be a lot of embedded hardware to sell. The robot could help people live a healthier lifestyle: advising on sleep, advising on food, advising on noise, anything - saying, hey, now is a good time to work, now is a good time to relax - just standing on the sidelines like a coach. Taking over my programming, too, because at some point it can probably do it better - nobody will write code anymore. Exactly. Anything you can imagine today as a small app - tracking my food intake, how many calories did I eat today because I want to get slimmer and fitter, or something like that - a robot can easily take that over. Because what is a robot? A robot is, let's say, a computer system with a vision system that can walk and has hands, that can maybe touch things, right? So it's not that different from a smartphone: it looks different, but the computation and so on is more or less the same. There would probably be apps running on the robot - say, a health app: okay, I'll look at you, identify you, and tell you, hey, you look sick, or something like that. I think the real problems there are more mechanical - touching things and so on - but they're working on that: Boston Consulting... not Consulting, Boston Dynamics is working on that. So I think we will actually see something like that rather soon. Rather soon, and later too, yeah.