Okay, thank you very much. That's a really hard act to follow, but I'm going to try. My name is Jim Stewart, and I'm here with my colleague Heather Gore. We're from MathWorks, and we're going to talk today about deploying AI for near real-time manufacturing decisions. More specifically, we're going to talk about predictive maintenance: the science of monitoring industrial equipment for faults and predicting the remaining useful life of those assets. There's a very specific kind of technology stack you need to build to do that right, and we're going to walk you through an example today that builds up a whole system from scratch and shows how to do it.

Our example problem is to develop and operationalize a machine learning model that monitors failures in industrial pumps. We're going to walk you through the whole project we did to build this system, and as we talk, we'll present the progress of the project through the perspective of a few different personas. The first persona is what we'll call a process engineer: a person who uses MATLAB and Simulink to develop models for their assets. That role will be played by Heather, and she'll walk you through the whole process of building the models. The second persona is what we'll call a system architect: the person whose job it is to take the models Heather develops and operationalize them in a production environment, so that we can start streaming data from our assets through them. I'll play that role myself, and I'll walk you through the process we took to build that system. Our third persona is actually our customer, our end user, and we'll call him the plant operator. This is the person who needs to take the output of Heather's model and make decisions based on it. As you can see, he's kind of unhappy today: he has a lot of problems with his equipment, and hopefully by the end of this session we'll put a smile on his face.

So the first thing we did, before we started our project, was meet with our customer and talk about what he needs from the system. We decided we were going to build out a full working system that we could put in front of him in a three-to-four-week sprint. First we gathered his requirements. He has basically three things he's interested in. The first thing he needs to know is the operational state of all of his assets, in near real time.
He also needs to be alerted when some kind of failure happens in the system. And finally, he needs some accumulated values that represent the remaining useful life of the assets, so that he can make decisions about replacing and maintaining the equipment.

Now, before we start, we have a lot of constraints and challenges with a project like this. The first big problem is that we don't have a large set of failure data, and it's very costly to generate failure data in our plant; we're just not going to do that. So instead we're going to build a very accurate physics-based model of the asset and use it to generate synthetic data, which we will then use to train our machine learning model. The second big constraint is that we don't have a large IT and hardware budget, and we really need to show some results before there will be a commitment to a particular technology or platform. So we're going to leverage the cloud, use as many pre-built solutions as we can, and operationalize everything with some of our products. The third big challenge is that projects like this tend to be very multidisciplinary. We have to have expertise in multiple domains: signal processing, physics, machine learning, and also IT and system design. Our solution is to use a platform that brings all of these things together and integrates well with cloud services as well as best-of-breed open source software.

Here's a quick look at the architecture we're going to build. I'm not going to describe it in much detail right now; we'll come back to it a few times during the talk. The system has four basic parts. On the left side are our assets, which in this case are our pumps. They stream data into a production environment, which is something we've built on the Azure cloud. In the upper right quadrant is our development environment; that's where Heather will do her work and build her model. And finally, in the lower right corner, we have our end user, on site in the plant. We're going to build him a dashboard so he can understand what's happening. So now I'll hand it over to Heather, and she'll walk you through the whole design and implementation process for her model.

Thanks, Jim. Just to step back and look at the approach before we dive in: it's very similar to problems like this that you probably face daily.
So: you get your data and pre-process it; we'll spend a lot of time talking about the predictive modeling part and then the integration part, not just the integration itself but the sorts of things you need to think about while you're building the model, thinking ahead to the next step; and then, ultimately, what results we need to visualize for our plant operator to make the right kind of decisions.

Before I dive in, I need to step back and review the requirements, because it's a big project. Our operator needs to know what kind of fault is happening, so he knows how to react: is the pump blocked, or is it leaking? And then, like Jim mentioned, he needs the continuous remaining useful life, so he can always prepare ahead for each of the pumps. There are also different requirements from the system architect. I need to think very carefully about the window for streaming: I'm going to be getting data in packets, one chunk at a time, and I need to think about that before I even start implementing the model, to make sure each chunk carries enough data, with enough time resolution, to support a model like this. We'll also have to come to an agreement as a team on the format of the results. We're going to be passing this data into a bunch of different systems, so it's not just for me; we want a good structure, like JSON, so we can pass it around to all the different systems. Of course we're going to have to scale this out: I'll start with one or two pumps, but we have hundreds of pumps in our plant. And then, of course, I'll be a responsible citizen and test my code before passing it along or pushing it.

So, we've been talking about this pump; let's think about what this thing actually is before we start modeling. It's a fairly typical pump: there's an inlet and an outlet, and you can picture it pumping along. Importantly, there are three different failure modes: friction in the crankshaft, that is, bearing friction; a leak, which is obviously bad; and a blocked valve or blocked inlet. Those are the three failure modes we've identified and that we can use in our modeling. In reality, we'll eventually have a real pump with sensors on it, and then we'll be able to validate our algorithm and make sure we're producing the right kind of diagnoses. But at this point we need to start bringing in our data and really looking at it, and, as I hinted, we don't actually have a pump. We're a software company; we pump software, knowledge, and mathematics into the world. We will eventually collaborate with someone who does have a pump. And this actually works out well, because, as you saw, the operator wasn't having a very good day, and we don't want to break all of the pumps just to get some data to build our model. We can simulate it instead. We have good knowledge of the physics of this pump, there are many pumps like this, and we have a CAD model that we can just bring in and start simulating from.
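To give a flavor of how that repeated fault simulation might be scripted, here is a minimal sketch, assuming a Simulink digital twin named pump_model whose fault severities are set through workspace variables. The model name, variable names, and value ranges are our stand-ins, not the actual project's; it requires Simulink and Parallel Computing Toolbox.

```matlab
% Sweep hypothetical fault severities and simulate each case in parallel
leakArea   = linspace(0, 1e-6, 5);    % hypothetical leak severities
blockRatio = linspace(0, 0.8, 5);     % hypothetical inlet blockages
[L, B] = ndgrid(leakArea, blockRatio);

for k = numel(L):-1:1                 % one SimulationInput per fault case
    in(k) = Simulink.SimulationInput("pump_model");
    in(k) = setVariable(in(k), "leak_area",   L(k));
    in(k) = setVariable(in(k), "block_ratio", B(k));
end

out = parsim(in, "ShowProgress", "on");   % run the sweep on a parallel pool
```

Each element of out would then carry the logged signals for one labeled fault condition, which is the raw material for the training set.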
So we can use a nice digital twin for this, see that it's pretty close to the real thing, and then introduce these faults. Again, this is a much better way to build out failure data than literally breaking all of your pumps. Just to show what this looks like as it's being simulated: this is our digital twin, built in Simulink, which is a physical modeling tool. Here you can see us introducing the faults; this run had a blocked inlet and a leak. We're going to do this over and over again in order to build up our data set. And since this is Big Data Spain, I should talk about the big data aspects. We want a really good data set that spans all of the ranges we believe are faulty, and we'll scale this out too: even our simulations we can run in parallel, on a cluster, because these can get to be pretty big, compute-intensive jobs.

It's then actually quite easy to bring the results into MATLAB and start exploring. But this is a lot of data, so again you have to think ahead about where you want to store it and how you're going to share it with the team. In this case I decided to store it on HDFS, mostly because I wanted to take advantage of Spark for the machine learning. Thinking about pre-processing: lots of us spend lots and lots of time on this. Some of the pre-processing is actually done ahead of time through Kafka, which Jim will talk about in more detail, and that saves me a bit of a headache. But I still need to do some time-series pre-processing: synchronization, making sure everything is on the same scale, and then of course normalizing; your typical machine learning prep. In this case we've got sensor data, so there's the frequency domain to consider. I spend a lot of time in the time domain myself, so when I need to enter the frequency domain I try to visualize the signals and understand what might represent them best. You can explore a little, but ultimately I ended up using fairly standard feature representations: spectral information, RMS values, statistical values, those kinds of things; pulling out all the features that matter.

So let's take a step back: where are we now? We've done some labeling from our simulated data and represented our signals. Next we'll train some models and validate them. We'll do this on a smaller subset of the data, so I can work on my desktop or something local, and then we'll scale it out; like I mentioned, we can use Spark and do that machine learning pretty easily there. Thinking about the models: the plant operator needs the type of fault, whether it's leaking or blocked, and that's a classification problem. The remaining useful life is a regression problem; this is an exponential-degradation kind of problem, so that's numeric regression. For the fault type, I had some indication of what I needed: I knew I didn't want to use the fault flags I had injected, just the actual response values.
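As a concrete illustration of that feature step, here is a minimal sketch of a per-window feature extractor along the lines Heather describes (RMS, statistical moments, spectral content). The function and field names are ours; it assumes the Signal Processing and the Statistics and Machine Learning toolboxes.

```matlab
function f = extractFeatures(x, fs)
% Extract time- and frequency-domain features from one window of samples.
% x  : vector of sensor samples for the window
% fs : sample rate in Hz
    f.rms      = rms(x);               % overall energy
    f.mean     = mean(x);
    f.stdev    = std(x);
    f.skewness = skewness(x);          % asymmetry of the distribution
    f.kurtosis = kurtosis(x);          % impulsiveness of the signal
    [p, freq]  = pspectrum(x, fs);     % power spectrum of the window
    [~, i]     = max(p);
    f.peakFreq = freq(i);              % dominant frequency component
    f.lowBand  = bandpower(x, fs, [0 fs/8]);   % energy in a low band
end
```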
I thought this might be a non-linear problem, but I can simply try all of the machine learning models; it's a three-week sprint, and I need to get where I'm going fast. So I tried a bunch of different models. You can see already that there's some separation in the data, which indicates that I'll probably be able to come up with a good model. I explored a little, again just to make sure the data separated well enough and that our features were right, and took a look at the confusion matrix. It looks good enough, especially for a three-week sprint. I'll definitely come back to this later and try to get better results, but at this point I'll just save out the model, and then I can use it in the next step to start predicting on the real data.

That takes care of the classification problem. What about the remaining useful life? That's a regression problem, like I mentioned. This video shows how, with every data point, we update the state of the model, and as you can see it actually improves as it gets more data: the confidence interval shrinks, so we're in good shape. But this is something we really have to think about as a team, because we need to build it into our architecture: we're going to save out the state and then apply the updated state every time we get a new data point.

I keep talking about this next step, and that's the streaming part. We're going to have incoming data, though not as much as we had to build the model. So far we've been talking about batch processing: we had all of our historical data, which was simulated, trained our model, scaled it up, and started making predictions. But you need to think ahead a little about the incoming data, because it arrives one little packet at a time. I'm going to apply a function that captures all of the steps I just did (sketched below), then update the state like I talked about, and then push the information out to the dashboard so our operator can take a look.

So let's dive into the streaming part. This is basically what I just walked through: we do our pre-processing and our predictions. We bring in a chunk of data; in our case we decided on one second, because we have very granular resolution and we want to make sure we have enough. We also pass in the old state, and later in the code we update the state and write the results back to a stream, so that our operator can see everything coming in as it arrives. I promised I would test and be a good citizen, so I set up a couple of quick unit tests; very straightforward (there's a sketch of one below). We could also test this in reality, in our full architecture, and Jim is going to show that once more of the architecture is built out. So I'll test locally at this point, and we'll come back to more serious testing to make sure it's ready to go. At this point I package up what I've done: I take my streaming function and any dependencies, which the packaging app will find, and I basically pass it all along to Jim.
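To make the shape of that streaming function concrete, here is a minimal sketch, assuming each one-second window arrives as a MATLAB timetable with the previous state passed in alongside it. The function name, the Flow variable, the model file, and the use of exponentialDegradationModel from Predictive Maintenance Toolbox for the RUL step are all our assumptions, not the project's actual code.

```matlab
function [results, state] = detectPumpFaults(dataTT, state)
% One-second window of sensor data in, one row of results out.
% dataTT : timetable with (for illustration) a Flow variable
% state  : struct carrying the RUL model and bookkeeping between windows

    persistent clf
    if isempty(clf)
        s = load("faultClassifier.mat");   % hypothetical saved model file
        clf = s.trainedModel;              % e.g. a fitctree/fitcsvm model
    end

    fs = 1000;                             % assumed sample rate
    f  = extractFeatures(dataTT.Flow, fs); % sketched earlier
    X  = [f.rms f.stdev f.skewness f.kurtosis f.peakFreq];

    % Classify the fault type, e.g. "Normal" | "Leak" | "Blocked"
    faultType = predict(clf, X);

    % Update the degradation model with the newest health indicator and
    % re-estimate remaining life (the model is a handle, updated in place)
    t = hours(dataTT.Properties.RowTimes(end) - state.startTime);
    update(state.rulModel, [t, f.rms]);
    rul = predictRUL(state.rulModel, state.failureThreshold);

    % One-row result timetable, written back to the output stream
    results = timetable(dataTT.Properties.RowTimes(end), ...
        string(faultType), rul, 'VariableNames', {'FaultType', 'RUL'});
end
```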
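And here is a minimal sketch of the kind of quick unit test mentioned above, using the matlab.unittest framework; the expected labels and the initialState helper are illustrative, not part of the project as shown.

```matlab
classdef TestDetectPumpFaults < matlab.unittest.TestCase
    methods (Test)
        function returnsOneRowPerWindow(testCase)
            % One second of synthetic "healthy" flow data at 1 kHz
            t  = datetime(2018,11,14) + seconds(0:1/1000:1-1/1000)';
            tt = timetable(t, randn(1000,1), 'VariableNames', {'Flow'});

            % initialState() is a hypothetical project helper
            [results, ~] = detectPumpFaults(tt, initialState());

            testCase.verifySize(results, [1 2]);   % one row, two variables
            testCase.verifyTrue(ismember(results.FaultType, ...
                ["Normal" "Leak" "Blocked"]));
        end
    end
end
```

It would run with runtests("TestDetectPumpFaults") as part of the pre-push checks.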
Before I do that, though, I want to make sure I check in with everybody. With the operator, I want to share the results: I need to make sure the data look right and the fault values look reasonable. And with the system architect, I can just push the code; we use source control, no big deal. So now I'm going to turn it over to Jim, and he'll walk you through building this out in the architecture.

Okay, thanks, Heather. Before we start diving into the architecture, I want to take a quick review of our requirements, because there are things I need to know before I start my work. We just saw a lot of signal processing and a lot of algorithms that Heather designed, and her choices of algorithms and techniques actually have implications for how we need to present the data to the code in order to get the right answer. We know we're going to have a very fixed timestamp cadence. We know that, because of the frequency resolution we need, we need a very specific window size, or at least a minimum window size. And I've been told that initially we're going to prototype with a few pumps, but eventually, if we're successful and we run this across the whole plant, it'll scale to hundreds or more. So the technology choices I make have to be things we can scale up if needed. And of course there are our operator's requirements: he needs a good visual dashboard of some kind, so that he gets alerts when there are failures and has a way to query the remaining useful life of his assets. We're going to make sure we get that in there.

What I'm going to do now is focus on the middle box, and for the next several slides I'll talk about how we built the system out. Our production environment has five basic pieces. The first piece I want to talk about is Apache Kafka, which we decided to use to ingest our data. We chose it because it's a very robust system, and it also sets us up for the future: if we really need to scale up, we'll be able to do it. We built a connector that feeds data from Kafka into our MATLAB code and implements a lot of very important features that we need to get right. We deployed it on Azure using Docker, as a microservice, through a feature called Azure Container Instances. We found those very easy to work with; it's a very nice Azure feature that lets you deploy a standalone container, and it's really good for the kind of stateless microservices you need to build to connect things together. We also provisioned a MATLAB Production Server, a product that enables you to deploy your MATLAB code as a function-as-a-service, and the function-as-a-service style of deployment is a really nice fit for stream processing, so we have a tool that fits neatly into this architecture. And as Heather mentioned, we have a stateful model that needs to be updated on every window, so we need a very high-performance, low-latency store for that state, because it's what we typically refer to as hot data: something that really needs to be available.
It's something that we really need to be available so what we did is we provisioned ourselves a Redis cache in Azure and We we built some connectivity in to be able to talk to that We also Want to ingest all of our data as well as our results So we also built a we also there's an arrow going from production server back to Kafka We're actually going to feed our results back to Kafka as another time series and We also For the benefit of our dashboard We also have a storage layer that we need and and so that what we we can do is is we can durably store all of The the the raw data that's coming in as well as the outputs of our model And we've used elastic search for that and not not shown on this picture We actually use the technology we used to get from Kafka to elastic search was was we use Kafka connect Which is a very nice Connector from Confluent that lets you dump your Kafka data directly in elastic search and it indexes it for you And it's a it's a we we found that to be very helpful okay, so I'm going to take a start taking a little bit of a deeper dive into some of the components we use This is Matlab production server. We've made it available on Azure it's easy to run on Azure with the whole things driven by an arm template and we provision a whole stack for use that you can get up and running and We also provide there's a lot of connectors available for production server That enable you to connect to your data. So we have streaming We have connectors that that'll that'll put streaming that'll that'll attach streaming data to the server And we also have connectors for storage and databases as well So I'm going to dive a little bit deeper into our coffee connector This is the one piece of the system that we actually spent Some some serious time on right we we had some very specific Requirements that we had to meet for our streaming data. So what we did is we built a Very simple connector, but but powerful connector that that we deploy as a microservice Using docker. We also built a publisher for Kafka a very simple publisher that just lets us push our results back to a derived stream and There was a lot of there's a several very important requirements that we had to meet With this part of the system First of all production server is an application server and it uses HTTP So we had to kind of bridge the difference between streaming data and request response So what we had to do was build a system that knew how to batch data into the proper batches and send it to the server We mentioned earlier that that the algorithms were using require very precise Definition of time and when things are happening in the system. Okay, so we can't use ingest time as Our as our time stamps We actually have to take the time stamps of the events that are happening on the pumps themselves And we have to make sure we put them back together in the right order because they because in a production environment These things may not always arrive in the order that they were produced Out on the edge and we of course we need to bucket these things into time windows because we're doing signal processing and we need very specific a Time definition there. 
We also need to do asynchronous processing, for two reasons. Number one, processing on the server is done asynchronously for performance. But number two, we also have an interactive workflow that we want to support: we would like a developer to be able to debug on live streaming data, and that's actually a really hard problem. So we decided we really needed to implement a somewhat more advanced execution model than you'll find in a typical Kafka connector. And of course there's a lot of orchestration we have to do under the covers. In particular, for our state store, we have to manage the keys in our Redis cache correctly so that different partitions aren't stepping on each other, and we also want to fully exploit Kafka's partition model so that we can scale up in the future if we have to. The other requirement we had was to pass our data as a MATLAB timetable, a MATLAB data type that is very well tuned and designed for time-series data, and we wanted to make that really easy. The last thing I want to do is go back to Heather and say, "Oh, by the way, Heather, we created this pipeline that puts some data in, but you have to go rewrite all your code now, because we did some weird stuff." We want her to be able to deploy her code exactly as she uses it on her desktop, in her development environment, which is a lot more constrained, and we want to solve all the hard problems of reconstructing the data in the way she needs to see it.

So that's what we did, and this is a picture of the architecture of our Kafka connector. It's a pretty straightforward problem for those of you who are familiar with Kafka; you've probably done this before. We have a pool of consumers that reads from a topic. We do a partition grouping first, and then a second grouping based on windows. We manage the timestamps, and when we're sure that we've seen all the data for a given window, we release it to the next stage in the process: an async request handler that manages all the orchestration of asynchronous processing, including the case where I might want to hit a breakpoint in a debug environment and stop the whole world for a while, while I sit and play with some data. So those were our basic requirements, and we did an okay job with them. The consequence is that the programming model we ended up with for the user is actually quite nice. We basically have an input stream; we do something to it; it turns into an output stream. The user model in MATLAB is really just working with timetable data types, so the user gets to work with the kinds of data types they're used to, and we take care of all the hard problems behind the scenes for them.
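On the state store: MATLAB Production Server has a data-cache API (mps.cache) that can be backed by Redis, and below is a minimal sketch of how per-pump state might be read and written through it. The cache name, connection name, and key scheme are hypothetical; in the real system the connector derives the keys from Kafka partitions so they don't collide.

```matlab
% Connect to a server-configured data cache backed by Redis
c = mps.cache.connect('pumpState', 'Connection', 'redisOnAzure');

key   = 'pump42';            % in practice: derived from topic/partition
state = get(c, key);         % previous model state for this pump

% ... run one window through the streaming function sketched earlier ...
[results, state] = detectPumpFaults(windowTT, state);

put(c, key, state);          % persist updated state for the next window
```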
So here's what happens: a window comes in and we apply the MATLAB function you saw earlier. We produce an output that goes into our output stream, and we store our state, the model state that has to be updated on each iteration, into our Redis cache. Then we just keep doing that: we read the old state in, apply the function, produce output, and store the state again, over and over. That's how our stream processing logic works.

What I'm going to show now is that, before we're ready to turn on our production system, we actually do want to support debugging in the production environment. There's a huge leap between your desktop and production; the environments are very, very different. So we've developed a capability that lets you debug your stream processing function right in your desktop MATLAB. What I'm showing in this video: we open a project, the same project Heather showed earlier that we used to package our code, and we start a session. We're actually consuming both streams into MATLAB: we consume the raw data, we publish the results back to Kafka, and then we consume that result stream into the same session as well, so we can debug both of them. What I'm debugging right now is my input data. I can inspect my variables, and here's a view of the table I showed earlier, of live data as it's happening. While we're at a breakpoint here, we've actually paused all the Kafka topics, so they're not complaining; they know what we're doing. We can also look at our state; this is our model. And I created a little dummy function so that I could publish my results stream, because I want to look at what I'm actually publishing into my results while the stream is processing, and this lets me do that. In this case my results stream is very simple, only a single row, but I can look at the data in tabular form and reason about it. I'm going to assume this data looks okay; I actually don't know for sure, but it's fine.

So what we do next is finish our application. The last step in our process is to build a Kibana dashboard that we can present to the plant operator. While all of this was happening, we were ingesting all of this data behind the scenes, through a kind of cold path, into Elasticsearch. So now we're going to finish our sprint, produce a basic dashboard, and share it with the plant operator. The plant operator isn't here right now, so I'm going to let Heather take control again, and she'll walk us through the last few steps in the process.
Thanks, Jim. Like you saw, the plant operator is still back home dealing with his pumps and all of his headaches, but hopefully this will help him, because now we've actually got a dashboard. We don't have Wi-Fi in here for this very talk, but we're actually set up right outside, so you can come and check it out. As for the plant operator: they tell me they chose Kibana because of its time-series visualizations and how easy they are to create, and we can actually now make some decisions. The dashboard shows us the live streaming data, we can see how many pumps are blocked or leaking, and we also get a good estimate of the remaining useful life. Of course, we'll probably want to build this out even more once the plant operator really gets his hands on it, once he's done mopping up all the pump failures.

So let's bring Jim back and have a team retrospective, which we do at the end of every sprint. First: we did it. That's the most important thing. We literally did it in three weeks, and we built out the entire architecture with all the models, and it works well enough for a three-week sprint. How did we get there? The digital twin was super helpful: again, we don't want to break all of our pumps, so we could generate the data, train the models, and generate loads of data, which lets us be pretty confident about them. You saw how fast the prototyping was. Of course I want to go back to that model, but at least I have something I can put in production, knowing it will work and give some kind of reasonable result; then I can come back in the next sprint and really dive into the model a bit more. And then, as Jim showed, it's fairly easy to pull this together on a cloud platform and use best-in-class architectures for it. So, next steps: we'll obviously take another look at that model, because that went by in about five seconds, and I think you usually spend a little more time on the machine learning. Then we'll get our real pump data and test out our system again. And of course we'll do some customization, probably some little tweaks to the architecture to make sure things are stitching together well, security, those kinds of things.

But we did it, and we're going to celebrate: we're going out tonight for tapas, so please join us, and if not before then, we'll see you there. I'll pay; I'm very happy about my models being successful. Actually, the plant operator will pay, because his headaches are gone. Again, we're right outside, so we invite you to come out, have a mojito, chat about some of this, and see the actual dashboard in real time; I think some of the folks back home are even working on it as we speak. And there are some resources: the reference architectures are on GitHub, and there are lots of examples and things to walk through. We really look forward to talking to you, so please come visit. Thank you.

We probably have time for questions; I don't know if that's the thing. Yes, it is. So if anyone has any questions, we can do that now; this is the moment to ask. Okay, one question over here. This is the person who has asked the most questions all day, three in a row now, so he gets an A for participation. Thanks for the talk. I like to ask some questions because, well, it's interesting, and we love them.
Well, my question is this: as data scientists, we often work in an office. For this particular project, did you go to the actual plant, or did the physics person on your team go there? Did you actually consult with people in person?

That's a good question; I guess we could both handle it. In real life we would do that. We work with lots of customers directly, and we have field people who go and work with them. What you're seeing here is a very agile approach to this problem. Instead of going to a plant, meeting with a customer, and gathering six months' worth of requirements documents, we asked a few questions, came back three or four weeks later, and showed him a piece of software. And I think that's really the way to do this. For our part, of course, we're MathWorks, so we have all these tools available to us; we basically took those tools, put them together, and did it. And this is actually a real customer. Can we say who? Yes: Baker Hughes. We're allowed to talk about Baker Hughes, so we can come and talk more about that. We have slides showing all of it. There are lots of physical models behind this, definitely informed by lots of research and lots of knowledge from the community. Like Jim mentioned, field people probably built up this model over a couple of years; we took it and put it into production.

Great. Any more questions? Yes, one more. It's somewhat related to the last question, but it's not about the modeling; it's about using the predictions while taking into account data from sensors in the real world. You can synthesize data with the model, and that's great, because you have a very good physics-based model. But data from the real world is quite dirty, much dirtier than that. How do you handle all that noise? Do you model the noise in the physics-based model, inheriting it into the data, so the real sensor data matches the synthesized data? And it's not only white noise, because these are very non-linear, rotating systems, so it's quite difficult to account for all of it and say, okay, the model is fine. That's the first question. The other one: I work in this domain, and I find difficulties in identifying anomalies that have not been identified previously. For example, it's quite reasonable to know that a particular piece tends to break, because you know that piece is the pain point in the system. But if you want to detect anomalies in subsystems that you haven't characterized, that's quite difficult, isn't it?
Yes, yes. I can take the first one, about the data cleaning. For the model, sure enough, we introduced noise in the Simulink model itself, and then I also added a little here and there to make sure the signal processing was going to be robust enough; basically just random noise. But I did think it through. Even though the simulated data was all there, we're obviously going to have missing timestamps, or a sensor is going to drop out or go down. Some of that is taken care of in the Kafka part, because it does some of the synchronization and so on, but I still need to think about all of it: removing missing data, rescaling, all of those typical challenges. For the signal part, I actually consulted some of my colleagues, because we have some pretty sophisticated capabilities there and I only dabble in signal processing; they showed me different ways of introducing the noise. And I think we showed the app a little bit; that's actually very helpful, because you can play around and see what kinds of filters you might use. So yes, it's a little trial and error. Real data would probably inform it more, but we tried to make it as realistic as we could, based on our prior knowledge from other projects: knowing that sensors fail and have limited lives.

And the second question was about real data and about anomalies, right? Certainly the next phase in this project, and in these kinds of projects generally, is always to get real data. In many cases we can't yet; we still have to prove it to the customer. We have to make a value case that says: look, getting real data is actually going to be really costly for you. In many cases, showing a prototype like this is a way to motivate that cost and get management behind it. That can be a really big challenge, as I'm sure you know.

Indeed. For example, the sensors needed to detect some kinds of anomalies operate at very low frequencies, and if you have a filter that cannot pass them, you cannot detect the anomaly; it's like putting a hand in front of your eyes, so you can't see the anomaly even if you could otherwise model it. And setting up these sensors is very expensive right now; they're not widely deployed in industry.

Right. We've explored a little, and probably in the next phase we'll do some system identification work, so maybe the dashboard could have indications of other things that might be going on, like you said. We can build out a bit more breadth in what sorts of anomalies can be detected, even using unsupervised learning or something similar to get some indicators. That would be cool. Thanks; we'll put it on the backlog for the next sprint.

Okay, I have two last things to say, one good and one bad. Which do you prefer first? The bad one? The bad news is that time is over. The good news is that they're willing to keep answering questions, so if you like, go right outside and ask away; Heather and Jim will be really glad to answer all your questions. A big round of applause for them.

Thank you. Thank you so much.