So thank you all for coming to our session today. I think we can get started. So again, thank you for attending our session, titled "AI/ML Data Pipeline Processing with a Go Microservices-Based Solution at the Edge Using Open Source Technology." Sam and I both worked on this particular project together at Intel. We used Golang for developing our solution, because this is a distributed solution deployed at the edge. Before we get started, a few seconds on this slide of notices and disclaimers that the Intel legal team wants us to show. Okay, so this is the content slide. Sam is going to do the introduction first, and then I will cover the project objectives and architecture, followed by Sam, who will then go over the UI and the additional features we had for this project, and the important section, which is the challenges and learnings from this project. And then finally, I will end with the conclusion. So before we get started, a small introduction on ourselves. My name is Neetu Elizabeth Simon, and I work as a senior software engineer in the Network and Edge group at Intel Corporation. I am based out of Chandler, Arizona. My partner in crime here is Sam, and I'll hand over the mic to her. Thanks, Neetu. So I'm Sam, I'm a software engineer at Diagrid, where I work on some of our internal products as well as help contribute to the Dapr open source project. And yeah, we were work besties at Intel, and we'll be covering our last project together, which was an IoT-based solution. Which brings us to the introduction. So you might be wondering, well, what is IoT? And I know it might be pretty obvious to some, but to demystify it, the Internet of Things is this web of numerous devices that are interconnected and interacting with minimal to no human intervention. So there are massive amounts of devices that are getting connected and collecting and sharing their data.
Statista predicts that 75 billion connected things will be in use by the year 2025. So with this rise in the number of devices that we're seeing, let's take a peek at some of this market growth. As of last year, the global IoT market was estimated to be valued at 14.4 trillion dollars. That's trillion with a T, which is a lot of money. And you know, even with this economic downturn that we're experiencing, IoT Analytics is estimating the IoT market to continue growing another 19% this year. So we're seeing such growth thanks to a few different factors, such as new and innovative markets, like retail, healthcare, education, and so forth, as well as increasing industry support from companies like Intel, and strong drivers thanks to cheaper and faster processors and wireless networks. So with this rise in IoT-based solutions, we're also seeing a growth in computer vision-based applications. Cameras are being termed the ultimate "thing" sensor, generating enormous amounts of visual data, so it's critical to derive sense and meaning from this data. Video, along with computer vision and AI, is being termed the "eye" of IoT, which makes sense if you think about a camera lens, right? The eye. And so for our presentation, we'll be covering a healthcare use case. And because it's at Intel, it's a pretty confidential project around healthcare. We have to be mindful of privacy concerns, so we'll keep it a bit more high level with respect to the exact use case. So you can think of it maybe like this brain scan image, where you have an input image on the left and you're doing some processing to get the output results off to the right. And so we wanted to be mindful of keeping this solution scalable and flexible for different healthcare use cases.
You can also think of it like this image here for a radiology workflow, where you have a healthcare device giving you some sort of scan or image-based result, going through a workflow of having some analysis done, down to a prognosis and potential treatment. So it's a really impactful healthcare use case. And we focused on image anomaly detection and edge pipeline enablement. And because of that, there were a few different deployment pain points, such as poor AI performance, considerations on the robustness of the architecture, as well as a distributed deployment scenario, which we'll cover here in a bit. Now I'll pass it over to Neetu for our project objectives. Okay, so thanks, Sam. So Intel is not in the business of selling software. What we do, specifically our team, is build these solutions, which are open source and available for everyone to use, and they help our partners or SIs build their own customized solutions on top of what we have developed. So "reference implementations" is what we call them, or open source software samples. This particular project, what we have developed, is an automated image transfer, processing, and comparison solution. It covers the entire end-to-end machine learning pipeline processing. The entire solution, as I mentioned, is written in Golang, and it's containerized with Docker. So we have these machine learning pipeline processes, which are AI-assisted, and we process all these images, do the inferencing, and then display the results. So at a very, very high level, this is what our solution looks like. You can see we have two devices here. On the left side, we have the OEM device, which is running the Windows operating system. It is connected to an image transfer device; for example, a microscope, a camera, or any other medical device which is capturing these images. We have a bunch of services, written in Golang again.
These are executables which are running on the Windows machine, and all these services are based on the EdgeX Foundry microservices. On the right side, you can see we have the gateway, which is running Linux; in our case, we were using Ubuntu. We again have a bunch of services here, which are written as Go microservices, and all of them are dockerized. All these applications, again, are built on top of the EdgeX application services. The database we are using here is Redis, and then we have a machine learning processing component, which actually helps us run all these machine learning pipelines, do the inferencing, and give us the result back. So you can see we have an MRI image of the brain here, which is captured by the OEM device and then automatically transferred from the OEM to the gateway, where the machine learning processing happens and it gives out the output image there. So now let's get into the architecture side. Before going into too much detail, some of the terminology that we are using here. So what is the OEM? The OEM device is the one which is connected to an image capturing device, and as I mentioned, it can be a microscope, a camera, or just any other medical image capturing device. Now, the gateway is the one which is actually running all these machine learning pipelines. So it's a high-compute device which is running at the edge and getting all the outputs from the pipelines. Now, what do we mean by a pipeline? It is a way to represent the end-to-end machine learning pre-processing and inferencing of the input images. So usually machine learning solutions have stages: data collection, then cleaning, and then you have training and processing. What we are focusing on here is that last stage, which is the inferencing piece. So all the pre-processing that happens right before inferencing, and then the inferencing itself, everything is happening on the gateway here.
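To make the pipeline idea just described a bit more concrete, here is a minimal Go sketch, our own illustration rather than the actual project code: a pipeline is simply a chain of pre-processing stages ending in an inferencing stage, and the stage names here are hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// Stage is one step of a machine learning pipeline: it takes image
// data in and returns transformed data (or an inference result) out.
type Stage func(data string) string

// Pipeline runs its stages in order, feeding each stage's output
// into the next, mirroring pre-processing followed by inferencing.
type Pipeline struct {
	Name   string
	Stages []Stage
}

func (p Pipeline) Run(input string) string {
	for _, stage := range p.Stages {
		input = stage(input)
	}
	return input
}

func main() {
	// Hypothetical stages: normalize the image, then run inference.
	normalize := func(d string) string { return strings.ToLower(d) }
	infer := func(d string) string { return fmt.Sprintf("result(%s)", d) }

	p := Pipeline{Name: "brain-scan-demo", Stages: []Stage{normalize, infer}}
	fmt.Println(p.Run("MRI-IMAGE")) // result(mri-image)
}
```

The real pipelines are composed in Project AIR and executed with OpenVINO, but the shape is the same: the input image flows through each configured step in turn.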
Two terms specific to our project are jobs and tasks. Jobs help us process these input images and track them. What a job does is track the movement of the input file between these different devices and different microservices, and any results or outputs that are produced by processing these machine learning pipelines. And then we have tasks. A task is what matches a job to a particular pipeline which needs to be run. So we have an input image, which is a particular job, that is associated with a particular pipeline, and the component that is used to do this is called the task. So in the next couple of slides, I'll cover some of the main components that we have used in our solution. The first important one is EdgeX. EdgeX Foundry is an open source project hosted under LF Edge. It is a framework used for IoT edge compute. It gives us a lot of application services. We have microservices, we have application services connected to the message bus, we have device services and everything. Mostly all these application services are built in Golang, and this is the foundational microservices-based framework that we are using in our solution. The next one, storage: we are using Redis. That, again, as you all know, is an open source in-memory NoSQL database. The main reason we are using it is because it's already available with EdgeX. It's been seamlessly integrated with EdgeX, so it makes it really easy for us to integrate our solution with EdgeX and Redis. The main data that we are storing in our database is all the details regarding jobs and tasks. The next important component we have in our solution is the machine learning pipeline management. The two important tools we are using here are Project AIR and the Intel Distribution of OpenVINO Toolkit. So what is Project AIR? Project AIR is, again, an open source tool which was developed by TIBCO LABS.
It is used to build, configure, and modify the different machine learning pipelines. It, again, uses EdgeX Foundry, so it made it really easy for us to integrate with TIBCO Project AIR for the entire end-to-end solution development. And then lastly I have here the Intel Distribution of OpenVINO Toolkit. This is, again, an open source tool developed by Intel. It is used for optimizing inferencing on Intel platforms. So these two together help us build these machine learning pipelines and then execute all of them on the gateway side. So at a very high level, with a little more detail, this is our architecture diagram. I will be covering this at later stages as well, but what I want you to focus on is the legend at the bottom. All the dark blue-colored boxes are basically our microservices. The light blue-colored ones are the EdgeX application services. The difference between these two is that the light blue-colored ones talk directly to the EdgeX message bus, whereas the dark blue-colored ones do not. And then we have the purple-colored boxes; these are the EdgeX components. So you can see Redis, which we're just using as-is. We have the EdgeX MQTT broker, which is our message bus. We have Consul for configuration management, and then we have EdgeX core metadata for device management. And then we have a black box; you can see that's the pipeline execution piece, which has, as I mentioned, two components helping us do the pipeline execution here: Intel OpenVINO and TIBCO Project AIR. You can see the different connections we have. We have TCP requests to the databases, we have the EdgeX MQTT connections, and then we have HTTP REST calls. We also have some file system operations; you can see the green boxes, one on the OEM side and the other on the gateway side. So first, going over the microservices we have on the OEM side.
So as I mentioned, the image capturing device is going to capture the image; it can be from a microscope, a camera, anything. And then it gets stored in the input folder of the OEM. So that's stage one. We have a microservice called FileWatcher, which is sitting and watching that particular folder. It sees that there is an image, and it contacts the data organizer microservice. The data organizer microservice is going to contact another microservice on the gateway, called JobRepository. That microservice basically checks: have you seen this file before? Is there a pipeline that needs to be run for this particular image? If so, right, it sends a call back, and then it tells this file sender OEM microservice to basically copy this file from the input side and send it to the gateway side. And then we have some processing which happens. We get the results back; results can be just a JSON message, or they can be output files. And then we have this file receiver OEM, which will make a request to the gateway to send us back the output files. So the gateway then sends all these output images that are generated from that machine learning processing back to the OEM device, on the output side. So now we have all the microservices which are running on the gateway side. As I mentioned before, we have the OEM first contacting the gateway through that JobRepository microservice. It will create a job and then store it in Redis. Next, we have a microservice called TaskLauncher, which is actually going to check if there is any associated pipeline which can be run on this particular input image we are seeing. If so, right, it will send a request to the OEM, and then you can see we have the file receiver gateway microservice, which will accept the input image that is being sent from the OEM to the gateway. And this particular microservice will then save it into the input folder on the gateway side.
So now the TaskLauncher is the one which will send all these details about the input image, the pipelines, and everything to this pipeline execution module through the MQTT broker. And this is where the actual inferencing happens, right? We feed the input image into these machine learning pipelines. It runs all the inferencing for the different models we have, and then we have OpenVINO and TIBCO Project AIR, which help us execute all these pipelines and generate the output. So once the output is generated, we have the file receiver on the OEM side, which contacts the gateway, saying please send me back all the files, right? And the file sender gateway, you can see here, sends everything back to the OEM. We have set up the architecture in this way because we had some firewall restrictions: only the OEM can make a request to the gateway, and not the other way around. So this is why our setup is done this way. So in the next slide, we have an animation which actually shows how the files are moving between the devices, and how the jobs and tasks are also getting executed. It is a little fast, so I'll try to follow along with the animation. Okay, so, yeah, you can see the file being captured. It's copied onto the input system. The OEM contacts the gateway. It's checking in Redis if we have any associated pipelines to run for that input image. And then you can see we have the file sender OEM microservice, which is sending that input file to the gateway, right? And the file receiver actually saves the file in the input folder on the gateway side. Now, the TaskLauncher gives all these details to the pipeline execution module. And then this is where the inference happens, right? So the pipeline gets executed. We have a bunch of output files being generated, and they're stored in the output folder on the gateway. So now you can see we have the communication happening between the gateway and the OEM.
And the file receiver microservice is actually asking for the output files to be sent back to the OEM. So you can see they get sent back and then stored in the output folder on the OEM end, okay? So the next section is the UI and features, and I'll let Sam cover that. Thanks, Neetu. So we had a few different UIs going on for this project. The first of which is the Project AIR UI, and that was for our Project AIR composability and deployment of our pipelines. So you can see there are a bunch of different steps going on here for our Python pipelines, including some broadcasting, some error handling, and so forth. So it was a nice UI to create your pipelines, and then you could click that button to deploy a pipeline. From there, we had Portainer, where we could see all of our Docker components. And so whenever you deployed a Project AIR pipeline, you could see a Docker container start for that pipeline processing. So it was nice to be able to see that resource being created and for us to be able to work with it. And on top of that, we had an Angular web UI. So we know that to be our TypeScript-based, free, open source web application framework, and it's something that was pretty commonly used within Intel. For that we had task creation, job monitoring and observability, and actually internationalization. How many of y'all have done internationalization on a web UI before? Okay, a decent handful, you know. So it's kind of interesting; not the most common, sometimes. So pretty interesting. So here you can see our landing page for tasks. You can see some general information on the task, and I'll go ahead and click that add task button. So here you see some of the different fields for our tasks, where you can specify a description and which pipeline you want to run. So these pipelines are auto-propagated, so it's all dynamic based on the pipelines we deployed from our Project AIR UI.
So you can have an only-file pipeline, you can have a multi-file output processing pipeline, and so forth. We also had some specifications where you could say, hey, if this input image matches exactly test image.tif, run this pipeline, or if it's like a prefix or a suffix, then run this pipeline. So we added some modularity in here, as well as a way to specify model parameters. From there, again, you could see the task we just created, and you could see that we can filter things. And we wanted to add in some pagination, keeping in mind scalability, because you could potentially get a lot of images, right? So we wanted to keep that in mind. From there, we have some processing going on, and you can follow that throughout the system using this jobs landing page. So you can see here a few extra fields, and that's because we had a complex architecture, and we wanted to get some insight into where the processing is occurring, like where an issue is occurring in the case that there might be a network issue or transient issue. So we had some pipeline status information, job status information, and information on our files, such as the input files as well as the output files. So you could see those output file results if you clicked on the blue box. Speaking of those files, as Neetu mentioned, we had this two-system setup. So here's kind of another visualization. The top represents that OEM machine, where you have the input directory your images go into. We copy that over to the gateway device, right? So that way we can apply the processing on those input images. That then goes to the bottom right, where you have your output files generated, and then our system copies that back over to the OEM device. And then we had an archival process, just to be mindful of our resource consumption. Observability was a bit more vanilla. So we did have, again, these two systems, and we used your typical TIG stack, right?
T is Telegraf, I is InfluxDB for the storage, and then G is Grafana. We had Telegraf on both machines so we could get those system health metrics on both devices. Log analytics was also a bit more vanilla. We had a variation of the ELK stack. ELK, right, is E for Elasticsearch, L for Logstash, K for Kibana. Instead of Logstash, though, we actually used Filebeat, which hits home because that's written in Go, and I love Go. So it's really efficient, and it actually removed the JVM module requirement that Logstash introduced. And we wanted to minimize additional software requirements needed for the project, so that's why we went with Filebeat instead. So again, it was installed on both machines, and then we had Elasticsearch and Kibana for those visualizations. Here's one of the fun bits. This is the internationalization. So how many letters are in "internationalization"? Y'all counting? It's 18 between the i and the n, and that's what the Go i18n name kind of stands for. So i18n represents internationalization. So we used the go-i18n package to localize some of that pipeline status and job status related information, as well as some of our error detail information. And so what that meant is, in our Go microservices, we had to add a language bundle and load our translations, which we had to work with an internationalization team on to give us those Chinese translations. We created a localizer for our handlers, and then as soon as we got that Accept-Language header and it matched to Chinese, we would respond back with that translated data. We also had an integration testing setup, which had a few different components. So we wanted a way to set up our test dependencies very easily, and we were using Docker Compose on the gateway side. And so for that we ended up using the Testcontainers for Go package. So that made it a homogeneous way for us to spin up our services, just normally as well as in a testing environment.
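The Accept-Language flow described above can be sketched without the actual go-i18n dependency. The project itself used go-i18n message bundles and a Localizer; this stdlib-only version, with made-up message IDs and translations, just shows the shape of the request handling:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// messages maps a base language tag to translated job-status strings.
// These translations are illustrative only, not the project's bundle.
var messages = map[string]map[string]string{
	"en": {"JobComplete": "Job complete"},
	"zh": {"JobComplete": "任务完成"},
}

// localize picks the first supported language from an Accept-Language
// header, falling back to English, much like a go-i18n Localizer would.
func localize(acceptLanguage, messageID string) string {
	for _, part := range strings.Split(acceptLanguage, ",") {
		tag := strings.TrimSpace(strings.SplitN(part, ";", 2)[0])
		base := strings.SplitN(tag, "-", 2)[0]
		if m, ok := messages[base]; ok {
			return m[messageID]
		}
	}
	return messages["en"][messageID]
}

func main() {
	h := func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, localize(r.Header.Get("Accept-Language"), "JobComplete"))
	}
	req := httptest.NewRequest("GET", "/status", nil)
	req.Header.Set("Accept-Language", "zh-CN,zh;q=0.9")
	rec := httptest.NewRecorder()
	h(rec, req)
	fmt.Println(rec.Body.String()) // the Chinese translation
}
```

With go-i18n, the map lookup is replaced by loading translation files into a Bundle and calling Localize with the request's header, but the handler-level flow is the same.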
We also used httpexpect, which is just another Go open source package, for doing those RESTful calls on our services that were up and running for our tests. Lastly, we had a release requirement for having a test report, and for that we used the go-test-report package, another open source Go package. All right, we'll shift gears into the challenges and learnings, because we all know we have them and they can be pretty fun sometimes. So one of the big things here was the distributed deployment scenario. The team had additional hardware, so we all had multiple machines all over our desks, and yeah, we didn't have much space for this project. And because we had some services running on our Windows devices, we had to look at how to deploy to a Windows box. For that we started looking into go-msi. Just another Go package, and this is a Go package to generate an MSI package for your Go microservices. And MSI is an acronym, so to demystify that: it stands for Microsoft Software Installer. So it's a Windows installer format that lets you install Windows applications, update those Windows applications, and so forth. The downfall here is it did require additional components, which we didn't like, including WiX, the Windows Installer XML toolset, which also meant we had to manually go and add that to our path on the Windows machine, which was kind of a pain, and it was just additional steps we didn't like. So we ended up switching to a zip folder of .bat files. And .bat is just a batch file: a script file that stores commands and executes them in serial order. And so that's what we ended up switching to for the deployment of our Windows microservices. And because of this, we did have different deployment steps per machine, which was also a bit more involved. So you can see here, it's kind of similar but kind of different for both devices.
So for our OEM device services, we would have to build those OEM targets, copy them over to a zip folder on a USB stick, then plug that into our OEM device, copy the zip folder over, and then use PowerShell to start our services using that .bat file or script. On the gateway side, kind of similar again, but different. So we had to build those gateway targets, generate our build using Docker, grab that host IP, and then use our make targets to start our gateway services. So again, kind of similar but kind of different, and it kind of made it a pain to test each other's PRs, because we had to be thorough, right? And hold each other accountable, but there were just all these additional steps. And it also made it a pain when it came to automation. So when you think about end-to-end testing and you have two devices, it just meant twice the work. And so, you know, our team actually had never had to set up a Windows runner before, and we were actually using Jenkins at first for this project. But then we were talking about open sourcing the project, and so we started looking into GitHub Actions, and that was a bit more appealing. So we ended up switching to GitHub Actions, and there was still that learning curve of setting up a Windows GitHub Actions runner, but it made it a little less work for us down the road. But there was still also the pain point of integrating that with our Linux microservices in our automated testing. So a bit of extra work here. Something else we were also mindful of is the service wait strategy. Because we had services going on two machines, there were all of those what-ifs. What if there's a transient issue? What if there's a network issue? What if one microservice has trouble communicating with another, or it's not up and ready yet for the processing to occur like it expects? Well, we had these concerns at deployment time as well as testing time. So we wanted some sort of a homogeneous setup for both cases.
And we had to be mindful because we had a lot of services going on here, so it wasn't the simplest of architectures. So we looked at where we could apply this logic. And you see this quite frequently in the Docker layer, where you'll add a wrapper on your Docker command or entry point. Most frequently I've seen the vishnubob wait-for-it bash script for this. But, you know, not all of our services were using Docker, so that didn't make the most sense for us. So that left: where else can we do this? And that brought us to our Go application code. And for that, we used the wait-for-it Go package, which is where you wait for that TCP host and port connection. So we could apply that in all of our main.go files and wait for our dependent services to be up and running before starting that service. And it was fine. But more challenges, because, so who all has experience with Go here? Okay, a good number of you. So you'll probably know this, right? But this wait-for-it Go package was using package main. And what that meant was we couldn't just consume it. We had to copy-paste that code, give attributions, of course, and then package it up so that we could consume it. So it was just this extra duplication that we were lugging around with us. And we did contribute back to that open source project; the PRs are out there. But that did give us a consistent way to wait for our other services to be up and ready before proceeding. So we also had some observability fun. Our org historically used InfluxDB v1, but we thought we were hotshots using InfluxDB v2. And so that made a few other changes, right? So typically with InfluxQL, you use the InfluxDB SQL-like query language. With v2, that meant Flux-style queries. Which, you know, that's fine, that's all right. It's cutting edge, it sounds good. But if you think about it, it was a bit of the bleeding edge, where it was almost painful, because this was it. Like, these were all of the dashboards that we had readily available to use.
Just five. So again, it was a bit painful, because we were restricted, because we were using that cutting-edge technology here. And if I zoom in on one of these dashboards, it looks all right. But if I read here, it says: this is a copy of a dashboard I found here, modified to work with Flux syntax, so for InfluxDB v2. But it also says: I have no idea if everything is correct, and it could use work. That left a little bit to be desired with respect to confidence and, like, feeling really good about using this dashboard. But again, we only had five, so what else are we gonna do, right? So this is what we went with, and it worked, it was fine. We got to accomplish the sprint task, and it did its job. But again, some learnings for when you are looking to use the latest and greatest versions and you're feeling like a hotshot: you gotta be mindful of, well, what does that mean down the road, right? So one last thing before I pass it back to Neetu: we did have some idempotency issues. And this was with respect to those end-to-end tests. I actually think this is a more common problem than we all talk about. And what I mean here is, we were passing file information back and forth, checking statuses at different points in the processing. And I had a beefy laptop, so I felt really nice and good, and the tests were really reliable from my machine. But Neetu didn't have as good of a laptop as I did, and so the timing was not as consistent in passing for her. So for those end-to-end tests, she would up the time a little bit just to see if it would pass. And it would, but she had to adjust the timing. So it wasn't the best experience on our team, because we did have some flaky test cases, right? And I think, again, it's more common than we all talk about, unfortunately. So for this one, I don't have a "here's our answer," because I don't have the answer.
It was just something we all kind of knew was a thing going on in our project, and we gave ourselves a little buffer on those sleep times, and we tried to figure out a few other ways of giving ourselves that extra buffer, that wiggle room, especially when it came to our automated testing. But we just kind of squinted our eyes and it kind of worked. So no answer there, but again, a good learning to keep in mind. So now I'll pass back to Neetu for the last bit. Thanks, Sam. So this was the most interesting challenge for us, because we had to make a project shift here. As I mentioned earlier, we were working with our partners at TIBCO, and we had done all our project work and released everything by the end of last year. And then we got to know that TIBCO was getting acquired by another company, Citrix, and they were shelving this entire project. So you can see here, in August of last year, after that, there have been no contributions, and the entire maintenance on this project stopped. So we had to make an immediate shift on what needed to be done. And that's how I'll end our presentation today. Our team currently is working on replacing all these Project AIR components with other open source tools. The biggest feature that Project AIR gave us was that pipeline composability and deployment stage. So we have found other tools which will help us do this, and all of this is, again, open source. So currently we are working on integrating all these tools into our project, and then we'll release everything as an open source sample by the end of this quarter. That is what we are working on currently. Once it's released, it will be available on open.intel.com for anyone to just download and work with. And then the future for this project is basically that we are looking at other use cases, how we scale it, and how we bring more features into this particular project. So that's all we have. Thank you, and we can take questions if we have time. I think we have a few minutes. Yeah.
Are there any questions? Don't be shy. Thank you for the presentation. You guys actually built this stuff. So where are you hosting it? And what was, I guess I missed it, when you actually deployed it to a production or pre-production environment, what was the performance? And also, why did you decide to go with microservices? I'm all for it, but, I mean, maybe you heard about AWS kind of going back. Any kind of headaches you got from doing microservices? Maybe you thought at some point, well, maybe you shouldn't have done it, or some other experience. So kind of three questions there. Thank you. So the first question, if I remember, was where do we host it? Once it's released, it will be available on open.intel.com. Yeah. The second question was about the performance? Yeah, the performance, in whatever pre-production environment you have right now, what's the actual performance there, compared to your tests, compared to your local machines? Right. So are you asking about the inferencing performance, or? Overall, yeah. We don't necessarily have hard numbers on this, because this was kind of getting out a proof of concept, working with some proprietary models and getting everything hooked together. I'm sure that's one of the upcoming stages. That's the next goal for us, because we have to make sure it is working the best on the Intel hardware. And the major component that we are using is that Intel OpenVINO toolkit, which optimizes it and runs all these machine learning pipelines the fastest it can. So you're basically not at the fine-tuning stage yet? Yes. I mean, it's already fine-tuned with the OpenVINO toolkit that we are using; we know that. It's just a matter of now getting the numbers out: how much has it improved because of using that tool specifically on Intel hardware? And with respect to question three, in terms of the monolith: why microservices? Yeah, microservices versus a monolith.
Yeah, I mean, some of our microservices, like the file receiver gateway, file receiver OEM, file sender gateway, and file sender OEM, had some duplication in terms of their logic. And again, this is a distributed edge deployment; as you can see, it's two different devices. So we did not want to go with a monolithic architecture; we wanted to make it as modular as possible and reuse components, just like how we use EdgeX Foundry as our base application service. We just reused those services, and that was possible only because it's a microservices architecture. I mean, that's one of the advantages, right? Yeah, yeah. All right, thank you. Thank you for the questions. Anyone else? Yeah. Okay, performance-wise, how well does this microservice setup compare to something like a very strict monolithic architecture? Are you referring to user-experience-wise? Like how? No, just straight performance. That's a great question. Again, we don't have hard numbers for this, so maybe next year we will have a better answer for you. Awesome, thank you. Yeah, we'll be here. Oh, we'll be here with the second part of the project to talk about it, on open.intel.com. Anyone else? Okay. Really interesting presentation. And I guess I was wondering: since you have multiple devices feeding data into the system, I'd assume with lots of different units and data formats and structures. So without disclosing any confidential information, could you talk about the strategies the system uses for, say, organizing different data structures, pre-processing them, sending them to the inference pipelines, and things like that? Yeah, so the main reason we are using EdgeX is for that very reason: at the edge, you will have different devices, and you need to collect different data from all of them. How do you manage it? How do you manage cameras? Cameras may have their own individual configuration setup, and all of that.
And this is where we use EdgeX. That's the biggest plus point, because EdgeX comes with all these features that we could just rely on, so our solution can work anywhere at the edge. And we templated out certain things, right? So we would support different file extensions for those input images; it was very configurable with our setup. And that's one of the things we wanted to keep in mind: how to make sure this can expand use-case-wise and input-device-wise, right? Like different file extensions, formats, and so forth. Even on the model parameter side of things, in the UI you could specify certain parameters. So we tried to think about templating. The matching piece, if you remember: a task will match a particular input to a particular pipeline, right? All of that is pretty abstract; it can work with any input image. It may not even be a TIFF file, which is what medical images usually use. Yeah. So does the system specialize in image data? Yes. I mean, our use case is very specific to medical images. Thanks. Thanks for the question. Anyone else? Yeah, just a question. So you have your ML pipeline. It's not clear to me whether the intent is for a lot of training data to be fed into it, to train the model, or to have a trained model and then have a single image from your OEM go in and be analyzed by the existing model. Yeah, so it's the second case. And I did mention this: when we say machine learning solution, it basically has four stages, and it can be applied to any kind of data. First, you collect the data. Second, you pre-process it, or do the annotations. The third step is the training. And the fourth step is inferencing, and inferencing happens at the edge. That is where this solution is focused: it's at the deployment stage, where you already have a model. We are working with our data scientists; they give us the model, it's already there. You have to go into the field, deploy it, and then make everything run.
That is the last stage, inferencing, which is where Intel specializes. So your solution doesn't have to worry so much about large data throughput? Just a single image, yes. Right, thank you. Anyone else? Cool? Thank you. Thanks.
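As a closing illustration of the input-to-pipeline matching discussed in the Q&A, here is a minimal Go sketch of a configurable lookup that routes an input file to a named pipeline by its extension. The extension table, pipeline names, and function name are assumptions for illustration, not the project's actual configuration or code.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// pipelineByExt is a hypothetical configuration table: supporting a new
// input format means adding a row here, not writing new code.
var pipelineByExt = map[string]string{
	".tiff": "medical-imaging-pipeline",
	".tif":  "medical-imaging-pipeline",
	".png":  "generic-vision-pipeline",
}

// MatchPipeline returns the pipeline configured for an input file,
// or an error when no rule matches its extension.
func MatchPipeline(input string) (string, error) {
	ext := strings.ToLower(filepath.Ext(input))
	if p, ok := pipelineByExt[ext]; ok {
		return p, nil
	}
	return "", fmt.Errorf("no pipeline configured for %q", ext)
}

func main() {
	p, err := MatchPipeline("brain-scan.TIFF")
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // medical-imaging-pipeline
}
```

A real deployment would match on more than the extension (model parameters, device of origin, and so on), but the shape is the same: a data-driven table consulted by one small, generic matching task.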