Hi everyone, thanks for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled "Toward Zero Unplanned Downtime of Medical Imaging Systems Using Big Data." My name is Sue LeClaire, Director of Marketing at Vertica, and I'll be your host for this webinar. Joining me is Maro Barbieri, Lead Architect of Analytics at Philips.

Before we begin, I want to encourage you to submit questions or comments during the virtual session. You don't have to wait: just type your question or comment in the question box below the slides and click submit. There will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions that we don't get to, we'll do our best to answer offline. Alternatively, you can also visit the Vertica forums to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also, a reminder that you can maximize your screen by clicking the double-arrow button in the lower right corner of the slide. And yes, this virtual session is being recorded and will be available to view on demand this week. We'll send you a notification as soon as it's ready. So let's get started. Maro, over to you.

Thank you. Good day, everyone. Medical imaging systems such as MRI scanners, interventional guided therapy machines, CT scanners, and DXR systems need to provide hospitals with optimal clinical performance, but also a predictable cost of ownership. Clinicians understand the need for maintenance of these devices, but they just want it to be non-intrusive and scheduled. Whenever there is a problem with a system, the hospital expects Philips service to resolve it fast, ideally at the first interaction. In this presentation, you will see how we are using big data to increase the uptime of our medical imaging systems.

I'm sure you have heard of the company Philips. Philips was founded 129 years ago, in 1891, in Eindhoven in the Netherlands, and it started by manufacturing light bulbs and other electrical products. The two brothers, Gerard and Anton, took an investment from their father, Frederick, and set up to manufacture and sell light bulbs. As you may know, the key technologies for making light bulbs were glass and vacuum. When you're good at making glass products, vacuum, and light bulbs, it is an easy step to start making radio valves, as they did, but also X-ray tubes. So Philips entered the market of medical imaging and healthcare technology very early. This is what is at our core as a company, and it's also our future: healthcare. We are in a situation now in which everybody recognizes its importance, and we see incredible trends: a transition from what we call volume-based healthcare to value-based healthcare, where clinical outcomes drive improvements in the healthcare domain; where it's not enough to respond to healthcare challenges, but we need to be involved in prevention and in maintaining the wellness of the population; where, instead of being only episodically in touch with healthcare, we need to continuously monitor and continuously take care of populations; and where, instead of healthcare facilities and technology being available only to a few wealthy countries, we want to make healthcare accessible to everybody throughout the world. And this, of course, poses incredible challenges.
And this is why we are transforming Philips to become a healthcare technology leader. Philips used to be a conglomerate active in many sectors, realizing all kinds of technologies; we have been focusing on healthcare. We have been transitioning from creating and selling products to building solutions that address these healthcare challenges, and from selling boxes to creating long-term relationships with our customers. So if you have known the Philips brand from shavers, televisions, and light bulbs, you probably now also recognize the involvement of Philips in the healthcare domain: in diagnostic imaging, in ultrasound, in image-guided therapy systems, in digital pathology, in non-invasive ventilation, as well as in patient monitoring, intensive care, and telemedicine, but also in radiology, cardiology, and oncology informatics. Philips has become a powerhouse of healthcare technology. To give you an idea, these are the numbers from 2019: almost 20 billion in sales, 4% comparable sales growth with respect to the previous year, and about 10% of sales reinvested in R&D. This is also reflected in the number of patent filings: last year, we filed more than 1,000 patents in the healthcare domain. And the company has about 80,000 employees, active globally in over 100 countries.

So let me focus now on the types of products that are in the scope of this presentation. This is a Philips magnetic resonance imaging scanner, the Ingenia 3.0 Tesla. It's an incredible machine: apart from being very beautiful, as you can see, it's a very powerful technology. It can make high-resolution images of the human body without harmful radiation. And it's a complex machine. First of all, it's massive: it weighs 4,600 kilograms, and it has a superconducting magnet cooled with liquid helium at minus 269 degrees Celsius. It's also full of software, millions and millions of lines of code, and it occupies three rooms. What you see in this picture is the examination room, but there is also a technical room, which is full of equipment, custom hardware, and machinery that is needed to operate this complex device.

This is another system, an interventional guided therapy system, where X-ray is used during interventions with the patient on the table. On the left you see what we call a C-arm, a robotic arm that moves and can take images of the patient while the patient is being operated on. It is used for cardiology, neurology, and cardiovascular interventions. There's a table that moves in very complex ways, and again, it occupies two rooms: the room that we see here, but also a room full of cabinets, hardware, and computers. Another characteristic of this machine is that it is used during medical interventions, so it has to interact with all kinds of other equipment.

This is another system, a computed tomography scanner, the IQon, which is unique due to its special detection technology. It has an image resolution up to 0.5 millimeters, making 1,000 by 1,000 pixel images, and it is also a complex machine. This is a picture of the inside of a comparable device, not exactly an IQon, but it has a rotating gantry which weighs 2.5 tons. It's a combination of an X-ray tube on top, high-voltage generators to power the X-ray tube, and an array of detectors to create the images. This rotates at 220 revolutions per minute, making 50 frames per second, to make 3D reconstructions of the body.
So: a lot of technology, complex technology, and this technology is made for this situation. We make it for clinicians who are busy saving people's lives. Of course they want optimal clinical performance, they want the best technology to treat their patients, but they also want a predictable cost of ownership. They want predictable system operations, and they want their clinical schedules not to be interrupted. They understand these machines are complex, full of technology, and that these machines may require maintenance, may require software updates, and sometimes may even fail and require hardware parts to be replaced. But they don't want to have it unplanned. They don't want unplanned downtime, and they would hate having to send patients home and reschedule visits. So they understand maintenance; they just want it to be scheduled, predictable, and non-intrusive.

So, already a number of years ago, we started a transition from what we call reactive maintenance service of these devices to proactive service. Let me show you what we mean by this. If a system in the field has an issue, the traditional reactive workflow would be that the customer calls a call center and reports the problem. The company servicing the device would dispatch a field service engineer. The field service engineer would go on site and troubleshoot: literally smell, listen for noises, watch for blinking lights or other unusual signs, find the root cause, and perhaps decide that a spare part needs to be replaced. They would order the spare part. The part would have to be delivered to the site, either immediately or the engineer would need to come back another day when the part is available, and then perform the repair. That means replacing the part, doing all the needed tests and validations, and finally releasing the system for clinical use. As you can see, there are a lot of steps, and also handovers of information between different people, even between different organizations.

Wouldn't it be better to keep monitoring the install base, keep observing the machines, and, based on the information collected, detect or even predict when an issue is going to happen? Then, instead of reacting to a customer calling, we can proactively approach the customer, schedule preventive service, and therefore avoid the problem. This is what we call proactive service, and this is what we have been transitioning to using big data. And big data is just one ingredient; in fact, more things are needed. The devices themselves need to be designed for reliability and predictability. If the device is a black box that does not communicate its status to the outside world, that does not transmit data, then of course it is not possible to observe it and therefore predict issues. This also requires a remote service infrastructure, or an IoT infrastructure as it is called nowadays: the possibility to connect the medical device with a data center and an enterprise infrastructure, collect the data, and perform the remote troubleshooting and the predictions. The right processes and the right organization also have to be in place, because an organization that waits for a customer to call, and keeps a number of field service engineers available and a certain amount of spare parts in stock, is a different organization from one that continuously observes the install base and schedules actions to prevent issues.
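To make the proactive workflow concrete, here is a minimal sketch of what "continuously observing the install base and scheduling preventive actions" can look like. The helper functions and the threshold are hypothetical placeholders, not Philips code; only the monitor-score-schedule pattern reflects what is described above.

```python
# Minimal sketch of the proactive-service loop. All helpers passed in
# (telemetry retrieval, risk model, case creation) are hypothetical.

RISK_THRESHOLD = 0.8  # assumed cut-off; in practice tuned per failure mode

def check_install_base(device_ids, get_recent_telemetry, risk_model, create_service_case):
    """Score every connected device and schedule preventive service where needed."""
    for device_id in device_ids:
        telemetry = get_recent_telemetry(device_id)   # recent events, sensor readings
        risk = risk_model(telemetry)                  # estimated probability of failure
        if risk >= RISK_THRESHOLD:
            # Instead of waiting for the customer to call, proactively open a
            # case so the market organization can plan non-intrusive maintenance.
            create_service_case(device_id, risk=risk, reason="predicted_failure")
```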
And another pillar is knowledge management. In order to realize predictive models and predictive service actions, it's important to manage knowledge about failure modes and maintenance procedures very well: to have it standardized, digitalized, and available. And last but not least, of course, the predictive models themselves. We talked about transmitting data from the medical devices in the install base to an enterprise infrastructure that analyzes the data and generates predictions; those predictive models are exactly the last ingredient that is needed.

This is not something that I'm telling you for the first time; it's actually a strategic intent of Philips. We aim for zero unplanned downtime, and we market it that way. It's also not a secret that we do it by using big data. Of course, there could be other methods to achieve the same goal, but we started using big data quite a few years ago. One of the reasons is that our medical devices are already wired to collect lots of data during their normal functioning: they collect events, error logs, and sensor data. To give you an idea, just as an order of magnitude of the size of the data, one MRI scanner can log more than one million events per day, hundreds of thousands of sensor readings, and tens of thousands of other data elements. So this is truly big data. On the other hand, this data was not designed for predictive maintenance. You have to consider that a medical device of this type stays in the field for about 10 years, some a little longer, some a little shorter. These devices were designed 10 years ago, and not all components were designed with predictive maintenance or IoT in mind, with the latest technology of the time; perhaps we were not so forward-looking back then. So the key challenge is taking the data that is already available, that is already logged by the medical devices, integrating it, and creating predictive models.

If we dive a little more into the research challenges, this is one of them: how to integrate diverse data sources, and especially how to automate data provisioning and cleaning. Also, once you have the data, how to create models that can predict failures and the degradation of performance of a single medical device. Once you have these models and alerts, another challenge is how to automatically recommend service actions based on the probabilistic information about these possible failures. And once you have the insights, even if you can recommend actions, recommending an action should be done with the goal of planning maintenance to generate value. That means balancing costs and benefits: preventing unplanned downtime without, of course, scheduling unnecessary interventions, because every intervention is a disruption of the clinical schedule. And there are many more applications that can be thought of, such as the optimal management of spare parts supplies.

So how did we approach this problem? Our approach was to collect into one database, Vertica, a large amount of historical data. First of all, historical data coming from the medical devices: event logs, parameter values, system configurations, sensor readings, all the data that we had at our disposal. And in the same database, together with that, records of failures: maintenance records, service work orders, part replacements, contracts.
So basically, the evidence of failures. Once you have the data from the medical devices and the data about the failures in the same database, it becomes possible to correlate event logs, errors, and sensor readings with records of failures, part replacements, and maintenance operations.

And we did that with a specific approach. We created integrated teams, and every integrated team had three figures — not necessarily three people; there were actually multiple people — but there was at least one business owner from the service organization. The business owner is the person who knows what is relevant, which use case is worth solving for a particular type of product, for a particular market: what is generating value, or what is worthwhile tackling as an organization. Then we had data scientists. Data scientists are the ones who can actually manipulate the data: they can write the queries, they can build the models, they know statistics, they can create visualizations. Last but not least, and very important, subject matter experts. Subject matter experts are the people who know the failure modes and the functioning of the medical devices. Perhaps they come from the design side, or from the service innovation side, or even from the field — people who have been servicing the machines in real life for many, many years. So they are familiar with the failure modes, but also with the type of data that is logged, the processes, and how the systems actually behave, if you allow me, in the wild, in the field.

The combination of these three figures was key, because data scientists alone — basically statisticians, or people who can only do machine learning — are not very effective, because the data is too complicated and the failure modes are too complex. They would spend a huge amount of time just trying to figure out the data, or they would spend their time tackling things that are useless, whereas a subject matter expert knows much more quickly which data points are useful and which phenomena can probably be found in the data and which cannot. So the combination of subject matter experts and data scientists is very powerful, and together, guided by a business owner, we could tackle the most useful use cases first.

These teams set out to work, and they developed three things, mainly. First of all, they developed insights into the failure modes: by looking at the data and analyzing information about what happened in the field, they found out exactly how things fail, in a very pragmatic and quantitative way. They also, of course, set out to develop the predictive models, with the associated alerts and service actions. A predictive model is not just an alert, not just a flag that turns on like a traffic light. There is much more to it: such an alert is to be interpreted and used by a highly skilled and trained engineer, for example in a call center, who needs to evaluate that alert and plan a service action. A service action may involve ordering and replacing an expensive part. It may involve calling up a customer hospital and scheduling a period of downtime to replace a part. So it could have an impact on the clinical practice.
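As an aside before continuing with how alerts are handled: here is a minimal sketch of the correlation step described above, joining device event logs with part-replacement records in Vertica to build a labeled dataset. The table names, column names, and part number are hypothetical; only the join-and-label pattern reflects the approach described in this talk.

```python
# Hypothetical sketch: label each device-day by whether a suspect part was
# replaced shortly afterwards, so event patterns can be correlated with failures.
import vertica_python

conn_info = {"host": "vertica.example.com", "port": 5433,
             "user": "data_scientist", "password": "...", "database": "service_dwh"}

LABELING_QUERY = """
    SELECT e.device_id,
           e.event_date,
           COUNT(*) AS error_events,
           MAX(CASE WHEN r.replacement_date BETWEEN e.event_date
                     AND e.event_date + 14 THEN 1 ELSE 0 END) AS part_replaced_within_14d
    FROM device_events e
    LEFT JOIN part_replacements r
           ON r.device_id = e.device_id
          AND r.part_number = 'XRAY-TUBE-EXAMPLE'   -- hypothetical part
    WHERE e.severity = 'ERROR'
    GROUP BY e.device_id, e.event_date
"""

with vertica_python.connect(**conn_info) as connection:
    cursor = connection.cursor()
    cursor.execute(LABELING_QUERY)
    labeled_rows = cursor.fetchall()   # input for feature engineering and model training
```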
So it is important that the alert is coupled with sufficient evidence and information for such a highly skilled and trained engineer to plan the service action efficiently. That is a lot of work in terms of preparing data, preparing visualizations, and making sure that all the information is represented correctly in a compact form. Additionally, by gaining insight into the failure modes, these teams can provide input to the R&D organization to improve the products.

To summarize this graphically: we took a lot of historical data coming from the medical devices, but also data from relational databases, where the service work orders, the part replacements, and the contract information were. We integrated it and set out to do the data analytics. At that point we don't have value yet. Value only starts appearing when we use the insights from the data analytics — the model — on live data. When we process live data with the model, we can generate alerts, and the alerts can be used to plan the maintenance. That planned maintenance, replacing unplanned downtime, is what creates value.

I cannot show you the details of these predictive models, but to give you an idea, this is just a picture of some of the components of our medical devices for which we have models, for which we cover failure modes: components such as clinical-grade monitors, X-ray tubes, and so forth. And this is for MRI machines: a lot of custom hardware, amplifiers, and other electronics.

The alerts are then displayed in a dashboard, what we call the remote monitoring dashboard. We have a team of remote monitoring engineers that basically surveys the install base, looks at this dashboard, and picks up these alerts. An alert, as I said before, is not just a flag; it contains a lot of information about the failure and about the medical device. The remote monitoring engineers pick up these alerts, review them, and create cases for the market organizations to handle. So they see an alert coming in and they create a case, so that a particular call center in some country can call the customer and make an appointment to schedule a service action. Or it can add a preventive action to the schedule of a field service engineer who is already supposed to visit that customer, for example.

This is a high-level picture of the overall data processing architecture. At the bottom we have the install base. The install base is formed by all our medical devices that are connected to our Philips remote service network. Data is transmitted in a secure way to our enterprise infrastructure, where we have a so-called data lake, which is basically an archive where we store the data as it comes from the customers. It is scrubbed and protected. From there we have ETL (extract, transform, and load) processes that analyze this information in parallel, parse all these files and all this data, and extract the relevant parameters. The reason is that the data coming from the medical devices is very verbose, in legacy formats, sometimes in binary formats, in strange legacy structures. We parse it, structure it, and make it usable by the data science teams. The results are stored in a Vertica cluster, in a data warehouse — the same data warehouse where we also store information from other enterprise systems, from all kinds of databases: Microsoft SQL Server, Teradata, Salesforce applications.
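Before moving on, here is a minimal sketch of the parse-and-load step just described for the device logs: extract structured fields from a verbose, legacy-format log and bulk-load them into Vertica. The log format, table, and columns are hypothetical; `cursor.copy` is the vertica-python bulk-load call for COPY FROM STDIN.

```python
# Hypothetical sketch of one ETL step: parse verbose device logs into rows and
# bulk-load them into the Vertica data warehouse.
import io
import re
import vertica_python

# Assumed log line shape: "<date> <time> <SEVERITY> <code> <free text...>"
EVENT_LINE = re.compile(r"^(?P<ts>\S+ \S+)\s+(?P<severity>\w+)\s+(?P<code>\d+)\s+.*$")

def parse_device_log(raw_text, device_id):
    """Return a CSV payload of structured events; skip unparsable lines."""
    rows = []
    for line in raw_text.splitlines():
        match = EVENT_LINE.match(line)
        if match is None:
            continue  # exhaustive error handling: one bad line must not kill the batch
        rows.append(f"{device_id},{match['ts']},{match['severity']},{match['code']}")
    return "\n".join(rows)

def load_events(conn_info, csv_payload):
    with vertica_python.connect(**conn_info) as connection:
        cursor = connection.cursor()
        cursor.copy(
            "COPY device_events (device_id, event_ts, severity, error_code) "
            "FROM STDIN DELIMITER ','",
            io.StringIO(csv_payload),
        )
        connection.commit()
```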
So the enterprise IT systems are also connected to Vertica, and their data is inserted into Vertica as well. From Vertica, the data is pulled by our predictive models, which are Python and R scripts that run in our proprietary analytics environment. From this environment we generate the alerts, which are then used by the remote monitoring application.

And this is not the only application; remote monitoring is just one case. We also have applications for reactive remote service: whenever we cannot predict or prevent an issue and we need to react to a customer call, we can still use the data to very quickly troubleshoot the system, find the root cause, and advise on the best service action. Additionally, there are reliability dashboards, because all this data can also be used to perform reliability studies and improve the design of the medical devices, and it is used by R&D. And access is possible with all kinds of tools: Vertica gives the flexibility to connect with JDBC, to create dashboards using Power BI or QlikView, or simply to use R and Python directly to perform analytics.

A little summary of the size of the data at the moment: we have integrated about 500 terabytes, more than 300 tables, about 33 million data points, and more than 80 different data sources for our complete connected install base, including our customer relationship management system and SAP. We have also integrated data from the factory and from repair shops. This is very useful, because having information from the factory allows us to characterize components and devices when they are new, when they have not yet been used, so we can model degradation and predict failures much better. We also have many years of historical data and, of course, 24/7 live feeds.

To get all this going, we have chosen very simple designs from the very beginning. The first system was developed back in 2015; at that time, we went from scratch to production in eight months. And it's also a very stable system. To achieve that, we apply what we call exhaustive error handling. Most of the people attending this conference probably know that when you are dealing with big data, you face all kinds of corner cases you thought would never happen. Just because of the sheer volume of the data, you find all kinds of strange things, and that is what you need to take care of if you want to have a stable platform, a stable data pipeline. Another runtime characteristic is that we need to handle live data, but we also need to be able to reprocess large historical data sets, because insights into the data are being generated all the time by the teams using the data. Very often they find not only defects, but they also have change requests: for new data to be extracted, or for data to be extracted or aggregated in a different way. So the platform is basically continuously crunching data. Also, components have built-in monitoring capabilities; transparency builds trust by showing how the platform behaves. People can trust that they have all the data which is available, or, if they don't see the data or something is not functioning, they can see why and where the processing has stopped. A very important point is documentation of data sources.
Every data point has so-called data provenance fields: not only the medical device it comes from, with all its identifiers, but also from which file, from which moment in time, from which row, and from which byte offset the data point comes. It also records when the data point was created and by whom — by whom meaning by which version of the platform and of the ETL. This allows us to identify issues and, when an issue is identified and fixed, to fix only the subset of the data that is impacted by that issue. Again, this creates trust in the data, which is essential for this type of application.

We actually have different environments in our analytics solution. One, which we call the data science environment, is more or less what I've shown so far. It is deployed in our Philips private cloud, but it can also be deployed in public clouds such as Amazon. It contains years of historical data and allows interactive data exploration and human queries; therefore it has a highly variable load. It is used for training machine learning algorithms, and it has been designed to allow rapid prototyping on large data volumes. Another environment is the so-called production environment, where we actually score the models with live data for the generation of the alerts. This environment does not require years of data, just months, because a model does not necessarily need years of data to make a prediction; some models may need a couple of weeks, or a few months — three months, six months — depending on the type of data and on the failure that is being predicted. This environment has highly optimized queries, because the applications are stable: they only change when we deploy new models or new versions of the models. It is designed and optimized for low latency, high throughput, and reliability; there is no human intervention, no human queries. And of course there are also development and staging environments.

Another characteristic of all this work is what we call data-driven service innovation: we use data in every step of the process. The first step is the business case creation. Some people ask: how did you manage to unlock the investment to create such a platform and work on it for years? How did you start? Basically, we started with a business case, and for that business case, again, we used data. Of course, you need to start somewhere, you need to have some data, but you can use data to make a quantitative analysis of the current situation and also, as accurately as possible, estimate the value creation. If you have that, you can justify the investments and you can start building. Next, data is used to decide where to focus your efforts. In our case, we decided to focus on the use cases that had the maximum estimated business impact. With business impact we mean here customer value as well as value for the company: we want to reduce unplanned downtime and give value to our customers, but it would not be sustainable if, to create that value, we started replacing parts without any consideration of the cost. It needs to be sustainable. Then we use data to analyze the failure modes, to actually dig into the data and understand how things fail, for visualization, and to do reliability analysis.
And of course, data is then key for feature engineering in the development of the predictive models, for training the models, and for validating them with historical data. So data is everywhere in the process. And last but not least, these models and this architecture generate new data: about the alerts, about how good the alerts are, how well they can predict failures, how much downtime is being saved, and how many issues have been prevented. This is also data that needs to be analyzed; it provides insights into the performance of the models and can be used to improve them. And once you have the performance of the models, you can use data to quantify, as much as possible, the value which is created. In this way, you go back to the first step: you first created a business case with estimates; can you now actually show that you are creating value? The more you can close this feedback loop and quantify it, the better it is for having more and more impact.

Among the key elements needed to realize this, I want to mention one about data documentation. It's a practice that we started six years ago, and it has proven to be very valuable. We always document how data is extracted and how it is stored, in data model documents. Data model documents specify how data goes from one place to another — in this case, from device logs, for example, to a table in Vertica. They include things such as the definition of duplicates, queries to check for duplicates, and of course the logical design of the tables, the physical design of the tables, and the rationale. Next to that, there is a data dictionary that explains, for each column in the data model, from a subject matter expert's perspective, what it means: its definition and meaning; if it's a measurement, the unit of measure and the range; if it's some sort of label, the expected values; and whether a value is raw or calculated. This is essential for maximizing the value of the data, for allowing people to use the data. Last but not least, there is an ETL design document. It explains how the transformation happens from the source to the destination, including, very importantly, the failure-handling strategy. For example, when you cannot parse part of a file, should you load only what you can parse, or drop the entire file completely — import best-effort, or all-or-nothing? How do you populate records for which there is no value, what are the default values, how is the data normalized or transformed, and how are duplicates avoided? This, again, is very important to give the users of the data a full picture of the data itself.

And this is not just a formal process. The documents are reviewed and approved by all the stakeholders, including subject matter experts, the data scientists, and a function that we have started called the data architect. And of course the documents are available to the end users of the data. We even have links to the documents from the data warehouse itself: if you get access to the database and you are doing your research and you see a table or a view that you think could be interesting, that looks like something you could use in your research — well, the data itself has a link to the document. So from the database, while you're exploring the data, you can retrieve a link to the place where the document is available.
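To illustrate the provenance fields and the data dictionary idea just described, here is a hypothetical sketch of what such a table could look like in Vertica, with provenance columns and column comments that point users to the documented meaning of each field. The table and columns are invented for illustration; `COMMENT ON COLUMN` is standard Vertica SQL.

```python
# Hypothetical sketch: a warehouse table with data provenance fields, plus
# column comments that record the meaning of each field for its users.
import vertica_python

DDL = """
CREATE TABLE IF NOT EXISTS device_sensor_readings (
    device_id     VARCHAR(32),
    reading_ts    TIMESTAMP,
    sensor_name   VARCHAR(64),
    sensor_value  FLOAT,
    -- provenance: where, when, and by what this row was created
    source_file   VARCHAR(512),
    source_row    INTEGER,
    etl_version   VARCHAR(16),
    loaded_at     TIMESTAMP DEFAULT NOW()
)
"""

COMMENTS = [
    "COMMENT ON COLUMN device_sensor_readings.sensor_value IS "
    "'Raw (not calculated) reading; unit of measure defined in the data dictionary'",
    "COMMENT ON COLUMN device_sensor_readings.etl_version IS "
    "'ETL release that produced the row; lets us re-fix only data impacted by a defect'",
]

def create_documented_table(conn_info):
    with vertica_python.connect(**conn_info) as connection:
        cursor = connection.cursor()
        cursor.execute(DDL)
        for statement in COMMENTS:
            cursor.execute(statement)
        connection.commit()
```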
This is just a quick summary of some of the results that I'm allowed to share at this moment. This is about image-guided therapy: for remotely connected systems with the right contracts, using our remote service infrastructure, we have reduced downtime by 14%. More than one out of three cases are resolved remotely, without an engineer having to go on site. We have an 82% first-time-right fix rate: that means the issue is fixed either remotely or, if a visit to the site is needed, only one visit is needed — the engineer goes to the site with the right part and fixes it straight away. On average this results in 135 hours more operational availability per year, and therefore the ability to treat more patients for the same cost. I'd like to conclude by citing some nice testimonials from some of our customers, showing that the value that we've created really has a high impact. And this concludes my presentation. Thanks for your attention so far.

Thank you, Maro. Very interesting. We've got a number of questions that have come in, so let's get to them. The first one: how many devices has Philips connected worldwide? And how do you determine which related sensor data workloads get analyzed with Vertica?

OK, so there are actually two questions. The first question: how many devices are connected worldwide? Well, I'm not allowed to tell you the precise number of connected devices worldwide, but what I can tell you is that we are in the order of tens of thousands of devices, and of all types, actually. And how do we determine which related sensor data gets analyzed with Vertica? Well, a little bit as I said in the presentation, it's a combination of two approaches: a data-driven approach and a knowledge-driven approach. A knowledge-driven approach, because we make maximum use of our knowledge of the failure modes and of the behavior of the medical devices and their components to select what we think are promising data points and promising features. From that moment on, however, data science kicks in, and data science is used to look at the actual data and come up with quantitative information about what is really happening. It could be that an expert is convinced that a particular range of values of a sensor is indicative of a particular failure, and it turns out that maybe they were too optimistic, or the other way around: that in practice there are many other situations they were not aware of that could happen. So thanks to the data, we get a better understanding of the phenomenon and a better model of it. I hope that answers the question.

Yes. We have another question: do you have plans to perform any analytics at the edge? Yeah, that's a good question. I cannot disclose our plans on this right now, but edge devices are certainly one of the options we look at to help our customers toward zero unplanned downtime, and not only that, but also to facilitate the integration of our solution with existing and future hospital IT infrastructure. I mean, we're talking about advanced security and privacy, and guarantees that the data is always safe, that patient data and clinical data do not leave the premises of the hospital, of course, while we enhance our functionality and provide more value with our services. So edge is definitely a very interesting area of innovation.

OK, another question: what are the most helpful Vertica features that you rely on?
I would say the first thing that comes to mind at this moment is ease of integration. With Vertica, we are able to load any data source in a very easy way, and Vertica can be interfaced very easily with all types of clients and applications. This in itself is not unique to Vertica; the added value is that it is coupled with incredible speed — incredible speed for loading and for querying. So it's basically a very versatile tool for innovating fast in data science. Another thing is multiple projections, with advanced encoding and compression. This allows us to perform optimizations only when we need them, and without having to touch applications or queries: if we want to achieve higher performance, we spend a little effort on improving the projections, and we can very often achieve dramatic increases in performance (a small illustrative sketch of this appears at the end of this transcript). Another feature is Eon Mode, which is great for cloud deployments.

OK, another question: what is the number one lesson learned that you can share? I think my advice would be: document and control your entire data pipeline, end to end, and create positive feedback loops. What I hear often is that enterprises that are not digitally native — and Philips is one of them; Philips is 129 years old as a company, so you can imagine the legacy that we have; we were not born on the web, like web companies, with everything online and everything digital — sometimes struggle to innovate with big data or to do data-driven innovation. The data is not available or is in silos, the data is controlled by different parts of the organization with different processes, and there is no super-strong enterprise IT system providing all the data for everybody through APIs. So my advice is, from the very beginning, to aim at creating as soon as possible an end-to-end solution, from data creation to consumption, that creates value for all the stakeholders of the data pipeline. It is important that everyone in the data pipeline, from the producers of the data to the consumers, gets a piece of the value, a piece of the cake. When the value is proven to all stakeholders, everyone will naturally contribute to keeping the data pipeline running and keeping the quality of the data high. I hope that's useful as advice. Thank you.

Yeah, thank you. And in the area of machine learning, what types of innovations do you plan to adopt to help with your data pipeline? So in the area of machine learning, we're looking at things like automatically detecting the deterioration of models to trigger improvement actions, as well as, connected to that, active learning — again, focused on improving the accuracy of our predictive models. Active learning is when additional human labeling of difficult cases is triggered: the classifier may not be able to classify all cases correctly all the time, and instead of just randomly picking some cases for a human to review, you want the costly humans to review only the most valuable cases from a machine learning point of view, the ones that would contribute the most to improving the classifier. Other areas are deep learning and applications of more generic anomaly detection algorithms. The challenge with anomaly detection is that we are not only interested in finding anomalies, but also in recommending proper service actions.
Because without a proper service action, an alert generated because of an anomaly in the data loses most of its value. So this is where I think we... yeah. No, go ahead. No, that's it. Thanks.

OK. All right. That's all the time that we have today for questions. I want to thank the audience for attending Maro's presentation, and also for your questions. If we weren't able to answer your question today, we'll respond via email. And again, our engineers will be on the Vertica forums awaiting your other questions. It would help us greatly if you could give us some feedback and rate this session before you sign off; your ratings will help guide us when we're looking at content to provide for the next Vertica BDC. Also note that a replay of today's event and a PDF copy of the slides will be available on demand; we'll let you know when by email, hopefully later this week. And of course, we invite you to share the content with your colleagues. Again, thank you for your participation today. This concludes this breakout session. I hope you have a wonderful day. Thank you. Thank you.
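As referenced in the Q&A above, here is a minimal sketch of the kind of projection tuning described: an additional Vertica projection, sorted and encoded for a frequent access pattern, can speed queries up dramatically without any change to the application SQL. The table, columns, and projection name are hypothetical; CREATE PROJECTION and START_REFRESH are standard Vertica SQL, executed here from Python.

```python
# Hypothetical sketch: add a query-optimized projection for per-device,
# time-ordered access to event data, without touching any application query.
import vertica_python

TUNING_PROJECTION = """
CREATE PROJECTION device_events_by_device
(
    device_id ENCODING RLE,
    event_ts,
    severity ENCODING RLE,
    error_code
)
AS SELECT device_id, event_ts, severity, error_code
   FROM device_events
   ORDER BY device_id, event_ts
   SEGMENTED BY HASH(device_id) ALL NODES
"""

def add_projection(conn_info):
    with vertica_python.connect(**conn_info) as connection:
        cursor = connection.cursor()
        cursor.execute(TUNING_PROJECTION)
        # Populate the new projection from data already in the table.
        cursor.execute("SELECT START_REFRESH()")
        connection.commit()
```

In practice, Vertica's Database Designer can also propose projections like this from a sample query workload, which fits the "optimize only when needed, without touching queries" point made in the answer.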