And in this session, they will take an in-depth look at a digital manufacturing case study. So I will hand over to the two of you, and a warm virtual welcome.

Okay, thanks so much, Steve, and thanks to the entire group for giving us this opportunity to present today. I know we have a very strict schedule, so we'll get started. Hello, everyone. Steve has already introduced us. My name is Rajeshri Shridha, and I have my co-speaker, Yashovedan, with me today. We also have Lukas Bern, one of our colleagues who has worked very closely on this case study. Keeping in mind this year's Open Group Digital-First theme, we bring you a case study of digital in practice on the manufacturing shop floor, under the umbrella of digital manufacturing. We'll share our experience today of how to architect a solution for a collaborative visual quality application, leveraging new-age technology advances in the area of computer vision. Computer vision is all about making computers see and interpret the world as we humans do, and it leverages artificial intelligence and machine learning algorithms to process industrial images. And why the word "collaborative" here? Because it's not just about machine vision; the emphasis is on collaboration between human and machine, and on machine-to-machine communication, to bring meaningful insights to the table. With that context, we get started with our presentation for today.

So why this topic? We have been hearing about Industry 4.0 for quite some time, but if you actually go onto a manufacturing shop floor, we are still not there yet. Even where something has been done, it is only a partial achievement. And why is that so? It's because of the ever-increasing complexity of production processes, which has grown manifold over the years. Even if you want to do a digital transformation, it's very difficult in our already existing brownfield installations.
Plus, even if you want to integrate a new solution into an existing production line, it's very complex and time-consuming. For example, even installing a simple camera might take you weeks or months. And of course, there is the ever-increasing pressure to significantly improve production efficiency. If you look at today's corona world, it becomes even more evident and pertinent to make sure that our shop floors are quickly adaptable, resilient to these changes, and very agile.

So how do we make sure our plants are really agile and react to things very fast? One of the most important measures of a plant's effectiveness is OEE, overall equipment effectiveness. This is one of the key KPIs for measuring a plant's performance. And how do you calculate it? It is simply the product of availability multiplied by performance multiplied by quality. While we have various solutions addressing each of these areas, today's discussion will mainly focus on adaptive quality control.

So what do we mean by that, when we talk about a collaborative visual quality application? We start with a simple example of how the setup looks on a real shop floor. Here in the plant there is an engine attached to an inspection element, and it requires a visual trigger to automatically start capturing images and scenes that can be analyzed later. At the bottom you can see a conveyor belt on which the engine is transported, with RFID readers attached to it, and all that information is passed on to the PLC gateway. On the right-hand side you can see the many pieces of equipment attached on this plant shop floor. So it's not just about installing a camera; it's about how you attach it, and what compute device is going to process all the data it captures.
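The OEE product described above is straightforward to sketch in code. The following is a minimal illustration; the function name and the sample values are hypothetical, not figures from the presentation:

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness: the product of the three factors,
    each expressed as a fraction between 0 and 1."""
    for factor in (availability, performance, quality):
        if not 0.0 <= factor <= 1.0:
            raise ValueError("each OEE factor must be between 0 and 1")
    return availability * performance * quality

# Example: 90% availability, 95% performance, 98% quality
print(f"{oee(0.90, 0.95, 0.98):.3f}")  # -> 0.838
```

Note that because OEE is a product, a weakness in any single factor pulls the whole KPI down, which is why a quality-focused solution like the one presented here moves the needle on the plant-level number.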
It's also about the PLC gateway, which captures all the information from the traditional conveyor belt. This PLC gateway provides the important linkage that marries the old way of collecting information with the new computer vision solution we are going to deploy. It takes the data from both systems and passes it on to the other business IT systems on the floor, for two purposes: either you give results back so those systems can make decisions, or you receive inputs and actions from the business IT systems to act on back on the shop floor. So that is a small view of how this whole inspection case looks. Let's move on to the next slide.

While this seems fine, with all the hardware involved and the whole integration working, it doesn't end there. If you really want to take the solution to production, there are a lot of other obstacles and challenges to address. The very first: how quickly can you take it to production? We were successful in installing a camera, training the whole solution, and deploying the models in two days, but there is still scope to improve further, for instance by simply installing cameras, capturing all the images, and doing the processing later. Plus, how do you make sure your solution is global, not tied to a local provider or a local setup, but truly global? Another important challenge: how do you make sure this computer vision solution is resilient to change? The same solution should work perfectly well even when there are changes in the environment and process variations. For example, for night-shift workers, when there is not enough light, the results should still be right for a worker to interpret. So it's very important that your solution is resilient to the changes happening around it.
The next important point, and this has always been a concern: even today, about 90% of the work on the assembly floor is done by factory workers, and now we are introducing a computer system that can also take decisions. It should be collaborative, working side by side; it should not be a threat to the worker, who may fear that tomorrow it will replace their job. How you handle these delicacies is also very important. And finally, the most important thing: how do you design your solution to focus on the anomalies, the abnormal scenarios, rather than on all the kinds of errors that can happen? That way you don't have to train a model on a huge number of error cases, which could become quite tricky and complicated.

So how do you go about this journey on the shop floor? Depending on which phase of the whole Industry 4.0 automation journey a manufacturing shop floor is in, we can start at different levels. Starting from the right-hand side, at the very first level: if a shop floor already has a camera attached, but it's not performing at the optimum level and a lot of errors are slipping through, we can quickly integrate new algorithms into the existing solution, do the analytics, and make sure performance is better than the existing system's. This is something you can do very quickly. The second step is installing new cameras at new spots, especially where things are more error-prone, and quickly starting to capture images and do the analytics. And the last part, which is of most interest to us, is all about collaborative vision: how do you quickly analyze and process these images, and if you see an anomaly or something that's not right, immediately inform the worker, so the worker can take a decision then and there? It's about how early you can detect a problem before it goes downstream and causes downtime.
Since the whole solution builds on this word "collaboration", and it is also in the title, what do we mean by it? This is a delicate topic about human-machine collaboration, and we can think of it in psychological terms as the PCA cycle: perception, cognition, and action. We as humans perceive the world and take actions, and cognition is what sits in between and does all the magic of comprehending what is going on around us. Humans have mastered this over the years with brains that bring experience, knowledge, and understanding. How do we achieve something similar in machines? We already have; computer vision is already quite successful there. But the idea here is how we make humans and machines collaborate with each other. We need to make sure that the action of a machine fits well into the perception of a human, and vice versa, that the action of a human can be perceived well by the machine. If they can handle this seamlessly, it can make our shop floors even smarter, and that is what gets us to the digital factories and smart factories everyone talks about.

We have already discussed how a visual inspection plant looks, and what kind of collaboration is involved between a human and a machine to take a decision. But to actually get the solution production-ready, a lot of other building blocks are required, all working in tandem to make the solution work end to end. It's not just about computer vision identifying the images and doing the classification modeling; it's also about how you trigger a camera automatically to do this work, how you trigger the shop floor, what you need in order to calculate a result, and, once it's deployed, how you monitor it and know what's going on with the solution.
So a lot of other aspects of deploying a solution end to end also need to be thought through. The more challenging work here is the whole integration, which has to be contextualized within the problem statement, and then has to deliver usable, meaningful results that a shop floor can act on.

Okay, if I go into a little deep dive of what is involved to make the plant floor smarter and improve the OEE: the whole thing starts with the PLC trigger, which captures a lot of events. The shop floor connector processes all the events, extracts the required data, and hands it to the next processing block, the image processing. This image-processing block doesn't handle just one image but several images, to make sure it can come up with a good result. It then passes them on to the actual evaluation of those images; this is where the core AI and the different models run to identify the objects correctly. In the next step, scene evaluation, you recreate a complete scene from the relevant images. And the final step, the calculation step, is very important: it's not just about producing a result, but also evaluating the result against various elements, such as the confidence level and the score. Is it really optimal? Does the result really make sense? Only once the confidence level is high and the result looks good do you publish it back to other systems. That could be back to the PLC itself to take some actions, or to other systems on the shop floor, like the other business systems or quality management systems. So that is the life cycle of how we handle our image processing end to end.

Now I'll hand over to Yashovedan, who will deep-dive into what is involved on the IT side to implement this whole solution. Thank you very much, Rajeshri. Am I audible? Yes.
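The end-to-end life cycle just described (PLC trigger, shop floor connector, image processing, model evaluation, scene evaluation, result calculation, publish) can be sketched roughly as follows. This is a minimal illustration only; the function names, the stub callables, and the 0.9 confidence threshold are hypothetical, not details of the actual solution:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    label: str
    confidence: float

def process_event(event: dict,
                  capture: Callable[[dict], List[bytes]],
                  classify: Callable[[bytes], Detection],
                  publish: Callable[[Detection], None],
                  threshold: float = 0.9) -> bool:
    """One pass through the inspection life cycle.
    Returns True when the result was confident enough to publish."""
    # Shop floor connector: the PLC event triggers capture of several frames,
    # not just one, so the scene can be reconstructed later.
    images = capture(event)
    # Model evaluation: run the classifier on each frame.
    detections = [classify(img) for img in images]
    # Scene evaluation: keep the most confident detection across the frames.
    best = max(detections, key=lambda d: d.confidence)
    # Result calculation: publish only when the confidence gate passes,
    # e.g. back to the PLC or to a quality management system.
    if best.confidence >= threshold:
        publish(best)
        return True
    return False
```

The point of the final gate is exactly what the talk stresses: a low-confidence result is withheld rather than pushed downstream.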
As Rajeshri said earlier, we have seen a rapid pace of artificial intelligence adoption among manufacturing organizations. There is a pressing need for the business to act faster, to reduce equipment failure rates, to come up with better outcomes, and to reduce cost. Prime examples where AI intervention is required are cases where you are trying to fully automate a step and reduce its error rate significantly: assembly of the correct parts, identification of defective sealing shims within an engine, identification of missing seals, which can cause an implosion, or broken or missing pins in a direct injection module. These are examples which, if not caught very early in the life cycle, can cause severe problems with the machine itself. So organizations are looking at collaborative vision solutions that can generate accurate and meaningful insights to meet business trends and customer demands.

As Rajeshri said, the solution utilizes artificial intelligence to process and analyze digital images and videos. It is based on deep learning techniques, where you perceive the information contained within images or videos in a similar way to how a human sees, but computer vision can also utilize additional inputs outside a human's audiovisual capabilities. Deep learning is an approach that organizations have been trying with great success; it uses example-based learning and neural networks to identify the mistakes that could happen or the errors that could come our way. The models analyze, locate, and classify objects so that they can flag any abnormality in the way the products are being manufactured. So organizations are employing an automation strategy that leads to efficiency improvements in their internal processes, including their complex inspection processes.
So we have proposed this architecture, and we have successfully implemented it at a number of customers in the manufacturing and automotive industries. It consists of three primary logical layers: the periphery layer, the core layer, and the infra layer. All the layers conform to the principle of interoperability as well as an open standards architecture. The core engine you see in the diagram is powered by an ensemble of cutting-edge deep learning technologies trained with carefully augmented data sets. It contains open-source algorithms, proprietary algorithms, and sometimes also fine-tuned variants, chained algorithm logic, and other techniques which help us get to results very quickly. It interprets the events in the video, visualizes the findings, and then automates the response to an event. We use a highly scalable and accessible interface to the PLC, which is the output point of the architecture where you can trigger actions from the intelligent insights derived by the platform.

The solution will automatically index and store video fragments as it ingests data from a live source, and it learns from predefined classes of objects such as equipment, spare parts, or additional machinery. The system can be configured to raise alerts if a desired detection or tracking pattern is not matched, so it can identify abnormalities. Typically, any architecture using computer vision will require data ingestion, data preprocessing, model training, transfer learning and feature engineering, and sometimes also hyperparameter tuning and model evaluation. The data ingestion layer here captures data from multiple input sources, as you can see on the left side: a camera which can supply streaming images or streaming video, external image sources, and also the PLC inputs.
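The alerting behaviour described above, raising an alert whenever the expected detection or tracking pattern is not matched, could be sketched like this. The function name and the label sets are hypothetical illustrations, not the actual configuration schema of the platform:

```python
def check_pattern(detected_labels: set, expected_labels: set) -> list:
    """Return one alert message for every expected object class that was
    not detected in the current scene (an unmatched tracking pattern)."""
    missing = expected_labels - detected_labels
    return [f"ALERT: expected object '{label}' not detected"
            for label in sorted(missing)]

# Example: the seal was not found in the scene, so one alert is raised.
alerts = check_pattern({"engine", "pin"}, {"engine", "pin", "seal"})
```

An empty list means the scene matched the configured pattern and no abnormality was flagged.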
We have used a methodology consistent with industry standards such as the CRISP-DM model; all model development, model deployment, and recalibration are aligned to CRISP-DM, for example. Performance metrics are also captured in the system. Depending on the use case and the models built, we use the machine learning services you can see in the top part of the diagram: there is an object detection pipeline, and there is a model management system. The model management system can also look at the metrics being generated in real time and feed them into the video and enrichment objects, so it can automatically retune the model on a periodic basis. The trigger can be event-based or time-based, such as a monthly tuning of the model. The events coming in from the multiple image sources also help facilitate model retraining and retuning. For example, if there is a seal fault, then based on the location, the material used, the composition, and the pressure, the next time around the system can tune the machine learning model much more quickly and also update the training data.

There are several algorithms that can be used here: AlexNet, GoogLeNet, VGG, ResNet, or other CNN models. We have tried multiple algorithms, and we are also trying to take advantage of specialized hardware that can be used for computer vision applications, like graphics processing units, or GPUs.

The backbone of the communication and the various connectors in our architecture is OPC, Open Platform Communications, which is like a language for communication between different machines in industry. There are two models; we have used the UA model, the OPC Unified Architecture, which is a cross-platform protocol for machine-to-machine communication.
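The retuning triggers mentioned above, either event-based or periodic (for example, monthly), could look roughly like the following. This is a sketch only; the function name, the 30-day interval, and the fault threshold are assumptions, not details from the actual model management system:

```python
from datetime import datetime, timedelta

def should_retune(last_tuned: datetime,
                  now: datetime,
                  fault_events: int,
                  interval: timedelta = timedelta(days=30),
                  fault_threshold: int = 10) -> bool:
    """Retune the model when either trigger fires:
    - time-based: more than `interval` has passed since the last tuning, or
    - event-based: enough fault events (e.g. seal faults) have accumulated."""
    time_trigger = now - last_tuned >= interval
    event_trigger = fault_events >= fault_threshold
    return time_trigger or event_trigger
```

In practice the event trigger is what lets the system react quickly to a cluster of, say, seal faults instead of waiting for the monthly cycle.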
Sometimes there is also a requirement to translate data coming in from one system for use by another system. The platform connector we use gives the ability to log to different open formats, connect to relational databases, and send notifications by SMS, email, or voicemail, and it also provides a data historian as well as alarms and events. So this becomes quite an open architecture. It provides the benefit of a single solution from the embedded level up to the enterprise, and it builds on the existing investments organizations may already have in their OPC and COM models. It is also cross-platform, it is internet- and firewall-friendly, and it has a standardized security model, so security concerns can be addressed. There is also STOMP, an interoperable wire format which can communicate with any message broker; that helps us connect to MQTT, Apache ActiveMQ, or ZeroMQ for communication with further downstream systems. Please move to the next slide; I'll try to speed up a little bit.

We did an extensive comparison, based on weighted scoring, between machine vision and computer vision. Machine vision is typically a camera mounted at a manufacturing plant, while computer vision is PC-based or processor-based processing, where you can take an image in dim light and enhance it before using it. You can do enrichment, aggregation, and merging of images, identify pixel boundaries, and so on. We prefer computer vision, which is more of a machine learning approach.

Coming to the end: we see that AI is ready for productionization. There are problems which cannot be solved with traditional technologies. But we typically advise a customer to start with a feasibility study to see what can be reused and where the synergies with legacy systems are, and then they can invest in the adoption of machine learning or deep learning models.
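A weighted-scoring comparison like the one mentioned above can be expressed in a few lines. The criteria, weights, and per-option scores below are made-up placeholders for illustration, not the actual figures from our evaluation:

```python
def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted average of per-criterion scores (weights need not sum to 1)."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_weight

# Hypothetical criteria and weights for comparing the two options (1-5 scale).
weights = {"flexibility": 3, "low-light handling": 2, "integration effort": 1}
machine_vision  = {"flexibility": 2, "low-light handling": 2, "integration effort": 4}
computer_vision = {"flexibility": 5, "low-light handling": 4, "integration effort": 3}

mv = weighted_score(machine_vision, weights)   # (2*3 + 2*2 + 4*1) / 6
cv = weighted_score(computer_vision, weights)  # (5*3 + 4*2 + 3*1) / 6
```

Whichever option scores higher under the agreed weights wins the comparison; the weights encode which criteria matter most to the plant.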
And this will help them move to a much higher-grade capability: moving away from interpreting pixels to analyzing shapes, turning the camera on and off to get more accurate images, making it plug-and-play, and improving production in a way that leads to more cost savings, with continuous improvement also incorporated into the model. This reflects the organization's experience in multiple areas of manufacturing, right from the foundry all the way to logistics; we have implemented solutions across these areas and are continuously taking opportunities to improve our models and the overall offering so that it solves real industry problems. Thank you very much.

Thank you. Thank you both very much for that presentation, and yes, it is time for Q&A; you're perfectly on time. We've had some questions coming in, and they're being answered in the chat, possibly by a colleague of yours, but let me pick up on one of those. The second part of it was: are you considering 5G? And the answer given was: the first plants are going to 5G now, but it's still early days, in contrast to the marketing buzz. That's something we hear a lot of talk about, 5G, but real implementations at this stage are few and far between in terms of demonstrable benefit. Can you speak a little more about what benefits you would see from 5G in this scenario?

Okay, this is Lukas now. We see a benefit if we have to attach cameras to moving vehicles. In many applications we use fixed cameras, for instance looking at the assembly line. But if you move cameras onto, for instance, AGVs, autonomous guided vehicles, then you don't have the computing power on the device, and then 5G makes absolute sense: you just have a smart camera, and everything else is done in the background. Right. Right. Okay.
Thank you, Lukas, and it was you answering in the background, so we can say those are the official answers. So Rajeshri and Yashovedan, you're both architects, I know, senior ones, but a question that we got a lot is: where does architecture, enterprise architecture specifically, fit into the transformation activities you're involved in? Where does architecture fit in with your customers? Is there a typical role, or does it vary?

So we see that customers are typically looking at multiple large, complex, multi-tower projects or programs. They also need to work jointly on the harmonization of initiatives within the organization, where there are large consumers downstream as well as legacy systems feeding into a large program. There we see the need for enterprise architects all the more, because you need a holistic view of how the data models will evolve, what the data stewardship and data governance will be, what the storage and network infrastructure will be, and what the software architecture will be, and of how it scales to the demands of the technology. For example, I come from big data, where the technology changes every three months. How do you create a scalable architecture which will be reasonably future-proof for maybe the next three years, but not much beyond that, because you don't know where the technology will go? That is where we see the dynamic for enterprise architects.

Okay, and I can see Lukas is answering some other questions, so let me get to this one before he does. How can machine-to-machine communication achieve better autonomy on the shop floor, especially in manufacturing and the process industry?

So I think it's not only about communication; it's about collaboration.
If you want to achieve this, one machine has to make sense of another machine, and this is where we really see a lot of benefit in using OPC UA. It's really about communicating what a machine is capable of doing, so the machines can directly find out what another machine can do and what it cannot. And if you are in a hazardous environment, you should know what is doable and what is not. So collaboration is the key, not only connectivity.

Okay, thank you. Well, thank all three of you very much: Lukas for answering those questions and being put on the spot, and Rajeshri and Yashovedan, thank you very much for your insight, and a virtual round of applause from us.