Thank you very much for the introduction. I am Periklis. — Hello, my name is Novti Psharmo. Together we are going to talk about recognizing and trying to prevent human errors in XR training.

First of all, virtual reality training simulations for Industry 4.0 have seen a surge over the last few years, and adoption keeps growing. Why is that? Because virtual reality has become a much better fit for training: headsets are cheaper, lighter, and more comfortable, and the experience is far more immersive, since you have a real sense of presence inside the training simulation. Many large enterprises already train their staff with virtual reality, and it's not far in the future that specialties like doctors or surgeons will be trained with virtual reality as the technology progresses. Augmented reality, on the other hand, has found much more applicability on the job, and in the future augmented reality glasses will become as essential a part of a worker's kit as a hard hat. The reason is that, first of all, they offer hands-free manuals; they offer remote guidance, which makes very good use of the camera that most AR glasses have; and they offer automated guidance, which takes the SOPs and manuals to the next level, making them interactive and much easier to use. Companies have already started using it: it has found very good usage in logistics, as you can see in that photo, but also in aerospace manufacturing, where it guides workers on where to place which part, leading to far fewer errors — and mistakes in such an environment can have a huge impact, both in terms of revenue and in terms of health and safety. What this talk is mostly going to focus on is the prediction of those errors and their prevention.
The intersection between data science and AR/VR is, of course, the data, and the data we have been dealing with is the motion data coming from AR and VR devices, because they double as motion-capture systems. These were the tools we used for our project: an Oculus Rift for virtual reality, which as you can see has both a headset and two controllers, and a Microsoft HoloLens for augmented reality. These systems record the user's movement very accurately — coordinates and angles for the headset, with timestamps, producing trajectories — but also the objects in the XR environment and the interactions the user has with them. A lot of industries still have to train their workforce in really dangerous environments, where any mistake can lead to injury or to the loss of a lot of expensive drugs and liquids. One of the simulations we created at Accenture was BioVR, in which we train people how to work in chemical labs and with chemical liquids. To analyze the data and run the predictions we designed the XR Insights platform, which at its very core is a streaming platform where you can bring in your own AR and VR models, and where data scientists can plug in their machine learning models as well as define the schemas used to consume the data. The types of data we are streaming at the moment are a state stream and an event stream. The state stream carries all the spatial data: the coordinates of the head, the hands, or any of the chemical bottles — all the coordinates you have in the virtual environment. The event stream is the series of activities the user performs within virtual reality: things like moving a hand, picking up an object, picking up a glass. Going into the types of analysis we wanted to perform, there were two major ones. The first is to track how a user has improved over the training period.
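To make the two streams concrete, here is a minimal sketch of what one record of each might look like. The field names are illustrative assumptions, not the actual XR Insights schema:

```python
from dataclasses import dataclass, asdict

# Hypothetical record shapes for the two streams described above.
# Field names are assumptions for illustration only.

@dataclass
class StateSample:
    """One spatial sample: where the headset, a hand, or a tracked object is."""
    session_id: str
    timestamp_ms: int
    object_id: str      # e.g. "head", "left_hand", "bottle_3"
    x: float
    y: float
    z: float
    pitch: float
    yaw: float
    roll: float

@dataclass
class EventRecord:
    """One discrete user action within the simulation."""
    session_id: str
    timestamp_ms: int
    event_type: str     # e.g. "pick_up", "drop", "trigger_left"
    target_object: str

sample = StateSample("s1", 1000, "head", 0.1, 1.6, 0.0, 0.0, 90.0, 0.0)
event = EventRecord("s1", 1250, "pick_up", "glass_beaker")
print(asdict(event))
```

The state stream is high-frequency and continuous, while the event stream is sparse and discrete — which is why the platform keeps them as two separate endpoints.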
Over multiple sessions we can track very accurately, over time, the number of mistakes someone makes in virtual reality while being trained. The second is personalizing the learning experience for each individual, because everyone learns differently: rather than putting generic predictions out there, we build a stateful machine learning model which can learn how a user has behaved in the past and how they are going to perform the next task. So let's go through how we went about achieving this. This is our machine learning pipeline, which is of course divided into real-time processing and offline processing. In real-time processing we have the user immersed in the XR environment, and we collect the data from them as previously described, which is then sent into our real-time processing pipeline. One purpose of this real-time component is to collect the data and send it to our historical database. From there we have data collected from several users, and the first step of the machine learning processing is to extract features from it. We will talk about those features later, but they serve a dual purpose: they give us insights into what the user did during the XR experience, and they are also used as the input for training our predictive analytics model. After we train the model we store it in an S3 bucket, which makes it ready for deployment, connecting it to our real-time processing component. An important point is that the exact same procedure we use to calculate the features offline must also be used to calculate them online. This is why we connect the two through a schema registry.
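The online/offline consistency point can be sketched as a single feature function shared by both paths: the offline trainer runs it over the historical database, and the online scorer runs it over the live stream, so both produce identical values. The function and its feature names are hypothetical:

```python
import math

def hand_speed_features(trajectory):
    """Summarize per-step speeds from (t_ms, x, y, z) samples.

    Sketch of a shared feature function: the same code is called by the
    batch trainer and the real-time scorer, so there is no train/serve skew.
    """
    speeds = []
    for (t0, x0, y0, z0), (t1, x1, y1, z1) in zip(trajectory, trajectory[1:]):
        dist = math.dist((x0, y0, z0), (x1, y1, z1))   # metres moved
        dt = (t1 - t0) / 1000.0                        # seconds elapsed
        speeds.append(dist / dt if dt > 0 else 0.0)
    return {
        "mean_speed": sum(speeds) / len(speeds) if speeds else 0.0,
        "max_speed": max(speeds, default=0.0),
        "n_samples": len(trajectory),
    }

# three samples, 100 ms apart, moving along x
traj = [(0, 0.0, 0.0, 0.0), (100, 0.1, 0.0, 0.0), (200, 0.3, 0.0, 0.0)]
feats = hand_speed_features(traj)
```

Packaging the feature logic as one importable function (rather than duplicating it in two codebases) is the simplest way to honor the "exact same procedure" requirement.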
Once we have the same mechanism for calculating features online and offline, we can produce the features from the user in real time, feed them into our prediction model in real time, and send the prediction back to the user in the XR experience — thereby closing the loop and providing predictions in real time. So let me walk you through the high-level architecture of this solution. The key motivation behind the design was prediction in context and prediction in time. Prediction in context refers to building a stateful machine learning model, where the model can remember, and request point-in-time information about, what a user has done in previous tasks. Prediction in time refers to minimizing the latency between the machine learning model and the visualization of its output on the user's end. The stream flows from the XR devices over state and event endpoints into our WebSocket server, which has an integrated Kafka producer and Kafka consumer. The Kafka producer pushes the data to the Kafka cluster; we do some data transformation to handle the sessions, and we use Kafka Connect to serialize all this data into our master database, which is always append-only — we never change it. This same master database is then used to train the machine learning models, in the way Periklis just explained, and the trained models go to the bucket. Because this is a message-based architecture we can deploy any number of machine learning models: many models can be connected, consuming the same messages, unlike in point-to-point architectures. We also have real-time reporting, for tracking how well a user has been doing, how many mistakes they have made, and also for tracking the predictions themselves.
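The "many models consuming the same messages" property is what a Kafka-style bus gives you for free. This toy in-memory stand-in (not the production code, and not the Kafka API) illustrates the fan-out: every consumer group gets its own copy of each message, so several ML models can score the same stream independently:

```python
from collections import defaultdict

class MiniBus:
    """Toy stand-in for the Kafka cluster, for illustration only.

    Each subscribed group receives every published message, mimicking
    how independent Kafka consumer groups each read the full topic.
    """

    def __init__(self):
        self.handlers = defaultdict(list)   # group name -> handler functions

    def subscribe(self, group, handler):
        self.handlers[group].append(handler)

    def publish(self, message):
        for handlers in self.handlers.values():
            for handler in handlers:
                handler(message)            # fan out one copy per group

bus = MiniBus()
seen_by_model_a, seen_by_model_b = [], []
bus.subscribe("assoc_rules_model", seen_by_model_a.append)
bus.subscribe("correlation_model", seen_by_model_b.append)
bus.publish({"event_type": "pick_up", "target": "beaker"})
```

Adding a third model is just another `subscribe` call — no existing component has to change, which is the decoupling the talk is pointing at.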
We designed a real-time reporting component as well, and all the predictions get consumed by a Kafka consumer, which pushes them back to the XR devices. So this is the whole architecture diagram at a very high level. One of the major components that let us design a truly decoupled system was the schema registry: your devices, your machine learning models, and the infrastructure services don't have to talk to each other directly to learn what the schema of the data is. A central schema registry allows you to define your schema in one central location, and any component that wants to produce or consume data first fetches the schema and then starts producing or consuming. Here's an example. It looks a lot like a JSON structure, but once you define your schema it is backward compatible as well: as your solution grows and you add more objects to the virtual environment, you can evolve the schema, and none of your components have to ask each other what has changed along the way. Right, so: feature engineering, event engineering, and prediction. In our world, we ended up with the decision of having events and features be the exact same thing, and that made our life much simpler. In terms of events, we have the raw events coming from the XR experience — things like the user pressing the left or right trigger, or the left or right grip, or both; picking up an object; dropping an object. Those events, in conjunction with the trajectories, provide the derived events: for example, field-of-view events (whether the user was facing the objects of interest or not), whether their hands were moving towards the objects of interest or away, whether the item they picked up was the correct one, whether they touched the wrong item, collision events, and so on. But what I think are the most interesting ones are the behavioral features.
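An Avro schema for the state stream might look like the following — the record and field names are assumptions, not the schema actually shown on the slide. The detail that makes evolution backward compatible is the `default` on any newly added field, so old consumers keep decoding without ever asking what changed:

```python
import json

# Sketch of an Avro-style schema for the state stream (illustrative names).
# The "default" on the last field is what lets a v1 consumer read v2 data.
state_schema_v2 = json.loads("""
{
  "type": "record",
  "name": "StateSample",
  "namespace": "xr.insights",
  "fields": [
    {"name": "session_id",   "type": "string"},
    {"name": "timestamp_ms", "type": "long"},
    {"name": "object_id",    "type": "string"},
    {"name": "x", "type": "float"},
    {"name": "y", "type": "float"},
    {"name": "z", "type": "float"},
    {"name": "bottle_temp_c", "type": "float", "default": 20.0}
  ]
}
""")

# Only fields added after v1 carry defaults in this sketch.
added_fields = [f["name"] for f in state_schema_v2["fields"] if "default" in f]
print(added_fields)
```

In a real deployment the registry (rather than each component) stores this document, hands out a schema ID per message, and enforces the chosen compatibility mode on every new version.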
In other words: did the user make many small movements or a few large ones; were their hands jittering? These are features, or events, that are calculated not at a single instant but over a period of time. And you can already see that from these events-slash-features you get quite a lot of insight into what the user was doing in the XR environment. But we also want predictions, and the predictions are the exact same events or features — just in the future. Given some events in the past, we want to predict what the user is going to do next. This symmetry, wanting past and future expressed in the same terms, dictated the type of algorithms we went ahead with. We chose, first, statistical correlations, and then association rule learning. The advantages of these algorithms were that they can associate like with like, unlike many machine learning algorithms; they are suitable for a small dataset, which is what we had; and they are very transparent. We wanted a transparent algorithm because associating particular features at one moment in time with particular features at a later moment is a very important insight in itself, and we wanted that in our model. Last but not least, we wanted an algorithm that is flexible, because different events have different importance in different environments: in a pharmaceutical environment it is very damaging to drop an expensive vial of liquid, but not so much to misplace a cable in a manufacturing environment. And as I said, the system is very decoupled, so whether you choose association rules or a more complex deep learning algorithm, you can plug any algorithm into the system as long as you follow the response schema.
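The core of the association-rule idea — "events in the past associated with the same kind of events in the future" — fits in a few lines. This is a minimal sketch of computing one rule's confidence over training sessions, not the actual XR Insights model; the event names are made up:

```python
def rule_confidence(sessions, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent.

    Each session is an ordered list of event/feature names.  The rule fires
    when the antecedent occurs and the consequent follows LATER in the same
    session, so past events only ever predict future ones.
    """
    with_antecedent = 0
    with_both = 0
    for events in sessions:
        if antecedent in events:
            with_antecedent += 1
            # look only at events after the antecedent's first occurrence
            if consequent in events[events.index(antecedent) + 1:]:
                with_both += 1
    return with_both / with_antecedent if with_antecedent else 0.0

sessions = [
    ["hand_jitter", "wrong_item_touch", "drop_object"],
    ["hand_jitter", "drop_object"],
    ["pick_up", "place_correct"],
    ["hand_jitter", "place_correct"],
]
conf = rule_confidence(sessions, "hand_jitter", "drop_object")
# 2 of the 3 sessions containing jitter later dropped an object
```

Note how transparent the result is: the rule "jittery hands → dropped object, confidence 2/3" is a human-readable insight on its own, which is exactly the property the talk says motivated the choice. Per-environment flexibility then comes from weighting rules by the cost of the consequent event.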
The response schema is designed to take the responses from the machine learning model and serve them on the user's end. Here's an example of the response schema, where the highlighted part is the actual text the user can see. This is the raw form; when you actually put the headset on, you see this prediction come in on a virtual dashboard or hear it through audio, and it is very much based on how you have behaved throughout the task. It also supports serving one application to multiple users. So any of the messages we send back to the XR devices can be generated by any of the machine learning models. To finish, I'm just going to go through some of the key learnings. We used Kafka as the backbone of the service, and consumer groups allow you to spin up machine learning models, or instances of machine learning models, within a consumer group for parallel execution and data consumption. The schema registry is a single point for schema changes and versioning: if the VR developer changes the schema and the data scientists don't know, they don't have to worry, because the central schema registry handles it and data consumption goes on as smoothly as before. We used Avro rather than JSON, realizing that Avro saves the data in a binary format: a lot of the metadata that JSON carries does not have to be sent, because in the end any component can use the schema to decode the data. We didn't pursue a very complex machine learning model, because we wanted to build an infrastructure where developers and data scientists can bring their own model and plug it into the system. And to design truly human-machine interaction, we built this event-driven architecture, in which all events — whether generated by a human, or by a machine in the form of predictions or features — are equal.
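A prediction response conforming to such a schema might look like the following. The field names are illustrative assumptions (the real response schema is the one shown on the slide); the `display_text` field plays the role of the highlighted part the user actually sees in-headset:

```python
import json

def make_response(session_id, user_id, model_id, display_text, score):
    """Build one prediction response message (hypothetical field names).

    user_id lets one application serve multiple users; model_id records
    which of the plugged-in models produced this message.
    """
    return {
        "session_id": session_id,
        "user_id": user_id,
        "model_id": model_id,
        "display_text": display_text,   # the text shown to the user
        "confidence": score,
    }

resp = make_response(
    "s1", "u42", "assoc_rules_v1",
    "Careful: slow down your hand before picking up the beaker",
    0.67,
)
payload = json.dumps(resp)   # what the Kafka consumer pushes back to the device
```

Because every model emits the same envelope, the component that renders text in the headset never needs to know which model — or how many models — sit behind the topic.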
An event-driven architecture also allows you to build a stateful machine learning model, where your model has access to every instance that has occurred in the past. And lastly, we have tried an AR application on the same platform, and we think there is a really good possibility to train a machine learning model in virtual reality and then actually guide people through their AR glasses by deploying it there. Thank you.