Good afternoon everyone. After lunch this is going to be an interesting session, but at least we're not talking about security. My name is Paz, I'm one of the specialist solution architects at Red Hat, and I focus on Kubernetes cluster management and automation. I'm Luca Ferrari, I'm Italian and I'm an edge computing specialist; you might have heard that Red Hat has recently been investing in edge quite a bit, and I have a background in integration and API management. So a few years ago I had a call from my father's neighbours. They called to tell me that they heard my dad's little dog barking in the garden, and when they went to find out what the problem was they found my father lying on the patio, unable to get up. He had had a fall and couldn't manage to get himself back up. Fortunately that day they called the ambulance, and by the time I made it there my dad had got the medical attention he needed, but this wasn't the only time this happened. My dad suffered from Parkinson's and Parkinson's-related dementia, so if he didn't have someone with him all the time to remind him of his limitations, he would set off on walks, and sadly sometimes on a cold day the floor would be slippery, he would have a fall, and he couldn't get himself back up. Sadly, as a lot of you are probably aware, this story is not unique. In fact, according to the National Safety Council in the US, a big chunk of the preventable injuries, and sadly injury-related deaths, that happen at home are caused by falls, and these falls are very common amongst the elderly. We're facing an ageing population, so this problem is only going to get worse and worse. If the elderly don't get the care they need at home, they're very likely to have a fall, they might neglect themselves, and they end up in A&E in the hospital, and that means pressure on health and social care. So we're looking into how we can use technology to help with this problem.
The reason that we are looking into assistive care today is that hospitals are already overwhelmed with a lot of day-to-day problems. They have a lot of legacy systems and devices that they need to integrate and maintain. The medical equipment produces a large amount of unstructured data that needs to be managed, and they need to be compliant with industry standards. Patients are expecting to be able to look at their data online, and on top of this the hospitals don't have big IT budgets or IT teams to resolve these problems. That's why we're trying to see how we can use open source to relieve at least the elderly and assisted-living part of this burden from the hospitals. Yeah, so given the challenges we've seen, there are several advantages of using an edge computing approach when it comes to assistive care, and of using open source at all layers. First of all, in terms of Red Hat projects and products, you can see that we started adopting at the edge the same platform that we adopted at the core, which is basically Kubernetes, and that brings manageability advantages: the team doesn't have to learn a new tool or a new technology. Then there is security by default at all levels, so through tools like ACS, Advanced Cluster Security, you can implement policies that automatically secure new clusters, or in the case of assistive care, secure new homes. Then there is a whole partner ecosystem; I don't have to explain it to you, but the power of community here is pretty strong, so we can for example connect to legacy protocols through libraries developed by the community, and we have a whole set of partner products that can be deployed either at the edge or at the core.
And then eventually there is manageability and scale. When you think about an edge architecture, what comes to mind is actually day two: thinking about deploying the second, third, or eventually the hundredth cluster after the first one. Deploying and managing the first one is quite easy, but what about managing at scale? That's where tools like Advanced Cluster Management help you. So we came up with a reference architecture for the specific use case of assistive care, and we looked into the literature on the types of sensors and scenarios that can help detect a possible fall at home, let's just say. There is a full mix of technologies here, as you can see, quite a recipe. I'm pretty sure you're familiar with Raspberry Pi and Arduino if you've been working a little bit with home automation, so I will not explain a lot about those. We use two different messaging technologies. You might have heard of AMQ Broker, Artemis as a project: that's a store-and-forward broker, and we used it for MQTT messages, but it's a multi-protocol broker. Then we use AMQ Streams, which is actually Red Hat's packaging of Kafka (you might know the project as Strimzi), and we use it for event streaming scenarios, when you want to process data at the core. I don't think I need to explain anything about OpenShift.
Paz will be able to answer any question on ACM after the session. We use Camel with Quarkus; there are also quite a number of talks today about Quarkus, and there are now extensions to run Camel integrations on top of the Quarkus runtime, to make integration even lighter. This is for all the scenarios where you want to integrate legacy or industrial protocols at the edge, for example. Then for the monitoring and presentation part we use Grafana and TimescaleDB. TimescaleDB is a time-series database that's included as part of Drogue, and Grafana allows you to build custom dashboards, so in case the hospital wants current-state monitoring of the patient's situation, they can do that. Then we use Ansible for all the event-driven automation cases that Paz is going to explain in a bit, and I'll deep dive a little more into Edge Impulse and Drogue. Edge Impulse is what it says; I just call it a studio, an online studio for MLOps. If you're familiar with OpenShift Data Science, it's not that dissimilar; the difference is that it's highly focused on machine learning at the edge, on very low-resource devices. That's especially important because a lot of these sensors and platforms are not really single-node OpenShift class, they're not really beefy, so if you want to run a machine learning model on something like a Raspberry Pi, it's a really good tool. The other interesting element you can see here is that you can start building your model directly on your smartphone and then deploy it as a test case on your smartphone, either Android or Apple. This instead is Drogue; this is a Red Hat project you might want to take a look at if you're interested in processing data coming from IoT environments. There is a whole set of layers; in this case the diagram was more focused on an automotive use case, so you can see the car there, but basically there are two main concepts, the devices and the applications. The devices are at the
left hand side; they are basically all your sensors and actuators. Then you have the applications, which are what the end user might use or develop, and typically several devices are associated with one application. You have an ingestion point through the endpoints on the left; Drogue supports HTTP, CoAP and MQTT. Then eventually there's a data streaming component through AMQ Streams, so you see Kafka there. There is a whole set of authentication capabilities, for both devices and applications, with Keycloak, and you also have device management and device registration functionalities. Eventually the data, which can be processed, filtered, and even run through a basic rule engine, is exposed through the integrations on the right hand side, where you can see WebSockets, MQTT, Kafka and even serverless events. So this is the architecture we came up with. As I was explaining before, we used AMQ Broker to ingest the events coming through MQTT communication from the sensors. Let me just switch to the actual data flow; oh yeah, it is animated. The messages are eventually stored in the event streaming part of Drogue. Drogue will apply some filtering, so for example it will not activate any alert in specific scenarios, and it will also store all the events in TimescaleDB for historic purposes, maybe for later exploration. Then Ansible will get triggered in specific situations through events on a specific topic on AMQ Streams, and eventually Ansible will trigger a call, or in our case a message to Telegram. So the idea here is that, given specific scenarios that we'll explain soon, a nurse will be alerted that there is a patient to be visited, and she will travel to the assisted-living house.
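To make the ingestion side of this flow concrete, here is a minimal Python sketch of what one sensor event published over MQTT might look like. The topic layout, broker address and payload fields are our own illustrative assumptions, not the exact ones from the demo; paho-mqtt is just a commonly used client library.

```python
import json
import time

# Hypothetical topic layout -- the real broker addresses in the demo may differ.
TOPIC_TEMPLATE = "assistive/{house_id}/{sensor}"

def make_event(house_id: str, sensor: str, value, alert: bool = False):
    """Build the MQTT topic and JSON payload for one sensor reading."""
    topic = TOPIC_TEMPLATE.format(house_id=house_id, sensor=sensor)
    payload = json.dumps({
        "sensor": sensor,
        "value": value,
        "alert": alert,
        "ts": int(time.time()),
    })
    return topic, payload

if __name__ == "__main__":
    # Publishing requires a reachable broker; paho-mqtt is the usual client.
    import paho.mqtt.client as mqtt
    topic, payload = make_event("house-42", "fall-detector", 0.97, alert=True)
    client = mqtt.Client()
    client.tls_set()                       # TLS, as in the demo setup
    client.connect("broker.example.com", 8883)
    client.publish(topic, payload, qos=1)
    client.disconnect()
```

On the broker side, an event like this would land on the MQTT address, get forwarded into the Kafka topic, and from there trigger the filtering and alerting described above.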
Right, so the use cases we've tried to implement using the technologies you just saw are: one, the fall detection alarm as a wearable; two, a fridge usage monitor, essentially just monitoring whether the fridge door is open or closed and then deciding later on what we need to do; and also the scenario of combining these two through sensor fusion. Who has heard of the concept of sensor fusion before? Okay, so sensor fusion is just the process of combining the data that we get from different sensors, or from disparate sources of information, in order to get a better understanding or a clearer picture of the whole situation. At a high level, that's the explanation: it's just combining the data that we get from our sensors. We divided the use cases into two parts: I worked on the fridge usage monitor and Luca worked on the fall detection alarm. For the fridge usage monitor, what we used was an Arduino Uno WiFi with a reed switch connected to it. The reed switch is just an electromagnetic switch: when the magnets are close to each other you get the higher voltage, so based on whether the voltage is high or low you can detect whether the door is open or closed. I started implementing the code on the Arduino Uno WiFi, which is this little board here. It worked really well to begin with and gave me a false sense of security, but as I went along and tried to implement SSL, it took me a long time, and then I realized the libraries I was using on this Arduino do not support SSL properly. So I switched over to this ESP-based WeMos board, and I managed to get SSL working on that, which was a good success. Just to show you a little bit of the code snippet from the Arduino: as you can see, we're just reading the data that comes through the serial port, and we're sending that information using
the MQTT client available in the Arduino libraries to our MQTT broker. In case anyone hasn't come across MQTT, it's just a lightweight network protocol for publish-subscribe messaging. The other use case was based on an interesting scenario and device. The device you see on the left hand side is this small thing; interestingly enough, it can run a very basic TensorFlow Lite model, and you can deploy it using Edge Impulse, as I was saying before. Then I basically aggregate all the measurements through Bluetooth on a Raspberry Pi, and this then communicates back to the AMQ Broker as well. I'm not going to show you any C++ code because I'm not really proficient in C++, but basically the way Edge Impulse works, and I'm going to explain it afterwards, is that you generate and train the model and then you can export it in several ways. One of them is an agnostic model, which is just C++, meaning you can deploy it almost anywhere, and as you can see there's the structure: the model parameters, the SDK, which is the runtime, and then the actual TensorFlow Lite model. The way it works, as I was saying before, if you've been experimenting with OpenShift Data Science or Open Data Hub, which I think is the upstream: it's just a standard MLOps platform, so you get to design the model using several libraries, you get to train it and collect new datasets, you get to test the accuracy of the model, and eventually deploy it on several edge targets. This is an example of the interface. As you can see here, in our case we were interested in measuring the acceleration in the X, Y and Z axes; that's one thing the Nano can do, and there are other sensor packages on the Nano. Based on this variation, the model basically identifies whether this is somebody falling or not.
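To give a feel for what the classifier is looking at, here is a deliberately crude, non-ML stand-in in Python: it flags a fall when a strong acceleration spike is followed by near-stillness at roughly 1 g. The thresholds and window sizes are our own illustrative guesses; the real demo uses a trained TensorFlow Lite model exported from Edge Impulse, not this heuristic.

```python
import math

G = 9.81  # gravity, m/s^2

def looks_like_fall(samples, impact_factor=2.5, still_window=5, still_tol=1.0):
    """Very naive stand-in for the trained model, for illustration only:
    flag a fall when an impact spike is followed by lying still.
    `samples` is a list of (x, y, z) accelerations in m/s^2."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, m in enumerate(mags):
        if m > impact_factor * G:                         # impact spike
            after = mags[i + 1:i + 1 + still_window]
            if len(after) == still_window and all(
                    abs(a - G) < still_tol for a in after):  # near-still at ~1 g
                return True
    return False
```

A rule like this breaks down quickly in practice (sitting down hard looks like a fall, a slow slump doesn't trip the spike), which is exactly why a model trained on labelled accelerometer traces is the better approach.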
Eventually you get to deploy on several tiny platforms; as you can see there are several Arduino platforms supported, but there are other certified platforms for Edge Impulse too. There is an alternative to this tool which is not online but completely on-premise, which is TinyML, so if you want to explore that instead of the cloud approach, it's also worth it. This was basically to recap a little: we were up to the sensor layer, and eventually we decided to introduce AMQ Broker to store the messages for persistence. Somebody might ask why the sensors are not directly communicating with the cloud OpenShift platform: it's for persistence and resiliency reasons. We configured AMQ Broker so that, among the several protocols it supports, it exposed MQTT, and as you can see we created a queue and an address related to this type of traffic. You can see an example of messages: you can browse the queue and see the content of the messages. Then I also configured Drogue: I created an application corresponding to the end-user application, in this case just a notifier application, and this notifier application had several sensors associated with it. As you can see, I can also create an IoT gateway in Drogue, and you can create a hierarchy between the actual IoT gateway and the other sensors. Okay, so far we've seen how we get the data from our sensors into AMQ Broker and redirected to Drogue, and as part of Drogue, as you can see, there's an AMQ Streams section that comes with it. Now the notification part involves the data from AMQ Streams, or Kafka, and Ansible EDA, or Event-Driven Ansible. What you see here is an example of event-driven automation using Ansible. The main component is called the rulebook.
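A hypothetical rulebook of this shape might look like the following; the Kafka host, topic, condition and playbook name here are illustrative stand-ins, not the exact ones from the demo.

```yaml
# Illustrative Event-Driven Ansible rulebook; names are assumptions.
- name: Assistive care alerts
  hosts: all
  sources:
    - ansible.eda.kafka:
        host: amq-streams-bootstrap.example.com
        port: 9092
        topic: sensor-events
  rules:
    - name: Notify carer when a fall is detected
      condition: event.body.sensor == "fall-detector" and event.body.alert == true
      action:
        run_playbook:
          name: playbooks/notify_telegram.yml
```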
Within the rulebook we've got two sections: the sources section, for which there are different source plugins, in this case Kafka; and the rules section, where based on a condition an action is triggered. As you can see it's quite simple, the usual Ansible style: we're just saying that if the condition that a door closing is detected comes in from Kafka, trigger a playbook. Here is the playbook in the Automation Controller; as you can see, that playbook is just running in there, and the playbook is pretty simple: it uses the Telegram collection to send the notification. Eventually, when that is done, the carer or the family member or the nurse will get the notification message on their phone, saying whether a fall is detected or what's happening with the fridge door and so forth. So this is what we've done so far with these two use cases. It's still work in progress; we're still working on improving them, and we're hoping to start introducing more use cases to this POC. You may have noticed we haven't talked about the Camel Quarkus part of the architecture; that's because we haven't implemented it yet, but we will. We're looking into implementing some sort of local, level-one alerting using Camel. And we haven't talked about RHACM because we've been focusing on the application side of this POC and what you can actually do with it; RHACM takes care of the infrastructure side. We have used ACM to provision our single-node OpenShift cluster and also to implement the configuration and applications. Most of you are familiar with this if you've worked with RHACM; those are just the clusters, as you can see there. To conclude with some lessons learned: you might have already been fighting with certificates; TLS certificates and DNS are usually the culprits when it comes to development and integration issues. Also, the demo I was showing with the Nano is still powered through a USB cable;
even if you have a really long one, you might actually think about adding batteries so that somebody can wear it. Also, it was personally quite hard to develop this joint project since we're not in the same country, and we are also not really developers, so we had a tough time using remote calls for pair programming, I'd say. Also, we were showing single-node OpenShift on AWS; my initial idea was to bring an industrial-grade server to the stage, but it's really hard to connect to both power and network here. We were also trying to have Paz connect to my homelab for some tests, but VPN access over the internet is not as easy as it looks. And eventually, I don't know if you have been experimenting with pod security in the recent versions of OpenShift, but it can be quite tough when you try to run something. Just to remind you what you can explore in terms of initiatives inside Red Hat: there's a healthcare validated pattern. I don't know if you've heard about validated patterns; this one is dedicated to analyzing medical images and providing better diagnoses to doctors, all done through OpenShift Data Science, which I mentioned before. Basically, a validated pattern is really just POC code that you can re-execute; it's stored in GitHub, so you can contribute, and it's designed by the industry and by use case. So yeah, just as a recap of the whole thing: we're putting this effort into using open source for assistive care, so hopefully when it's our time to retire, we will have a happy and healthy retirement. Question? Yeah, sure. Yes, the idea is to run it there; as you probably saw, there were a couple of sessions on this. You can even run MicroShift if you have limited hardware resources, and given the workload we are working with, it will run on MicroShift as well.
So, if I understood correctly, the question is where we get the data to train the model, the fall-detection model, to be precise, right? So actually there is quite a dataset already available; I didn't have to train it myself, but with Edge Impulse you can even train it yourself and add your own data to the model quite easily. We just wanted to incorporate as many Red Hat products as we could, given that the title of the talk is about using Red Hat technologies, so that was one of the main reasons behind using EDA. Yeah, I guess the advantage is also that you can automate other stuff, so you can leverage this for future use cases as well. Yeah, so actually the output is CloudEvents, which is a standard, so you can trigger things on AWS, I guess, or other platforms.