Hello, everybody. Anthony Bartolo, Senior Cloud Advocate at Microsoft, and thank you for joining me today. This is one of my favorite sessions to deliver: applying AI and IoT to save lives, and how a team at Microsoft made drones self-aware. What I love most about this is that we're going to walk through real-world scenarios and implementations of machine learning, cognitive services, and IoT. We're not going to split this into "this is what developers do, this is what IT pros do." We're going to talk about what we do together as an organization, whether that's a small systems integrator, a development shop, or an organization trying to make heads or tails of the technology and services out there, and how they address opportunities or problems with technology. We'll show real-world examples and implementations, and we'll share source code, or at least the steps to replicate a lot of the demos you're going to see today. So let's get started.

All right, apologies. Let's get started with the presentation. Before we jump into the technologies that are available, I want to start with how we got to this point in terms of the services available for us to implement. This has been a journey of many years. Previously, computing was a destination. You would go to your den or your computer room or your office, and you would have a computer sitting on a desktop. It may or may not have been connected to the internet or, at that time, to BBSs or forums that you would dial into over a telephone modem. My first computer, bought off the shelf at a local retailer here in Canada, was a Commodore 64, connected over a telephone line. Gaming was pixelated and not very interactive; if you did have multiplayer, it meant two joysticks connected to the same machine. Gaming on phones was nothing like it is today. I remember playing Snake on my Nokia device, and that was the extent of the graphics. And in terms of communication, there was no real correlation between communication and connectivity at that time, other than dial-up through a 9600 baud modem. When the mobile phone was first released, it was a big, bulky unit you had to carry around; it wouldn't fit on your hip, or it was hardwired into the vehicle. That's where computing was back in the day.

And then this happened. When the BlackBerry 950 was released, the beauty of it was that the device provided you information at your hip. And it's all about the information. That's the biggest piece of this, right? It's the data. Set aside the fact that it had a QWERTY keyboard, ran off a single AA battery, and gave you three days of connectivity. The big thing with this device was having your email at your fingertips at all times. I didn't have to go to a computer anymore to gain access to my data. This was powerful. It changed the perception of what computing was and what we could start doing. And this is where the capture of data really started to take off: what if information could be made available everywhere?
What if these devices could give us the best routes to travel or the best restaurants to go to? Or tell us when it's time to do maintenance on large-scale machinery? This is where the paradigm really changed: what is computing, what are services, and what can data actually do to better everyone's lives?

So, a couple of quick stats I want to share. In 2018, there were 7 billion connected IoT devices. We went from that BlackBerry 950 to 7 billion IoT devices collecting information, collecting data. In 2020, it was 32 billion devices. You can see the exponential growth in just two years. We're capturing all this information, and the biggest question is: what are we doing with it? Estimates suggest that by 2030, 80 billion devices will be connected, providing us information about the world around us. All this data is being captured about everything we do and everything we experience: the way we drive our cars, the way we experience concerts, the way we get medical care at hospitals, you name it. And what's striking is that 80% of the world's data was captured within the last two years. For all the technology and capability we've had over the decades, 80% of the data captured around the world happened in the last two years, because of the connectivity now available over cellular and landlines. Live streaming, say, three or four years ago, would have been difficult, and now we can do it over residential internet access. Things are evolving very quickly, so it's up to us to make sure that when we do things like this, we pay real attention to what we're trying to accomplish and do it in a responsible way; we'll talk about that later in this presentation as well.

So here's the next big juncture: artificial intelligence. Everybody has this on the tip of their tongue. What do I do with artificial intelligence? I like to joke: I want to rub a little bit of artificial intelligence on my product and it's going to excel, right? That's not necessarily the case. A lot of people see artificial intelligence as the next great money-saving solution: by bringing it to my organization, I'm going to save money. A lot of people talk about artificial intelligence taking over people's jobs. There are a lot of perceptions of what artificial intelligence is. What I want to show you today is how artificial intelligence is included in the examples I'm going to give, without being the main premise of what we're trying to accomplish. It's interwoven into the stories that will be shared today. I want to showcase how, by focusing on the problem or opportunity first, artificial intelligence comes into play as a tool to address that opportunity, as opposed to being the end goal or the focus point. So when we talk about capturing data and what we're going to do with it, this is the first topic that usually comes up in these discussions, right?
It's "I need to connect this to this; I need to get information from that." We've heard about connecting IoT devices to cows to understand the best time to milk them to produce the most milk, or connecting IoT devices to buses to understand the best travel patterns for avoiding traffic and keeping stops on time. You hear these discussions all the time, but one crucial item is always missed: the people involved. And that's huge. You have to have the business decision makers, the developers, and the IT professionals all come together around these opportunities and understand the problems as a whole. Don't add technology for the sake of adding technology. Add it for the sake of understanding the problem, understanding what everybody's ideas are and how each of them needs to face the problem in their own way, and then add technology as a tool to address the opportunity. Don't run with "I need to install this at my organization so I can make more money, save more money, or be more efficient." Do it from the flip side: "I need to save money in this organization because we're spending a lot on X, and here are the tools available to me." Hear everybody's opinions and needs around the implementation, and add the technology as a tool within your plan, rather than making it your focal point. That's very important.

So I'm going to share one of my first IoT projects, and I want to show how it naturally evolved into including artificial intelligence, as opposed to starting there. A long time ago, a group of five of us was given a challenge: take some everyday objects and everyday ideas, try to connect them, and start capturing data for a specific purpose. It wasn't "go do an IoT project" or "go do an AI project"; it was "get your creative juices flowing and try to solve a real-world problem with technology." So the group and I sat around and talked about what we could accomplish. The facility we were at was a big warehouse, and we wondered whether they ever had issues with vermin like rats and mice. We walked around the building, and lo and behold, we saw traps: big black boxes that mice would go into and not come back out of. They were an eyesore, and they're not pleasant to look at. We started walking around to see how many we could find, and we talked to the pest control company that services these traps. They come out at regular intervals to remove the vermin caught in them. Can you imagine a restaurant with these boxes just sitting around, which could be full or could be empty? It's a concern, right? So understanding the opportunity as a whole was very important for us. Vermin, when they infiltrate an area, can carry over 35 diseases. And the traps are cleaned out at fixed intervals, not based on whether they're full. So with all of that going on, what can be done to be more proactive about clearing out these traps?
What are you doing to be more proactive about getting rid of the vermin at that location, as opposed to "it's Tuesday at three o'clock, I have to go out and clean the traps"? The other aspect was the revenue opportunity: the pest control industry is worth $12 billion annually, with annual growth of 3.1% year over year. So there's a huge opportunity: if you're more efficient at clearing these traps, and you can make your customers happier doing it, implementing a solution like this might be the ticket.

So we connected these mousetraps. It was literally a $2 mousetrap from a local retailer (I have it here), connected to a Raspberry Pi. In essence, it captured how long it took to catch a mouse and how long until the trap was cleared. That time-to-clear became our SLA marker. And just so you know, no mice were harmed in this scenario: when we did an actual test deployment, we used a trap that contained them inside the box rather than snapping on them, which was a lot more humane. We were then able to put the traps out around the facility and test which ones were catching more mice: this end of the building or this end of the room versus the other end. At first we only had information about when the trap was set off and when it was emptied. What if we could capture more? So we started to explore: we could capture the light level or the temperature around the area. We fed that information into machine learning to understand the variables behind the best places to put mousetraps inside a building. So machine learning would tell us: in this room, if you put the traps in these areas, you'll catch more mice. We were then able to report on how the traps were doing via Power BI, correlating that data to better understand the travel patterns of mice throughout a room or a building, and place the traps to catch mice more effectively. Then we could tell the pest control company: put your traps in these areas, because a trap here is more likely to catch mice. The system could notify the company when, say, 60%, 70%, or 80% of their traps were full and it was time to come clean them out. It could even predict the time until 80% of the traps would be full, so the company could be proactive. Instead of "it's Tuesday at 3 p.m., I have to go empty the traps," they know that in about three hours they'll need to visit this location. Their customers are much more satisfied, because the company comes out when the traps are actually filling up, rather than on a fixed interval while full traps sit there catching nothing. It was an awesome experiment to run through.
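To make the device side of that architecture concrete, here's a minimal sketch of what the Raspberry Pi piece could look like. This is not the project's actual source code (the full solution is linked next); it assumes a trigger switch wired to GPIO pin 17, an Azure IoT Hub device connection string in an environment variable, and an invented trap identifier.

```python
import json
import os
import time

from azure.iot.device import IoTHubDeviceClient, Message  # pip install azure-iot-device
from gpiozero import Button  # pip install gpiozero (on the Pi)

CONN_STR = os.environ["IOTHUB_DEVICE_CONNECTION_STRING"]  # assumed env var name
TRAP_ID = "warehouse-north-07"                            # hypothetical trap identifier

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
trap_switch = Button(17)   # trigger switch assumed wired to GPIO pin 17
armed_at = time.time()

def on_trap_sprung():
    """Send one telemetry event when the trap fires: which trap, and time-to-catch."""
    event = {
        "trapId": TRAP_ID,
        "event": "sprung",
        "secondsArmedBeforeCatch": round(time.time() - armed_at, 1),
    }
    client.send_message(Message(json.dumps(event)))

trap_switch.when_pressed = on_trap_sprung  # gpiozero callback when the switch closes

# Keep the process alive, waiting for trap events.
while True:
    time.sleep(60)
```

On the cloud side, those events would land in IoT Hub, where time-to-catch and time-to-clear can be aggregated per trap, exactly the SLA marker described above.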
Here is all the information that pertains to this solution; we've made the entire thing open source. What I love about this, and why I love sharing this presentation, is that every time I've shared it, the solution has evolved. Yes, I've seen a lot of implementations of the mousetrap scenario itself, but it has also evolved into other facets of business based on the mousetrap architecture, which is really cool. Let me show you one example. As mentioned, the key was the evolution from just knowing when the trap was full and being notified, to adding machine learning to make the solution that much better: not only catching mice, but being proactive about emptying the traps so they're always available to catch more.

With the inclusion of machine learning, a lot of organizations we've talked to have taken up this solution and implemented it in ways we find unique. One of the stories I like to share is a sweet one: ice cream trucks. These are the ice cream trucks that travel around Toronto; I know other parts of North America, and probably the world, have similar trucks, but I've seen these ones in Toronto specifically. These are the trucks we see every summer, and their challenge is how to be most efficient at selling ice cream. It's hard. There's the cost of gas, the cost of operating the ice cream machine inside the truck, and an individual working 15 or 16 hours a day trying to make a living. It's not an easy feat. Using the same architecture as the mousetrap, they made a clever substitution: instead of the trap being set off or reaching a certain weight, the trigger is the song, the chime the ice cream truck plays as it travels through a neighborhood. Here's the interesting piece. When the truck is moving slowly (there's a speed factor involved as well) and the music is playing, that's the signal that the truck is in motion: it's not servicing customers, it's drawing attention to itself. When the truck stops because people are there and the music turns off, that's the trigger to calculate how much time the vendor spends in that specific area servicing customers. In the future, they're going to correlate that with sales as well: how many sales actually happen at each location. They can now pinpoint the best times of day, and the best days, to be at certain locations to sell the most ice cream. So here's a hotspot in the Toronto area, in Burlington, at a certain time; you can see the star there marking when to be there to sell the most ice cream, and then travel across the bridge into Hamilton and do the same thing.
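Here's a small sketch of that dwell-time logic, assuming telemetry records of the form (timestamp, speed, chime on/off, location); the field names and thresholds are invented for illustration.

```python
def service_stops(records, max_speed_kmh=2.0, min_dwell_s=120):
    """Yield (start_ts, dwell_seconds, lat, lon) for stretches where the truck
    sits still with the chime off, i.e. likely serving customers."""
    start, where = None, None
    for ts, speed_kmh, chime_on, lat, lon in records:
        serving = speed_kmh <= max_speed_kmh and not chime_on
        if serving and start is None:
            start, where = ts, (lat, lon)       # stop begins: remember where
        elif not serving and start is not None:
            if ts - start >= min_dwell_s:       # ignore momentary pauses
                yield (start, ts - start, *where)
            start = None                        # stop ended

# Usage: aggregate dwell time by rough location to surface the hotspots.
# stops = list(service_stops(telemetry))
```

Aggregating those stops by location and time of day is what produces the hotspot map described above.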
This makes the ice cream trucks more efficient at selling ice cream, with the least expense in gas and travel time, and it lets the vendors be more effective at what they do. And that's huge: if they're more efficient at selling ice cream, it costs less to operate the vehicle, they make the income they're trying to make, and more people are happy, because you get ice cream. You're always happy when you have some sweets to gnaw on. But what's interesting is that this is based on the mousetrap solution. It's the same architecture and infrastructure, just customized differently, with different types of triggers, but the same logic. I love seeing how these solutions evolve to do something more, something better, beyond what we initially started with. That's why I love the open source aspect of technology: people can evolve things into something more.

Another project, again based on those learnings and experiences, was working with Toyota Canada. One of their challenges was wanting to better understand which parts in a vehicle could break down, so they could be more proactive about recalls on vehicles and parts. In this scenario, we sat down with the team. They had 300 shop locations across Canada manually providing information to Toyota Canada, who would capture it and massage it as best they could. It could take them weeks to work through the amount of data they were capturing on specific vehicles. It was interesting to see that all of this information was being ingested through Access, which was crazy given the volume of data they were receiving. So this became an exercise of: let's take the data, put it into a more workable database, SQL in Azure, and then work through the variables under which a part could break down. It could be the weather or the road conditions in a specific area. We took open data, road data, weather data, what have you, and correlated it with the parts inside a vehicle and the percentage chance of failure based on the stress on those parts. It was a very interesting exercise. We were the catalyst to start this alongside Toyota Canada, and I know they've since taken it a lot further than where we started. What I love is that we've taken these learnings and created a lab from them. If you're trying to get your foot in the door with machine learning and understand the possibilities, this is for you. We worked on it with Toyota Canada, not on part failure but on vehicle pricing, which is an interesting exercise in itself: if the vehicle has X, Y, and Z in terms of parts, features, or functionality, what should it cost? If you go to aka.ms/autolab, you'll get a full run-through of a lab you can work through yourself.
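As a taste of the pricing idea the lab explores, here's a minimal sketch using scikit-learn on invented toy data. The lab itself has its own dataset and tooling, so treat this purely as illustration; every column name and number here is made up.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for the lab's vehicle dataset: features and prices are invented.
df = pd.DataFrame({
    "engine_l":   [1.8, 2.5, 3.5, 2.0, 2.5, 3.5],
    "horsepower": [139, 203, 301, 169, 203, 295],
    "doors":      [4, 4, 4, 2, 4, 4],
    "price":      [23000, 28500, 38000, 27000, 30000, 41000],
})

X, y = df[["engine_l", "horsepower", "doors"]], df["price"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

model = LinearRegression().fit(X_train, y_train)

# Predict a price for a hypothetical configuration.
query = pd.DataFrame({"engine_l": [3.0], "horsepower": [250], "doors": [4]})
print(model.predict(query))
```

The same shape of exercise (features in, target value out) applies whether the target is a price or a part-failure probability; only the data and the stakes change.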
Again, this extends from the mousetrap learnings into the solution we built with Toyota Canada: how do we ingest the data, how do we massage the data, how do we train the model? It's a great step-by-step to go through. I use it for teaching others about adopting machine learning, for seeing it from the perspective of "how do I use this to put my organization forward and address opportunities," and for generating ideas about how it can be implemented in organizations, which is key.

So we've talked about ingesting information from IoT devices up to the cloud, where it can be digested through solutions like machine learning, incorporating artificial intelligence to produce an outcome of X, whatever X may be. But what happens when the device itself cannot be connected, or can only be connected at certain intervals? There are things you have to take into consideration with these opportunities. There can be a significant cost, sometimes a huge cost, to pushing data to the cloud, and with as much data as we're capturing now, does it make sense to send 100% of the raw data up to be digested? Or should we incorporate edge computing and build a base understanding of the data right at the endpoint, whether that's an IoT device, a mobile device, a smartphone, you name it? We're now seeing compute cycles being used right at the endpoint to do rudimentary calculations. In the case of the mousetrap, initially the trap going off would complete the trigger and push that raw information to the cloud, and the cloud would decide: yes, it was set off, or no, it wasn't, and run the timer from there. In later iterations of the solution, which I've seen high school students deploying (which is awesome), the work happens on the edge: the Raspberry Pi itself reports that the trap has been set off, along with the surrounding variables and how many mice have been captured, and pushes only the finalized information up to the cloud for calculation. This is beneficial in several ways. You no longer require 100% connectivity, which saves a lot of cost. The devices are smarter at the endpoint, so they're more efficient at their tasks. And you get cleaner data: you're not hauling up raw data that you'd have to filter out before your machine learning exercise anyway, which is awesome.

So I want to show you an example of this that our team undertook. This is one of the boats in the Canadian Coast Guard fleet, the Henry Lawson. It's a large boat that goes out to sea for search and rescue: boats in distress, people overboard. It's a crucial, crucial service that we have here in Canada.
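Before we get to the Coast Guard story, here's a quick sketch of that edge pattern from the mousetrap's later iterations: raw readings stay on the device, and only a small periodic summary goes up. The sensor-read and upload functions are stand-ins (the real upload would be the IoT Hub `send_message` call from the earlier sketch), and the cadence is assumed.

```python
import json
import random
import statistics
import time

SUMMARY_INTERVAL_S = 900   # push one summary every 15 minutes (assumed cadence)

def read_sensors():
    """Stand-in for the real local sensor read: (temp_c, light_lux, trap_sprung)."""
    return 21.0 + random.random(), 120.0 + random.random() * 10, random.random() < 0.01

def send_to_cloud(payload: str):
    """Stand-in for e.g. IoTHubDeviceClient.send_message from the earlier sketch."""
    print("uploading:", payload)

def summarize(window):
    """Reduce a window of raw samples to the one small record worth uploading."""
    return {
        "samples": len(window),
        "meanTempC": round(statistics.mean(w[0] for w in window), 1),
        "meanLux": round(statistics.mean(w[1] for w in window), 1),
        "catches": sum(1 for w in window if w[2]),
    }

window, last_push = [], time.monotonic()
while True:
    window.append(read_sensors())                        # raw data never leaves the Pi
    if time.monotonic() - last_push >= SUMMARY_INTERVAL_S and window:
        send_to_cloud(json.dumps(summarize(window)))     # only the summary goes up
        window, last_push = [], time.monotonic()
    time.sleep(30)
```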
Search and rescue services like this exist around the world. The big challenge is that there are only so many boats, and Canada has a huge coastline; we're on two oceans. It's about having enough resources to save as many people as we can, and how to get better at that. So this challenge was put to ourselves and a team out in BC called InDro Robotics: could we use artificial intelligence to aid in the responsibility of search and rescue? Our teams came together around that question.

This is where the drone aspect comes into play. Working with InDro Robotics, we were able to teach a drone to identify when a life jacket is in the water, and to verify that a body mass is actually inside that life jacket, that there's a real individual in it. From a testing perspective, there are regulations and rules governing the flight pattern the drone would have to take, the altitude it would have to fly at, and the types of life jackets it would have to recognize; all of this was crucial to the success of the endeavor. The other aspect is that these drones fly up to three hours out at sea, and in those scenarios there's no connectivity; cellular doesn't reach that far. Yes, you could use satellite, but the cost of including satellite was immense. And trust me, when you're trying to save lives, cost doesn't matter, but we also wanted to make sure it would be feasible for organizations like the Canadian Coast Guard to deploy a plethora of these devices rather than only a few because of budgetary constraints. So the picture here is these gas-powered, full-size drones going out three hours to sea toward a distress signal and then surveying the area. Originally, a pilot would fly the drone out manually to survey the area, capture the footage, and fly back to the central office to have it analyzed. The challenge was: how could we make the drone better at understanding the data it's seeing, rather than simply flying out, recording, and flying back to have the tape reviewed? Every second counts; if somebody's overboard, hypothermia can set in. These are the things you have to take into consideration when you're trying to save lives. So the team you see here worked through the scenario: if we can build the intelligence into the drone to identify that individuals are actually off the boat and in the water, then in a dire situation, where minutes count, we can get support out there that much faster. It took 500 person-hours of flying this drone out and filming to teach it to identify a life jacket in the water. And when a life jacket is detected, it needs to be more than that: okay, a life jacket is there, identified; now, is there an individual inside that life jacket? That's where the IR scan came into play.
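The next section describes what the drone does with that detection. As a heavily simplified sketch of that decision flow (life-jacket confidence, IR confirmation of a person, survival-window estimate), it has roughly this shape. Every threshold, score, and table entry below is an invented placeholder, not the project's real values, and certainly not medical guidance.

```python
def assess_frame(jacket_score, ir_delta_c, water_temp_c, people_in_water):
    """Combine a vision-model detection with an IR check and a rough
    survival-window estimate, as the narration describes."""
    JACKET_THRESHOLD = 0.80   # assumed vision-model confidence cutoff
    BODY_IR_DELTA_C = 4.0     # assumed IR contrast suggesting a warm body

    if jacket_score < JACKET_THRESHOLD:
        return None           # no life jacket in this frame

    occupied = ir_delta_c >= BODY_IR_DELTA_C

    # Placeholder table: (max water temp in C, estimated window in minutes).
    windows = [(5, 45), (10, 90), (15, 360), (21, 720)]
    minutes = next((m for t, m in windows if water_temp_c <= t), 1440)

    return {
        "lifeJacketDetected": True,
        "personInJacket": occupied,
        "peopleInWater": people_in_water,
        "estimatedWindowMinutes": minutes if occupied else None,
    }
```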
This was big in terms of our learnings, because remember, these drones had no connectivity. They were flying out there under remote control, going to a specific location and capturing this information. Now they could understand what they were seeing while out on the water: hey, there's a life jacket overboard; hey, there's an individual inside that life jacket. They captured the environment as the next step: is it cold, is it hot, what's going on? And then they did a rudimentary calculation: with the four individuals in the water, the time until hypothermia sets in is X minutes or hours, and that's the window you have to get out there and help these people to safety. It was a very interesting experiment, and we learned a lot from it. The full write-up is available at aka.ms/droneAI. We've shared our learnings with everybody, and everybody has taken advantage of them and grown the solution.

But I want to actually show you the cognitive services piece: how, over those 500 person-hours, we taught the drone what a life jacket actually looks like. So let me run through a quick demo of Custom Vision and edge computing. Here we have the Custom Vision workbench, the tool I use to teach custom vision, in essence artificial intelligence, to understand objects. We start by going to www.customvision.ai and adding images. In this scenario, we're not going to do life jackets; we're going to do something around inventory at a hardware store. This is a real challenge, especially right now, when only a limited number of people can be inside an enclosed space. How do you do inventory with fewer people? Does it take more time? How can we use technology to help in these scenarios? So we're going to do inventory on hammers. The first thing we need to do is teach the service what a hammer looks like, because hammers come in all different shapes and sizes. We know it's a blunt object at the top, usually metal, sometimes rubber if it's a mallet, with a wooden handle to hold on to. These are images I brought up through Bing, saved into a repository, and then inserted manually one by one; I know, and I'll show you a solution at the end that can speed this up. So for this scenario: take all these hammers, put them into the solution, and teach it by tagging each image as a hammer. Then we add a wrench. Wrenches can be adjustable or fixed, come in different sizes, and are usually all metal. Trust me, I've used a wrench as a hammer plenty of times when I didn't have one handy, but there are definitely differences in the makeup of a wrench versus a hammer: different sizes, different shapes, different looks.
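The demo uses the customvision.ai portal, but the same project can be driven from the Custom Vision training SDK. A minimal sketch, assuming a training key and endpoint in environment variables and local folders of hammer and wrench images (the project name and folder layout are invented):

```python
import os
from pathlib import Path

from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry)
from msrest.authentication import ApiKeyCredentials

endpoint = os.environ["CUSTOM_VISION_ENDPOINT"]   # assumed env var names
credentials = ApiKeyCredentials(
    in_headers={"Training-key": os.environ["CUSTOM_VISION_TRAINING_KEY"]})
trainer = CustomVisionTrainingClient(endpoint, credentials)

project = trainer.create_project("hardware-inventory")  # hypothetical project name
tags = {name: trainer.create_tag(project.id, name) for name in ("hammer", "wrench")}

# Upload each local folder of images under its matching tag
# (the service caps each batch at 64 images, so chunk larger sets).
for name, tag in tags.items():
    entries = [
        ImageFileCreateEntry(name=p.name, contents=p.read_bytes(), tag_ids=[tag.id])
        for p in Path(name).glob("*.jpg")   # e.g. ./hammer/*.jpg, ./wrench/*.jpg
    ]
    trainer.create_images_from_files(project.id, ImageFileCreateBatch(images=entries))

iteration = trainer.train_project(project.id)   # kicks off a training iteration
```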
In the portal, all of this goes into the customvision.ai workbench so it builds an understanding of the differences between the hardware. Then we train the model. This is important: now that we've tagged everything, we have to teach the service to distinguish the hardware, so that when it's making a deduction about what it's looking at, it can be specific: yep, this is a hammer, and this is a wrench. You go through iterations of learning, because you want to get as high a percentage as you can for the reliability of the recognition. In this scenario, the precision was 100% and the recall 96%, which is very high. Again, this is a very small demo. In the case of the drone understanding life jackets in the water, as I said, it was 500 person-hours across different visibility conditions during the day, different weather patterns, different types of water. Sometimes the ocean is blue, sometimes it's green, and that affects the apparent color of the life jacket. Even the life jackets themselves vary: here's one that's brand new, bright orange, and here's one that's 10 years old and faded. A lot of capture had to occur, and we had to do it all manually; it couldn't be automated, because it had to be done in a real-world implementation. We couldn't do it from imagery alone, because of the regulations. And this is important too: it's not just about including technology for the sake of including technology in an opportunity. You also have to abide by the rules and regulations that apply to the operation or opportunity you're addressing.

So now that we have the recall we want, we're going to export the learnings, and this is where the edge computing piece comes into play. Traditionally, the raw data would be sent up to the cloud and the cloud would make its deductions there. In this scenario, we're going to export the model instead, and as you can see in the solution, there's a plethora of export targets, everything from TensorFlow to Docker files. We're going to export to an ONNX file, an open format for machine learning models; I'll show you why in a second. What's powerful here is the amount of choice you have. You've created this base learning file that understands the difference between a hammer and a wrench, and you're exporting it to make a device semi-intelligent, to give it the awareness of what it's actually looking at. So I'm going to export this ONNX file and open up my application. This application is in Unity, and it takes what I've taught inside the Custom Vision workbench, the model, and inserts it into the application, so that something like a HoloLens can understand what it's looking at.
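Before the model goes into Unity, you can sanity-check the export anywhere ONNX Runtime runs. A rough sketch; the input size and preprocessing below are assumptions, so check the metadata bundled with the export for the real expected shape and channel order.

```python
import numpy as np
import onnxruntime as ort          # pip install onnxruntime
from PIL import Image

session = ort.InferenceSession("hardware.onnx")   # the exported Custom Vision model
input_name = session.get_inputs()[0].name

# Custom Vision ONNX exports typically expect a float tensor shaped like
# (1, 3, H, W); the 224x224 size and channel order here are assumptions.
img = Image.open("shelf_photo.jpg").convert("RGB").resize((224, 224))
x = np.asarray(img, dtype=np.float32).transpose(2, 0, 1)[np.newaxis, ...]

outputs = session.run(None, {input_name: x})
print(outputs)   # class scores for hammer vs. wrench
```

This is the same file the Unity application consumes; running it standalone first confirms the export behaves before you wire it into a device.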
Traditionally, we use the HoloLens in scenarios with holographic images in mixed reality, seeing that imagery inside the real world. But how about using these devices to understand what they're seeing? From this wrench-and-hammer identification project, we were able to get up to 75 objects identified by the HoloLens itself, which is huge. This is obviously not a drone; I would love to showcase the drone for you, but unfortunately, in the area I'm in right now, I can't fly one. What I love about this solution is how extensible it is. We've talked about this capability on the drone, and now it's ported over to HoloLens, and there's a plethora of other devices you can port the same solution to. Again, this is why I love sharing these projects: you are the person who will bring this to the next level, or do the next thing with it, based on the opportunities and problems you need to address. To help with that, this is my GitHub repo, where I've collected a lot of the projects we've done as a team. Go to aka.ms/wirelesslife, replicate them, run through the code, and send me suggestions on how to make the solutions better. I'm always eager to learn what you're trying to accomplish and how I can point you at resources to take it to the next step.

As mentioned, I did this the manual way, going through the customvision.ai workbench to ingest all that imagery. I wanted to share that Cassie has a great write-up, which can be found at aka.ms/10minuteML, that will actually take imagery from Bing and ingest it for you. So if I type in "hammer" and "wrench," it automatically populates customvision.ai to do the training on my behalf, and I don't have to go out and manually capture that information. She offered this up after seeing the presentation: why are you doing it the long way? Why is it taking you so long? This is what I love: people are willing to help, people come out and grow the solution, and I want to share that with everybody to take the ideas to the next level. You can do it the way I just showed you in the Custom Vision workbench, or with Cassie's model, which automates a lot of the capture. That matters especially if you have a plethora of hardware to cover; we only did a wrench and a hammer, but there are screwdrivers, crowbars, you name it. Automated capture is a much better and much quicker solution, unless you have to do it manually like we did with the drones, where the capture had to happen live, in different weather patterns and conditions; it's hard to grab that imagery otherwise, and the regulations required live training. So make sure that, when you're going through your solutions and scenarios, you're abiding by the rules and regulations that apply.
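For flavor, here's a rough sketch of that kind of automation, continuing the `trainer`, `project`, and `tags` names from the earlier training sketch. This is not Cassie's actual implementation; it uses the (now legacy) Bing Image Search SDK and feeds the result URLs straight into Custom Vision, letting the service fetch the images itself.

```python
import os

from azure.cognitiveservices.search.imagesearch import ImageSearchClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageUrlCreateBatch, ImageUrlCreateEntry)
from msrest.authentication import CognitiveServicesCredentials

search = ImageSearchClient(CognitiveServicesCredentials(os.environ["BING_SEARCH_KEY"]))

# `trainer`, `project`, and `tags` carry over from the earlier training sketch.
for name, tag in tags.items():
    results = search.images.search(query=name, count=50)  # e.g. "hammer", "wrench"
    batch = ImageUrlCreateBatch(images=[
        ImageUrlCreateEntry(url=img.content_url, tag_ids=[tag.id])
        for img in results.value
    ])
    trainer.create_images_from_urls(project.id, batch)  # Custom Vision fetches the URLs
```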
So, as mentioned, we showcased this on HoloLens and on a drone, but it doesn't stop there. In essence, any device that can capture imagery can be made to understand imagery and be used in this scenario. This is the Azure Kinect DK, which you can acquire; it can capture information in IR and in real-time imagery. For it to understand what it's looking at, you need compute: you'd connect the device to a processing unit to run the calculation on the edge, or you can have it standalone, funneling information up to Azure, and do the work there. I wanted to highlight this because the sky's the limit in terms of creativity with the solutions being shared. It's not just "go out, get this device, and replicate"; it's "grow on this." See what opportunities can be addressed with it. If you're stuck on a problem right now, ask how this can be used as a tool inside the solution you want to put forward. I usually find that when you start addressing a problem, it grows and evolves as you go, and that's fine, that's great; that's how you learn more about implementing the technology. And the hope is that you share what you learn with the world so that everybody can take advantage. If you want to look into this kit, the URL is aka.ms/azurekinectdk, which gives you the ability not only to acquire the devices you need, but also a lot of the templates used to understand what's going on in a given solution.

I also wanted to showcase another solution using the same kind of drone implementation. A group in Australia is using drones to assess soil moisture, the saturation of the soil in farmers' fields. This is big, right? If you have droughts happening, you need specific water sources, and you want to maximize crop yield. I know tomatoes need more water than corn. Understanding those different data points and levels is a big deal; it's about being precise in your water and moisture measurement. These drones have the capability to do that, and to trigger an automated response from the irrigation system: yes, you're going to have to turn the irrigation on, because the soil isn't saturated enough for the crop that's there. This is huge. It's not about replacing jobs; it's about yielding more food, better food, making sure the plants have adequate resources to grow and produce optimally. Again, what I love about this story is that it takes the drone technology and the Custom Vision capability further, in a new way, to address an opportunity and create more opportunity down the road, producing more food for the world to consume.

Back to the HoloLens, and again, this is us branching off the initial drone scenario into other opportunities. We talked about this, right?
The HoloLens is the mixed reality implementation where I can look at holograms that interact with the real world. What I loved about this solution is the scenario where piping runs behind sheetrock, or drywall as we say here in Canada, and you don't know what's happening with it. There could be water pressure issues; there could be a leak somewhere. Traditionally, you'd have indicators, or a reporting station, especially in large warehouses: here's what your water flow looks like. And how do you address a problem? Manually, right? Hopefully there's a service door you can go through to adjust the pressure at the valve as required. We worked with a group out of T4G on whether we could automate this: an inspector wearing a HoloLens, walking through an office building or a warehouse, seeing the water flow through the piping, and then taking it a step further: "I need to release pressure on this valve, I need to open it," done via hand gesture. Remember, the HoloLens is a new interface for interacting with data. It's similar to what you would do on a smartphone or a laptop, but the device is worn, and you see the data in real time in front of you, mixed with the real world. We've seen games and that kind of interaction; we've seen technical support scenarios, working on a vehicle or an engine with somebody guiding you, the Dynamics 365 type of implementation. In this scenario, you're walking through a building, looking at the pipework virtually behind the sheetrock or drywall, with actuators on the valves to open and close them as needed based on the pressure levels you can see in the report. That's huge. It's another interface you can now use. And in some projects, I've learned that the inspector actually has to be within a certain radius of the machinery or pipe they're inspecting, so this solves that problem too: instead of bringing out a laptop or a smartphone to make adjustments manually, you can do it via the HoloLens and see what's happening behind the walls without breaking them open, or even going to the service door.

The last piece I want to touch on is the ethics of AI, and this is a very hot topic across all the solutions and scenarios out there. Remember what we said earlier: the addition of technology shouldn't be the endpoint, shouldn't be the focal point, of what you're trying to accomplish. At the end of the day, it's the relationship with people, and the opportunities you're addressing on their behalf, that should be the focus. That means you have to take ethics into consideration. With all this data being captured for AI, the question becomes: there's a lot of data, but should we be capturing it at all? I know facial recognition, in particular, raises challenges around having that kind of data out there. You've got to be mindful of that.
These are scenarios you have to take into consideration when deploying these types of solutions, to ensure they meet not only regulatory compliance but also acceptance by the people who will be around when they're rolled out. So I want to share one last project, a solution we worked on with an organization named HomeEXCEPT. The challenge they had was this: the population is aging, and there's a group at a certain age who want to live in their own homes. They don't want to go to facilities; they don't want to live in a complex with many other people. They want to be independent for as long as possible. There are tools out there, like medical necklaces or bracelets, where if the wearer is in distress, they press a button and somebody comes to help. The problem is that a lot of the time they won't want to wear the device, or they'll forget to wear it; that happens a lot. Then, when they're in distress, if they've fallen or gotten hurt and can't call, how do you make sure support reaches them? That was the ask: how do we address this problem?

Initially, the conversation was: okay, we'll put cameras in the home, and the cameras will constantly monitor the individual. That's pretty invasive, right? It means a camera running 24/7 in your home, with a person, or artificial intelligence, monitoring it. And that becomes an ethics challenge: is that responsible, and will it be accepted by the general public? What about security? There's a lot to take into consideration with that kind of implementation. What the team came back with instead was: what if, instead of an optical camera inside the home, we installed an infrared camera? You're not seeing the person's face or exactly where they are in their space; you're watching a heat mass travel through the house. Now you can be much better at detecting that this individual hasn't moved during a certain period of time: is there a problem? And you can learn the patterns of movement through the house during the day: this is when they sit down and watch their favorite TV program; this is when they go feed their pet. Doing it through IR safeguarded the individual's identity and image while still allowing an understanding of their living patterns inside their home. This made the camera question much more workable: IR was far more accepted, because your face never shows up anywhere in the data. Obviously, it still had to be accepted by the person living in the home. But the benefit was that the individual didn't have to do anything.
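Here's a minimal sketch of that inactivity rule, assuming the IR sensor yields periodic (zone, timestamp) readings for the resident's heat mass; the threshold, zone names, and notification wording are invented for illustration.

```python
from datetime import datetime, timedelta

INACTIVITY_LIMIT = timedelta(minutes=45)  # invented threshold before checking in

class InactivityMonitor:
    """Track the resident's heat mass as it moves between zones; raise an
    alert when it stays put longer than the configured limit."""

    def __init__(self, notify):
        self.notify = notify        # callback: phone first, then loved ones/service
        self.last_zone = None
        self.last_move = datetime.now()

    def reading(self, zone: str, when: datetime):
        if zone != self.last_zone:
            self.last_zone, self.last_move = zone, when   # movement detected
        elif when - self.last_move > INACTIVITY_LIMIT:
            self.notify(f"No movement from {zone} since {self.last_move:%H:%M}. Everything OK?")
            self.last_move = when   # avoid repeated alerts for the same episode

# Usage: monitor = InactivityMonitor(notify=print)
#        monitor.reading("living_room", datetime.now())
```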
So if that individual were in distress, they would stop moving, and the system would kick in: hey, I've noticed you haven't gone to watch your program, and it's on right now; you're still in the same spot; is everything okay? A notification can go to the individual's own smartphone, and if they don't respond within a specific period of time, their loved ones, or a service monitoring the solution, can be notified. The benefit, again, is that it's not a camera in your face capturing your facial information; it's capturing you as a heat mass as you move through your home. And you don't have to wear a necklace or a bracelet to call emergency services: the solution can do it on your behalf, which is a huge implementation. It's still in testing out in Eastern Canada right now, and the information on this project is, again, available at aka.ms/homeexcept. What I loved about this was, once more, that it wasn't about including technology as the focal point. It was: how do we responsibly use the tools that are out there in a way that doesn't invade an individual's privacy, but still enables us to assist that individual when they're in distress?

One of the last things I want to share, and really drive home, is that all the technology showcased and highlighted today rests on your creativity. The people who work through the examples we've shared really do take them to the next level, and I appreciate that from a learning perspective. I love to see how you evolve the solutions and implementations that have been shared with you today, and it's something I then take away and learn from. I love that the creativity out there didn't limit the mousetrap solution to mice; it extended it to ice cream trucks, newspaper boxes, and whatever else, and took the drone from saving people in distress to agricultural scenarios. Keep doing that. You are the focal point for all the opportunities out there. It's the people involved. The technology is an important tool, but it's not the focal point; at the end of the day, you and your creativity are what drive this forward. So if you have any questions or concerns, or you're not able to gain access to any of the information, feel free to reach out to me at @WirelessLife on Twitter; I'm happy to field your questions and share information with you. There are also the Microsoft Learn modules, a free resource you can work through. I've created a collection specifically on IoT and vision that runs you through scenarios of IoT and Custom Vision that you can implement in your own solutions for learning. Again, this is not the focal point, but if you're looking at adopting these types of scenarios and services, do check it out. It's a completely free resource for building comprehension of the services we talked about today, beyond the source code that's available. And that's it for my talk, so thank you very much.
And again, if you have any questions, please share them with me on Twitter at @WirelessLife; I'm happy to field them. Or leave them in the comments on the video; the comment section is available right here, and we'll address those as well. With that, thank you very much again for joining me today, and we'll talk to you soon.