My name is Matt Halliday. I'm a business development manager in the Office of Technology Commercialization. I'd like to welcome you here this morning for the tech showcase. We have a nice set of presenters this morning, from PIs and startup companies as well. The rule for this morning's session is that we'll hold all questions till the end. There is some time afterwards for networking, so we'll just keep the presentations rolling. Please hold your questions till the end, after the drone show. There'll be approximately seven presentations, and then we'll actually go outside for a drone demonstration, and we'll talk about that once we get there. But I'd like to introduce our first speaker, Professor Jeff Siskind. Thank you. It's live streaming through the camera. Okay. Imagine you're driving in a car trying to find a new building like this convention center, and it's so new that your GPS doesn't know about it, and you can't find the building. How do you find it? Well, you ask a pedestrian for driving directions, and they tell you, and you follow the driving directions. Now imagine you're a self-driving car. How are you going to do that? Similarly, imagine you're a robot in a new building on campus, and you have to find the washroom, or you have to find the printer. You ask people and follow directions. That's what my research is about. This is joint research with a collection of graduate students, and I'm going to show you three systems today that know how to engage a human in natural language dialogue, ultimately speech, and then have a physical robot follow the directions to get to its destination. The first system I'm going to show you we built about six or seven years ago, and the crucial thing about it is that it understands complex directions and understands every single word in the directions. So I'm going to change just a single word, and the robot's going to follow a completely different path. To the right of the chair.
And now I'm going to change the word to in front of, and it's going to follow a completely different set of directions. The same robot in the same environment, changing one word, and it goes along a different path to a different destination. And I'll change yet another word, left of, and it goes on yet another distinct path. So it has a very deep understanding of at least this subset of English and knows how to follow directions. Now I'm going to show you a far more sophisticated system. You see here the internal representation of the driving directions that the robot obtained from a dialogue in speech with a human user. And now it's executing those navigation instructions in an environment that it's never seen before. So in this case, the crucial thing is that the robot engaged in a multi-turn dialogue with the human. It had to find a room. It asked the user, how do I find the room? The user gave a partial description of the plan. The robot understood that it was an incomplete description of how to get to the room. It asked a follow-up question. The human gave an answer to that follow-up question. The robot then determined that it now had complete directions on how to find the room, and it executed those directions. But in this case, the robot was right in front of a human. What if the robot is someplace in a building that it's never been in before, and there's no human nearby? You have to give it directions, and it has to follow the directions. Well, it has to go find a person. And then once it follows the directions, it has to search the name tags or the number tags on the rooms. And that's what the second system is going to do. So we give the robot a goal. In this window, it's going to try to find a person to approach and ask for directions. It's just driving around, looking for somebody. At the end of the hallway, it turns around and continues looking elsewhere. You can follow the log of what's happening here.
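The word-level sensitivity described above, where swapping "right of" for "in front of" sends the robot to a different goal, can be sketched very roughly in code. This is a hypothetical illustration, not the speaker's system: the offset table, function names, and geometry are assumptions, showing only how a single spatial preposition can change the computed goal pose relative to a landmark like the chair.

```python
import math

# Hypothetical sketch: map a spatial preposition in a parsed instruction
# to a goal position relative to a named landmark. Changing one word
# ("right of" -> "in front of") yields a different goal, which mirrors
# the behavior demonstrated in the talk.

# Offsets are in the landmark's own frame: +x ahead of it, +y to its left.
PREPOSITION_OFFSETS = {
    "right of": (0.0, -1.0),
    "left of": (0.0, 1.0),
    "in front of": (1.0, 0.0),
    "behind": (-1.0, 0.0),
}

def goal_pose(landmark_xy, landmark_heading, preposition, distance=1.0):
    """Return a world-frame (x, y) goal at `distance` from the landmark."""
    dx, dy = PREPOSITION_OFFSETS[preposition]
    cos_h, sin_h = math.cos(landmark_heading), math.sin(landmark_heading)
    # Rotate the landmark-frame offset into the world frame.
    wx = landmark_xy[0] + distance * (dx * cos_h - dy * sin_h)
    wy = landmark_xy[1] + distance * (dx * sin_h + dy * cos_h)
    return (wx, wy)
```

With a chair at the origin facing along +x, `goal_pose((0, 0), 0.0, "right of")` and `goal_pose((0, 0), 0.0, "in front of")` give two different destinations from otherwise identical instructions.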
Now it has found a person. You can see that it can tell whether a person is facing the robot or walking away from the robot. Go straight and take a left. Again, an incomplete instruction, so it asks a follow-up question. The room will be on the right. You're going to see it drive to the end of the hallway, where it can make a left. It's got a plan for what it's going to do, and it's executing step one of the plan. It says go forward until the left intersection, then turn left, and then the goal will be on your right. It's driving forward. And now it's in step two. It's going to make a left turn. And now it's in step three. It's in the target hallway. Now it's going to look for doors. You can see it's detecting doors. And it's going to go to the door on the right, because that's what it was told to do. It's approaching the first door on the right to read the door tag. And then notice it's going to move its camera and scan for the door tag. It finds and reads the door tag. And it found its goal. Thank you. That's what I do. All right, next up is Professor Jiansong Zhang. Thank you so much, Matt. Morning, ladies and gentlemen. I'm Jiansong Zhang, a professor in the School of Construction Management Technology at Purdue University. Today we are going to show construction robotic systems for construction automation. There have been some recent developments in construction robots, and what you are seeing here are two models from Japan. The one on the left-hand side is a teleoperated robot that can be used for railway construction. The one on the right-hand side is a humanoid robot that was developed by AIST, an agency in Japan. It can be used to help with timber wood frame construction. As you can see, it is putting up a drywall panel there. So construction robots are here, right? Or are they?
So by show of hands, how many of you have seen a construction robot? Okay, we have three. And how many of you have seen construction robots in operation in person? Awesome, we do have one. Okay, so obviously construction robots are still not widely used in the industry, right? Why is that? Some background information on our construction industry: it has been lagging behind other non-farm industries in terms of automation and productivity. And there has also been this workforce shortage, which is a pretty common phenomenon nowadays across industries, but it is especially serious in our construction industry. For example, 80% of contractors are having trouble finding skilled workers for their trades crews. So the overarching goal of our technology: we do not want to reinvent the wheel or recreate construction robots from scratch. We want to build on existing solutions, but we want to enable them to be widely used on the job site, and off site as well; I'll show you later. As the initial focus, we are focusing on framing operations. There are two key technologies that I'm going to show today. The first one is a building information modeling (BIM)-based constructability evaluation through logic-based AI reasoning. This will be useful to help us evaluate whether an existing robotic system can be used to achieve certain construction operations. And secondly, we also work on redesigning and extending the capabilities of existing robotic systems to adapt them, customize them, and optimize them for construction operations. Okay, so what is the BIM-based constructability evaluation? The figure on the left-hand side shows the overall process of this, pretty much, software system; you can think of it like that. It takes two types of inputs. The first input is a BIM design. This is the building design, and nowadays we are using building information modeling instead of, let's say, 2D plans, right?
And we are feeding the system with the international standard for BIM data, which is the ISO-standard Industry Foundation Classes, because it is neutral, transparent, and much more robust to work with. On the other hand, we also feed it with the robotic system that you are interested in using on your job site, okay? So once the system is fed these two types of inputs, it will run through physics-based simulations as well as logic-based AI reasoning to figure out: are these robotic systems able to support your workflow to achieve the design that you want to construct? The figure on the far right is an example. Let's say you want to assemble some timber frame walls, and you are interested to see if a certain existing industrial robotic arm can do this job, okay? You can feed this robotic arm and your timber wall from the design into the system, and the simulation will run and, based on the logic-based AI reasoning, tell you whether there are going to be any troubles during the operations, right? As you can see, the red color highlights certain limitations if you are going to use this specific type of robot for constructing this piece of timber frame, for example. All right. So that was the software side. What if the existing robotic system cannot be directly used for the construction operation that you are looking forward to? Then we can come in and customize, redesign, and extend the capability of the existing robotic system to make it work, okay? That is the idea. So, oops, went too far. All right. So here we are showing you another example, a hardware piece that has been redesigned. Actually, this is a patent-pending device that we have designed specifically for framing operations. There are a lot of benefits to using this type of device. I'm not sure if it can play, right? So it can show the video.
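The constructability evaluation described above, feeding a BIM design and a candidate robot into logic-based reasoning that flags limitations, can be sketched as a toy rule check. This is an illustrative assumption, not the actual IFC-driven system: the dictionaries, field names, and the three rules are invented to show the flavor of checking a component against a robot's capabilities.

```python
# Illustrative sketch only: the actual system consumes IFC models and runs
# physics-based simulation plus logic-based AI reasoning. Here, hypothetical
# rule checks flag whether a given robotic arm can handle a timber-frame
# component, mirroring the "red highlights" in the talk's example figure.

def constructability_issues(component, robot):
    """Return a list of rule violations (empty list means no issues found)."""
    issues = []
    if component["weight_kg"] > robot["payload_kg"]:
        issues.append("payload exceeded")
    if component["place_height_m"] > robot["reach_m"]:
        issues.append("placement height beyond reach")
    if component["length_m"] > robot["max_part_length_m"]:
        issues.append("part too long for end effector")
    return issues

# Invented example: a timber wall stud checked against one robotic arm.
wall_stud = {"weight_kg": 12.0, "place_height_m": 2.4, "length_m": 2.4}
arm = {"payload_kg": 10.0, "reach_m": 2.6, "max_part_length_m": 3.0}
```

Running `constructability_issues(wall_stud, arm)` would flag the payload limit, the kind of limitation the red highlighting in the figure conveys.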
So we combine multiple operations into one, so you do not have to replace the end effector during the operation. You can place the material faster with it. And it's also not limited to, let's say, a specific type of software, right? We have an IFC-based platform that can guide the operation of this robotic device. It's lightweight, and we can customize it to be compatible with a variety of existing robotic arms, whichever model you want to use. And it was optimized for the framing tasks. We can use it both for on-site construction and off-site construction, for example in a prefabrication scenario, okay? So we have tested this in a small prototype, the hardware piece together with computer vision algorithms to guide its operation. At a small scale, you can see some of the small-scale wood pieces that we have used for the small test. And we also have tested the simulation, right? Together and in parallel with these actual operations. The results were very good. So what we are looking forward to doing next: we want to test it at real scale. We are looking to use industrial arms, that is, heavy industrial arms, and we want to test it in our construction lab. This is what it looks like. There's a two-ton crane in the center, as you can see. And our students are building a two-story steel structure and timber wood structure every semester there. Okay, that's all. So if you're interested in this technology, please contact Mr. Halliday. And our team is also participating in the National Science Foundation I-Corps this fall semester. As you can see here, Kennedy is one of our entrepreneurial co-leads, right? So if you're interested in our technology, please contact Mr. Halliday or Kennedy. Thank you so much. Up next is Professor Mohammad Jahanshahi. Good morning, everyone. I'm Mohammad Jahanshahi, an associate professor in civil engineering. The problem that we are dealing with is nuclear power plant reactors. The reactors are under the water.
You have to periodically inspect them to make sure there are no tiny cracks, to avoid any catastrophic events. In fact, as you can see in this picture, because the reactor is under the water, direct inspection is not possible. What they do these days, they use a robotic arm that collects video under the water. And then technicians go through the lengthy videos and come up with a report that tells, okay, where the cracks are and how bad they are. As you can imagine, this process is very time-consuming, subjective, tedious, and costly. Furthermore, as I said, the most predominant damage or defect that they are looking for is tiny cracks on metallic surfaces under the water. As you can see in these samples, the cracks are very, very tiny. The contrast is very low. It's very hard, even for the human eye, to distinguish between cracks and weld, for instance. In addition, you might have very complex backgrounds. For instance, you might have weld, you might have grind marks, you might have scratches, which make it even harder, if you compare it to other surfaces like concrete or pavement. So the solution we are proposing, and have been working on, is using artificial intelligence. To this end, we have developed a software where you can open the inspection videos, and using advanced deep learning techniques, the software goes through all the frames and comes up with a probabilistic report, like the examples that you can see here. It tells you, in the very long video sequences that you have, where the cracks are, what the thickness of the crack is, what the length of the crack is, and it can even automatically provide a report. So for the next round of the inspection, you can generate the same report and do the comparison to see how bad the condition of the nuclear power reactor is.
I would like to just, without going into details, conceptually explain how this technology is better than the other existing work in this area, because there has been a lot of work on crack detection using computer vision. But the point is that researchers mainly focus on processing one single image. Because we have inspection videos, as you can see here, I have extracted some frames from the video, and I can show you the tiny crack. I tried to highlight it; if I remove the highlight, you can see it's very hard to see. And the thing is that if you run the AI algorithm or machine learning algorithm, you can come up with the bounding boxes. In this case, as you can see, in some of the frames the machine was able to detect the crack, where you have the red bounding box. Now you can see the enlarged view of it. The point is, though, in some frames you may detect the crack, and in some frames you may not. There might be some false positives. How do you make a decision? How do you fuse this information to get a better prediction? This is what makes our system different. Basically, this is inspired by the human brain, because when you do the inspection, you may look at the crack from different angles. You may see the crack from one angle and not see it from another. You can realize that it is a scratch from one angle and make a decision about that. With that in mind, I would like to show you some examples here. The blue lines are the ground truth, showing that there is a crack inside the box. The red one is the result of the software that automatically processes the video and tells you there is a crack there. The yellow boxes are the enlarged views. The scales are given. For instance, in this case, you can see the crack has been detected at the weld crown.
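The multi-frame fusion idea described above, trusting no single frame but combining evidence for the same region across the video, can be sketched with a simple fusion rule. This is a hedged illustration, not the authors' exact method: the log-odds sum, the probabilities, and the variable names are all assumptions, showing only how consistent weak detections can outweigh a one-frame false positive.

```python
import math

# Hypothetical sketch of fusing per-frame crack-detection confidences for
# one tracked region across a video, using a naive Bayes log-odds sum.
# The authors' actual fusion scheme is not specified in the talk.

def fuse_confidences(per_frame_probs, prior=0.5):
    """Fuse per-frame crack probabilities into one posterior via log-odds."""
    log_odds = math.log(prior / (1 - prior))
    for p in per_frame_probs:
        p = min(max(p, 1e-6), 1 - 1e-6)  # clamp for numerical safety
        log_odds += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))

# Invented data: a crack seen weakly but consistently across frames versus
# a single strong spurious detection followed by nothing.
track_a = [0.6, 0.7, 0.55, 0.65]   # weak but consistent evidence
track_b = [0.9, 0.2, 0.1, 0.15]    # one strong frame, then nothing
```

Under this rule, `track_a` fuses to a high crack probability while `track_b` fuses to a low one, which is the intuition the speaker gives for viewing a candidate crack "from different angles."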
Now, if you compare the performance of this technology with the state-of-the-art texture analysis algorithm for crack detection: if you want to have 0.1 false positives per frame, texture analysis will give you about a 62% hit rate, whereas this technology gives you about 98.6%, which is about a 32% improvement. So if you compare this with the manual approach, it's going to be faster, more accurate, and inexpensive. Now, when we were developing this technology, we were approached by several high-profile companies in the nuclear industry, and they have signed non-disclosure agreements with us. In particular, Westinghouse is currently evaluating this software in Europe. And basically, as you can see, the nuclear power plant industry is growing; there's going to be more need for this. Interestingly, at some point a company called GEA, a multinational food-industry company headquartered in Germany, contacted us and said, you know, we have silos, metallic surfaces, three to four meters in diameter, very tall. We cannot afford to have tiny cracks, because the growth of bacteria will ruin the product, for instance if it is milk. So what they do, they fly a UAV inside the silo and collect a huge amount of data, but the challenge for them is to go through these lengthy videos and identify where the cracks are and look at them. So they are also evaluating this technology, alongside Westinghouse, and they have shown keen interest in potentially licensing this technology after the period of evaluation. But as you know, cracks are everywhere. Pretty much they are the first indication of any type of failure. So we have been discussing this with many other companies in construction, roads, wind turbines, pipelines. And again, the big picture is the infrastructure that we have: even though the overall grade is C minus, you can see for many types of infrastructure we have D or D plus. As I said, cracks are everywhere, so this can be extended to other market sectors.
So for market size, the total inspection market is about $12 billion. If we focus on nuclear power plants, it's going to be $3.6 billion. And if we go first after the crack inspection of nuclear power plant reactors, it's going to be $15 million. So our plan is to enhance the system that we have based on the feedback from Westinghouse and GEA, and hopefully, in about a year, license it to those two companies and then move on to the other sectors, like pipelines, the oil and gas industry, construction, and so forth. We have one published US patent, five US and international patent applications, and one pending copyright. The team consists of myself and two PhD students. We have two former students who did a lot of work on this technology; one of them is now at Amazon, the other at Microsoft. Thank you for your attention. Excuse me. Up next is Professor Somali Chaterji. Okay, so I will be telling you about our project on the semi-autonomous deployment of drone swarms. So if you see drones swarming around campus, that's us. Okay, so this is about drone-based surveillance. Unmanned aerial vehicles are promising surveillance instruments, and they have onboard cameras and sensors. What we want to do is monitor the events on the ground that are not periodic or predictable, and with high precision, so with minimum false positives. And we need to design algorithms for these drone swarms that can jointly optimize the detection rate for the events and the flying time. The typical flying time of these DJI drones is pretty low; it's about 30 minutes. So if we do this optimization keeping an eye on the energy consumption of the drones, then we can basically increase the flying time as well, while improving the detection rate. So there you see our drone swarm; that is the schematic for our drone swarm.
Our contribution is that every drone in the swarm is controlled by a control program, and that's where our algorithm lies, and this is patent-pending. It eliminates the costly process of training drone pilots. There are two primitives that we use in our algorithm. Our algorithm is August, which is this semi-automatic controller, and our components are the drone-zoom component, which handles the ascent and descent, automated by our algorithm, and the drone-cycle component, which determines the circular orbit. Now, the way we do this, look at the pipeline at the bottom: we have data collection and inference, so we have drones that are flying at different heights. Images of the events are captured at these different heights, and then we have a deep neural network that is trained on these UAV images, and then we generate precision-recall curves over these different heights. Now coming to this, we have the covariance matrix for event detection from different heights, and based on that, our August controller, remember I told you August has two components, the drone zoom and the drone cycle, so our August controller comes up with the optimal deployment configuration of the different drones in the swarm to get maximum detection rates in an energy-aware manner. And then finally, that is our system setup. You have the drone, the drone is interacting with the ground control system, which is essentially the controller, and the controller is then interacting with our mobile GPU. So there is a Wi-Fi connection there, and this is a cabled connection. And this controller, what is it doing? It has the August controller, which is dispatching the commands to the ground control system that uses Wi-Fi to control the drones, and it's also doing the live mobile video object detection. So our technical contributions: what we try to do is increase the coverage of the drones, right?
We want to increase the coverage using the user-defined number of drones that are provided, so the number of drones in the swarm is what is provided by the user, and the user also comes up with the different requirements in terms of lag time, the delay time: for a point that is visited by multiple drones, what is the interval between two visits to that point? So the swarm utility is a metric that we come up with to capture the effectiveness of the drone swarm, and our work achieves the maximum swarm utility. Now, the swarm utility is a function of the field of view, the field of view of the cameras on board the drones, and what we want to do, in order to increase the swarm utility, which is what the optimization function is solving to increase, is decrease the overlap of the fields of view, right? So these are some of the parameters that our algorithm takes into account to optimize the swarm utility, in this case maximize the swarm utility. So this is our typical pipeline: like I said, you have multiple drones, wirelessly connected via Wi-Fi to the ground control system, and now we have an RTMP server, real-time messaging protocol, and this is ingesting all the RTMP streams from all these drones, and then we have object detection, so our video object detection pipeline is pulling these RTMP streams and performing live object detection. And you can actually see this is on campus, this is using our protocol: we have multiple cars, and with very high detection, we have prediction probabilities above 0.9 for the most part. We are able to do this on the fly using our protocol.
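The swarm utility described above, rewarding coverage from the drones' camera footprints while penalizing overlapping fields of view, can be approximated with a toy grid model. This is an illustrative assumption, not the authors' objective function: the footprint geometry, grid resolution, and overlap penalty are invented to show why separating drones raises this kind of utility.

```python
import math

# Illustrative sketch of a "swarm utility": each drone's camera covers a
# circular ground footprint whose radius grows with altitude; utility here
# is the fraction of a surveilled grid covered by at least one drone minus
# a penalty for redundant overlap. All constants are assumptions.

def footprint_radius(altitude_m, half_angle_deg=30.0):
    """Ground footprint radius for a downward camera with the given half-angle."""
    return altitude_m * math.tan(math.radians(half_angle_deg))

def swarm_utility(drones, area=(100, 100), cell=5, overlap_penalty=0.5):
    """Approximate coverage-minus-overlap over a discretized ground area."""
    covered = redundant = cells = 0
    for gx in range(0, area[0], cell):
        for gy in range(0, area[1], cell):
            cells += 1
            hits = sum(
                1 for (x, y, h) in drones
                if math.hypot(gx + cell / 2 - x, gy + cell / 2 - y)
                <= footprint_radius(h)
            )
            covered += hits >= 1
            redundant += max(hits - 1, 0)
    return covered / cells - overlap_penalty * redundant / cells

spread = [(25, 25, 40), (75, 75, 40)]    # well-separated drones
stacked = [(50, 50, 40), (52, 50, 40)]   # nearly co-located drones
```

With the same two drones and altitudes, the spread-out deployment scores higher than the stacked one, capturing the talk's point about decreasing field-of-view overlap.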
So this is one of the components, just to show you one of the primitives that is being instantiated by August, which is our controller: it's the drone zoom. If you have multiple drones, it tells you how the automated descent is going to take place so that you basically have the precision above a user-defined precision. Remember, you don't always have 100% precision, so if the user gives you the required precision, we make sure our optimization algorithm always has that minimum precision in mind when it's giving you the configuration of the drones to deploy. Next, we're using a circular mobility model, and we have the ability to control the delay time between visits at a particular point in the surveilled area. If you look here, you see static deployment versus dynamic deployment, so on the top you see static versus dynamic; the dynamic deployment is 3x. This is our surveilled area, and in this specific case we have a lag time of 15 seconds. We also did experiments with a lag time of 10 seconds, and we can basically control this delay parameter; this is a user-defined delay parameter. And we are the state of the art: we improve 150% in the detection rate for practical applications. Consider any application you want to think of where events are hard to predict, for example fire detection, adversarial or battlefield applications, et cetera. And if we have specific applications where the sparsity or the sporadicity of the events is even lower, then we can go up to over 200%. We use off-the-shelf object detection to get these high accuracies, to run this optimal configuration for the individual drones in this drone swarm. And having said that, we're using off-the-shelf, but we also have our own patent-pending, thanks to Matt and Andrew at OTC: we also have LiteReconfig, which does adaptive object detection. So what it can do is, based on the complexity of the videos that you see under specific conditions and based on
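The DroneZoom behavior just described, descend only as far as needed so that detection precision stays above the user-defined floor, can be sketched as a simple lookup policy. The per-altitude precision table and the pick-the-highest rule below are invented for illustration; the actual system derives these trade-offs from precision-recall curves measured at different heights.

```python
# Hypothetical sketch of the DroneZoom altitude choice: given per-altitude
# detection precision (invented numbers standing in for measured
# precision-recall data), fly at the highest altitude (widest coverage)
# whose precision still meets the user-specified floor.

PRECISION_BY_ALTITUDE = {10: 0.97, 20: 0.93, 30: 0.88, 40: 0.80}

def choose_altitude(min_precision):
    """Highest altitude (in meters) whose precision meets the floor."""
    ok = [alt for alt, p in PRECISION_BY_ALTITUDE.items() if p >= min_precision]
    if not ok:
        raise ValueError("no altitude satisfies the precision floor")
    return max(ok)  # fly as high (as wide a view) as precision allows
```

For example, a user floor of 0.90 would keep the drone at 20 m, while relaxing the floor to 0.85 lets it climb to 30 m and cover more ground, the same descend-to-verify trade-off shown in the demo.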
the resource contention. So remember, these mobile platforms have multiple things happening at the same time, right? At the same time you're looking at your phone, you have Siri on your phone. So depending on how much resource contention there is on these mobile object detection platforms, we adapt in a cost-aware and content-aware manner to the resources available. So we have these adaptive video object detection protocols, which we are also running on the phones in the drone swarm. So with that, I actually have a demo to show you that this works. Just go again. In our surveillance system, the drone is deployed in a parking lot and uses its video feed and an object detector to observe cars in the area. To reduce false positives, as soon as the drone observes a car with a precision below a threshold, it descends to a lower altitude to verify whether it is a true positive, and the drone observes the car from the lower altitude. After completing the observation, it automatically ascends back to its original position. And that's our team. So now we're going to shift to a couple of our Purdue startup companies. The first is ISEN. Unfortunately, they're unavailable in person, so they have recorded a video for us that I'm going to play now. ISEN is a startup based on new sensing technology. A one-line summary of ISEN's work: we discover currents. We non-invasively measure electrical currents with very high accuracy and utilize the measured currents to discover something useful. ISEN's technology is two-fold. The first part is an accurate and non-invasive current sensor. It is a clamp-style sensor, so we don't need to power down electrical systems to install the sensor, which is an important feature that others do not provide. It supports full DC and AC measurement and has a very wide measurement range. The system is fully integrated from the sensor to the cloud.
So after the installation, the user just needs to go to the website to check and monitor the status of their system. Because of this ultra-high accuracy, where others see noise, we see currents. And by analyzing the currents and learning from the currents, we can not just monitor the status of your system, but also diagnose and predict the danger of failures. There are four important markets that ISEN is focused on: the power grid, electric vehicles, telecom, and smart building and battery. In the power grid sector, ISEN's products can monitor and diagnose the conditions of devices such as lightning arresters, transformers, generators and motors, and power transmission lines. In the EV sector, ISEN's products can monitor and diagnose the current conditions of electrical engines, regenerative braking systems, and charging modules, and can be used for EV testing in particular. In the telecom sector, ISEN's products can monitor and diagnose the current conditions of telecom modules. We can also use it to optimize the power efficiency of cloud servers. In the smart building and battery sector, ISEN's product can monitor and diagnose the current conditions of HVAC, chillers, elevators, and escalators. Among the four market segments, ISEN has reached out to a few potential customers in the power grid sector and has received many positive responses, particularly on the four specific products shown on this slide. A lightning arrester is a device that discharges electrical charges to the ground when lightning strikes power stations. Each strike gradually damages the device, so companies regularly monitor and replace the damaged lightning arresters. The problem companies have is that they don't know when the right time to replace them is. ISEN's product can predict the right time for the replacement from measured results, which can save cost and avoid catastrophic failures. Power generators are very expensive.
Currently, the companies monitor their status on a regular basis by stopping, disassembling, and inspecting each component, which costs a lot. And even so, they sometimes miss defects. ISEN's product can infer the type of defects developing in the generator by analyzing the current data, without stopping and disassembling it. Transformers age over time, and their lifetime can be cut very short. The companies need to replace them before failures. However, finding the right time for the replacement is not trivial; that is why we occasionally see exploding transformers. The companies want to know when the right time to replace them is, and ISEN's product can monitor and analyze the transformers, predict the right time for the replacement, and save cost. Power transmission cables are damaged over time by thermal processes and by animals, such as mice and insects. Because most cables are buried underground, the damage is not visible in most cases. ISEN's product can tell the defects and potential failures by monitoring and analyzing the leakage current. ISEN is currently providing consulting services to a utility company in South Korea, signed as a 10K contract for lightning arresters and a 20K contract for generator motors. Given the promising outcomes, the company plans to give a 100K proof-of-concept contract to ISEN, and the negotiation is in its final stage. Because of the very positive responses from the industry that resulted in the consulting and proof-of-concept contracts, ISEN's initial focus has been on the smart grid, and we will start to generate initial revenue in the smart grid first. Next is POCO, a data-analytics company providing services to many telecom companies and a few EV companies, which can take advantage of a strategic partnership with ISEN.
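The replacement-timing idea running through the arrester, transformer, and cable examples, watch a slowly degrading current signature and predict when it will cross a failure limit, can be sketched with a simple trend extrapolation. This is a generic illustration under stated assumptions: the linear fit, the 2.0 mA threshold, and the readings are all invented; ISEN's actual diagnostic models are not described in the talk.

```python
# Hedged sketch: fit a least-squares line to periodic leakage-current
# readings and extrapolate when the trend crosses a failure threshold.
# Data, units, and the threshold are invented for illustration.

def days_until_threshold(days, readings_mA, threshold_mA):
    """Day at which the fitted linear trend reaches the threshold, or None."""
    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(readings_mA) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(days, readings_mA)) \
        / sum((x - mean_x) ** 2 for x in days)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # no degradation trend to extrapolate
    return (threshold_mA - intercept) / slope

# Monthly readings trending upward: predict the day the 2.0 mA limit is hit.
t = days_until_threshold([0, 30, 60, 90], [1.0, 1.2, 1.4, 1.6], 2.0)
```

With these invented readings the trend reaches the threshold around day 150, the kind of "right time for the replacement" estimate the video describes.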
And we are happy to announce that POCO and ISEN have agreed on a strategic partnership that will allow ISEN to utilize POCO's business development, marketing, and sales power, and a total of 700K investment. ISEN will prepare its business in the telecom and EV sectors through the partnership. So if you have a system that uses electrical parts and you need intelligent diagnosis of your system, we are here for you. Please contact us. And if you are interested in a strategic partnership, co-development, or investment, please contact us. Thank you very much. And up next is the CEO of WaveLogix, Jennifer Roots. So WaveLogix, at a high level, is interested in making infrastructure smart so that we can improve its integrity and its safety, and also the performance and efficiency of those who build it. Specifically, what we've developed is, actually, maybe I'll go back so you can see the picture of the device. Specifically, what we've developed is an IoT sensor that can collect a sample of freshly poured concrete and determine the strength in real time, in place, in the actual slab of interest. And why is this important? As many of you in the room probably know better than me, we need concrete to hit certain strengths in order for it to function as intended. We don't want to be fixing potholes every six months in roadways. We certainly don't want catastrophic events: collapsed pedestrian bridges or other bridges or multi-story buildings, which can happen if concrete doesn't meet required strengths before we allow traffic on a structure, or if we remove forms from one level and start building a second level before the first level is strong enough. So it's really important that we know the strength of the concrete. The problem is that, currently, the industry wants reliable, real-time, in-place concrete strength information, but today we don't have great solutions for that.
Pretty much every current method for testing concrete strength involves preparing samples of concrete that we then destroy to test strength. So we're not getting in-place data. We're getting data based on much smaller structures that we then use to estimate the strength of the structure itself. And this is a cumbersome, time-consuming, often error-prone process with a lot of waste involved. There are two primary methods currently relied upon for testing concrete strength. One is destructive testing. Again, when we're pouring any structure, say a highway or a bridge, at the same time, to the side of that, we will be pouring dozens upon dozens of cylinders and beams. These might be 6 by 12 inches or 4 by 8 inches, much smaller than the structure we're building. And because they're so much smaller, they will cure in a different way than the structure we're building. They don't generate the same level of heat. They don't tend to hit the same level of strength. And so what happens often is we'll get a false negative. We might get a low break on a cylinder or a beam. And then the contractor may spend many, many hours trying to chase down: why do we have this low break? Is there really something wrong with my structure? Or is it the cylinder that was prepared? There are a number of quality control issues that arise with cylinder preparation, whether it's preparing it, curing it, transporting it, maintaining it, and then ultimately breaking it. There are lots of steps in the process that can go wrong. And so we very often do get a false negative, or a low break. So this is very labor-intensive and very expensive, with a lot of quality control labor involved in the process. And in addition, there's a significant amount of material waste. Imagine all of these samples of concrete being prepared; then they have to go someplace, either to a landfill or, if we're lucky, maybe they get recycled and used as a crushed-up building material someplace else.
But it's a lot of waste. And in addition, because the industry knows that cylinder samples tend to break lower than structures, the industry as a whole has over time begun to over-design its structures. We add more cement than is necessary so that we can be as sure as possible of hitting the strengths that are required. Because you have to hit these strengths not only for safety and quality, but because this is how our contractors get paid. They will submit to the payer on the project proof that they've hit the strength required by the architect's specifications. And if they can't deliver that, they don't get a check. So as you can see, this is really important. The other reason destructive testing is a challenge is that it's not real time. We might break these at three days, seven days, 14 days, 28 days. But we, as contractors, want to know much more quickly: what is our strength? When can I open a road? I want to open in a few hours so as not to frustrate taxpayers. But I don't want to open too soon, because then we may destroy the structure and have to remake it, which also frustrates taxpayers. So not being able to have real-time information from concrete breaks is a real challenge. The other method primarily used today is the maturity method. And this is similar to our product, which I'm getting ready to describe in more detail, in that it's a sensor, and they're deployed very similarly to our product. You place the sensor in the freshly poured concrete and you get information back. The problem with maturity is that it too relies on concrete breaks. In order to predict concrete strength using the maturity method, you have to have prepared, in advance, samples using the concrete mix you intend to use in your project, then break those and build a calibration curve, a maturity curve. And then you use that to predict the strength of the concrete using the maturity sensor.
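The maturity calculation the speaker is referring to is standardized (ASTM C1074). As an illustrative sketch, not taken from the talk, the common Nurse-Saul index simply accumulates concrete temperature above a datum temperature over time; the lab-built calibration curve then maps that index to strength:

```python
def nurse_saul_maturity(temps_c, dt_hours=1.0, datum_c=-10.0):
    """Nurse-Saul maturity index in degC-hours.

    Sums (T - datum) * dt over each measurement interval, counting only
    intervals where the concrete temperature is above the datum temperature
    (a common datum is -10 degC). `temps_c` is a list of temperature
    readings in degC taken every `dt_hours` hours.
    """
    return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)
```

A project team would then look up the computed index on its pre-built maturity curve to read off an estimated strength in PSI. That lookup curve is exactly the mix-design-specific artifact the speaker says has to be rebuilt, at a cost of time and money, whenever the mix changes.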
The maturity sensor relies only on the temperature of the concrete and the time that has passed from the moment it was poured. And through that, we can predict the strength of the concrete over time. But again, although this gives us real-time data, which is great because what we want to do is accelerate construction schedules, the challenge is that we're still having to break concrete samples to get there. With our solution, our REBEL sensor, again, it's an IoT sensor. It's electrical-impedance based. We embed a sensor in the freshly poured concrete, and we don't have to have a maturity curve. This is not mix-design dependent. With maturity, once I build a curve, if I change my mix design, which happens all the time in projects, I need to build a new curve. These usually take at least a week and several thousand dollars to have developed. Here, there's no maturity curve required whatsoever. We're not mix-design dependent. We are a direct, in-place measurement of concrete strength. We're not measuring a small sample; it's the actual slab. We give real-time data. The user can be sitting at home on his couch at night looking at where his strength is today. They'll see an actual number in PSI showing the strength they've hit as of that moment in time, and they can then say, okay, I need my guys at six a.m. on the I-70 bridge because we're ready for the next step. Or if it's not ready, they can send them someplace else. So you can imagine the efficiencies that are created for contractors and engineers. Also, there's a longer-term monitoring capability with our product. With breaks, you can only monitor for as long as you've got a cylinder to break on a particular day. Similarly with maturity, that's really only good for about three days, because the internal temperature of concrete stops changing after that point. So I can only tell you for about three days what my strength growth is with that method.
With our method, as long as the sensor's embedded, which is gonna be forever, and it's plugged into our data logger, which you can see in this photo here, you can collect data as long as you want. This is a rechargeable data logger. So again, our solution relies on electrical impedance. It's basically a cup, a reservoir that captures freshly poured concrete, and at the center of that cup is a PZT, a piezoelectric sensor, that provides frequency information relating to the concrete. So we capture the modulus, the elastic modulus value, of the concrete itself, and that is transferred through the cloud to our server, where we have a proprietary algorithm that converts the modulus to PSI, which is what contractors in the industry want to know. This information is then transferred back automatically to whatever device they're using, a computer, mobile phone, whatever, and they'll get a value in PSI. Here I've got a very short example of how this works. It's like a small town with one stop sign: you've got to look really quick, because it goes in about five seconds, if I can get it to play. Matt, am I on? Oh, there it goes. Okay, so basically the reservoir I described, the sensor itself, you just lay it in the groundwork. I'll maybe play it a couple of times since it's only a few seconds. And you literally just let the concrete fall on top of it. You don't have to secure it. If you're doing this on a bridge, we would use some zip ties or something to secure it to the rebar. But in groundwork, like the pavement project we were on here in Fort Wayne, the concrete literally just dumps on top; the sensor captures its sample of the concrete and can then take its measurement. Okay, so progress to date. This project started through a commissioning by the Indiana Department of Transportation. They reached out to Dr.
Lu, who's in the audience today, and her lab to see if she could help them solve this problem of really wanting to open traffic sooner while not destroying the structure they've just spent a whole lot of money to build. So that was in 2017. Since that time, there's been a significant amount of lab testing, in-field testing, prototyping, re-prototyping, re-prototyping. We've basically gone from an impedance analyzer about the size of a microwave down to something maybe the size of a garage door opener, and eventually we made it a little bigger because we didn't want these things to get lost on construction sites or destroyed too easily. And now we have officially spun out of the Lyles School of Civil Engineering and licensed the technology from Purdue. In 2021, just last year, we were named an ASCE game changer. They select a number of innovations every year, and we were one selected; very proud of that. And now, this year, we've transferred our prototype to an actual small-batch manufacturing run, and the products coming off that manufacturing line will be used in a beta release. We've got seven DOT projects scheduled across the country where we're going to be conducting testing with customers. They're gonna be actually using the device themselves and giving us feedback so that we can make any refinements needed for a planned commercial release in 2023. This is our fantastic team. Again, Dr. Lu, and Joe Shutterly, who's one of our early investors and just an expert in concrete finishing. You won't meet somebody that understands concrete better than this guy, so we're super lucky to have him on our team. And some fantastic engineers who are students at Purdue. Jihao is a PhD student. Henry is still in his undergrad program in computer engineering. Andy Aldeges just recently, well, not so recently, graduated from Purdue undergrad and then got his master's from the University of Pittsburgh. We have a CFO on the team who's amazing.
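The modulus-to-PSI conversion described a moment ago runs on a proprietary server-side algorithm, so the exact mapping isn't public. As a hedged illustration of how an elastic-modulus measurement can relate to compressive strength, one can invert the standard ACI 318 empirical relation Ec = 57,000 * sqrt(f'c) for normal-weight concrete (both in psi). This is an assumption for illustration only, not WaveLogix's actual method:

```python
def strength_from_modulus_psi(elastic_modulus_psi):
    """Estimate compressive strength f'c (psi) from elastic modulus Ec (psi).

    Inverts the ACI 318 empirical relation Ec = 57000 * sqrt(f'c) for
    normal-weight concrete. Illustrative only: the actual WaveLogix
    conversion algorithm is proprietary.
    """
    return (elastic_modulus_psi / 57000.0) ** 2
```

For instance, a measured modulus around 3.6 million psi would correspond to a compressive strength of roughly 4,000 psi under this relation, the kind of single PSI number the contractor sees on their phone.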
And we've really generated a lot of traction and support in a short amount of time through the great help of Purdue, INDOT, and local concrete providers and producers, great companies like RL McCoy and others who are willing to put us on their projects and give us a chance to develop and refine ourselves. So thank you for your time. And the last presentation is gonna be from James Goppert. He's gonna give a quick overview of the drone show, and then we'll proceed out there when he releases us for the actual demonstration. All right, I'm very happy to be here. I'm Dr. James Goppert from AeroAstro, and I'm also here today talking about PURT with Professor Yung-Hsiang Lu from ECE. This is kind of a unique facility that was just built in the last couple of years, and I wanna talk to you today about how it can be leveraged for smart cities. And we're actually gonna see a live demonstration of the technology, of kind of our ground truth that we use to do all these really cool things with drones, outside. And to start us off, the PURT mission statement: the Purdue UAS Research and Test Facility is to provide a world-class indoor motion-capture environment for unmanned aerial systems research that attracts the brightest minds in the field and fosters autonomy education. So why are we so cool? We're the world's largest indoor motion-capture facility, with 20,000 square feet and a 30-foot ceiling. We basically retrofitted a 1960s aircraft hangar with a million dollars of motion-capture cameras. And these enable mixed reality, sensor emulation, and real-time control feedback, which is what you're gonna see out here. The drones that are gonna be flying are getting a message from the motion-capture system that we set up in this building at about 10 frames per second. And we also use this for ground truth. So when we're doing all of our cool UAS development, we need to know what the right answer is.
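Comparing an onboard estimate against that motion-capture "right answer" typically comes down to a trajectory-error metric. As a minimal sketch (illustrative, not PURT's actual tooling), root-mean-square position error between two time-aligned trajectories lets you score algorithm A against algorithm B:

```python
import math

def trajectory_rmse(estimated, ground_truth):
    """Root-mean-square position error between an onboard pose estimate
    and motion-capture ground truth, sampled at matching timestamps.
    Each trajectory is a list of (x, y, z) position tuples."""
    assert len(estimated) == len(ground_truth), "trajectories must be time-aligned"
    # Squared Euclidean distance at each matched timestamp.
    sq_errors = [
        sum((e - g) ** 2 for e, g in zip(est, gt))
        for est, gt in zip(estimated, ground_truth)
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```

Running this on two candidate vision algorithms' estimated tracks against the same mocap trajectory gives a single number per algorithm, and the lower RMSE indicates which one drifts off track less.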
And this provides us with a way of evaluating the vision algorithms; we're trying to make drones more like humans. So when they're using their eyeballs and they drift off track, looking at the landmarks in the room, we can compare algorithm A to algorithm B and say why one algorithm is better. As part of this, we're working on urban air mobility with NASA, looking at wind disturbances when you're trying to transport people in an urban area, up in different spots. How can you do this reliably with wind blowing you off track? By mathematically computing bounds. And as you mathematically compute these bounds, we wanna be able to apply that in the lab and verify that those bounds actually hold experimentally. Another research project we're doing is cybersecurity for an urban-area drone mesh network. You can imagine that in a disaster situation, the internet goes down, which would be horrible for everybody. These drones actually fly up, and this is actually in the UAE, where we're working with the research organization TII, and they go up and form an alternate internet. This alternate internet could of course be hacked by nefarious people, and we're trying to secure it and look at how we can best do that. And another cool application: our team just made it to the final round of the National Institute of Standards and Technology (NIST) UAS Triple Challenge. This challenge is specifically looking at lost-hiker search and rescue. So we've actually gone a couple miles south of here out into a forest, with the permission of some very nice people, and recorded students walking back and forth in the forest with a radiometric thermal camera that actually measures human body heat. And we want to basically plug this into a neural net, and our neural net detects the humans as it sees them. But there's a huge occlusion problem here, and that's something the team's dealing with right now before we go to Mississippi at the end of June.
And basically, to be able to send this to a neural net and detect that it's a human and not a tree, when you have all these trees and branches in the way, you have to look at it as a time lapse and kind of stitch all those images together. You guys can see this drone outside, and it's set up so you can go ahead and take a look. We're also doing collapsed-building search and rescue with the Air Force. We're working with the Society of Women Engineers on the annual Team Tech competition. And this year, if you remember the building collapse in Florida, we're flying a drone into a window, trying to map all human body heat and pet body heat into a 3D map that firefighters can hold up and look at and say, there's a human here, there's probably a pet here, and figure out how to go save them. So that would save a lot of human lives if something like this happened in the future. That drone is not here today, but we do have this drone: time-critical medication delivery. We're working with the School of Nursing and biomedical engineering. And this is kind of a cool application: if somebody has an opioid overdose in an urban or rural area, an ambulance has to get to them in 15 minutes or they're in trouble. So basically, PURT's vision statement: we wanna be hosting annual competitions, and we wanna be known as a world-class facility, accessible, sustainable, and synergistic. And I'm going to leave it to you guys to get outside and check out the Drone Swarm Light Show. And if you could, please avoid using Wi-Fi, because that's a real-time signal with the drones. So if they fly at you and you have your cell phone on, I'm blaming you. So, all right. Thank you. Yeah.