Okay, I think we can get started. So, welcome to Engineers for Exploration, the E4E summer research forum. I'm Ryan Kastner, one of the E4E directors along with Curt Schurgers here. We're excited to have you here today so we can show you some of the great progress that the projects have made over the summer.

If you're not familiar with E4E, we are a research program that develops technologies to help us better explore the world. The students that work in this program work closely with a number of scientific mentors from different fields, ranging from biology to ecology, oceanography, and archaeology. They work with these scientists to understand their challenges, and then they work in teams to develop technologies that make it more effective and efficient for the scientists to collect, analyze, and understand their field data. They often even go into the field with these scientists and deploy these technologies alongside them, collecting data for them, taking that data back, and trying to figure out what it all means. These technologies include drones, underwater depth cameras, acoustic audio recorders, and software-defined radios. You'll hear a lot more about these in a little bit.

We had 35 students participating in some form in this program this summer. Thirteen of those were funded by our National Science Foundation REU (Research Experiences for Undergraduates) site. That's a program we've been running — this is our ninth year now — and it allows those students to work full time: they get a paid fellowship, they get housing paid for, and they get to work full time on these projects. We also had a few fellowships through NSF S-STEM. These are Scholarships in Science, Technology, Engineering, and Mathematics, targeted at low-income and academically talented students — students that deserve an opportunity to do some great research. We had a number of students from across the country: a large number, of course, from UCSD, but we also had students from Howard University, The College of New Jersey, Olin, Macalester, Cal Poly San Luis Obispo, Cal State University East Bay, Allan Hancock College, and probably a few others that I'm missing there.

In addition to NSF, we'd like to thank a number of other people for funding the E4E program. It wouldn't be possible to run this program without support from several groups, largely from campus: the Computer Science and Engineering Department, the Electrical and Computer Engineering Department, the Halıcıoğlu Data Science Institute, and the Qualcomm Institute. We'd also like to thank our collaborators. We had a number of scientific collaborators from across different institutions: the Scripps Institution of Oceanography at UCSD, the San Diego Zoo, which has been a longtime collaborator of ours, and Tom Garrison from UT Austin in archaeology. We also thank Phil Bresnahan and Todd Martz from Scripps — you'll probably see a lot of their names in the videos here in a second.

So here's how this is going to work today. We have about eight to ten projects active in Engineers for Exploration at any given time, and I think we had about that many this summer. Each of those teams has prepared a video, and we're going to watch those videos. I encourage you to make this interactive, so feel free to type some questions into the chat if you'd like, or raise your hand after the video plays for each project. Each video is about four or five minutes.
We'll stop and answer questions, to allow the students to answer anything that you may have typed in or want to ask live, and I'll do my best to moderate. So it's my great pleasure again to have you here and to present all of this great work that the students have done this summer. Let's get started with the videos.

How many fish do you think are in the world's oceans? Millions? Billions? Try 3.5 trillion. And how many of those fish do we take out of the ocean every single year? 1.75 trillion fish, which is 50% of the global fish population. And this only accounts for fish caught legally. If we continue at this rate, our oceans will go from looking like this to something more like this. And a world that doesn't support fish doesn't support humans either. There are very few methods for effectively evaluating fish populations, and many of those methods are invasive and detrimental to fish health. Some current methods include by-hand measurements, the use of stereo cameras, or laser calipers. All of these fail to effectively and efficiently capture fish length. This has left scientists with a lack of data, making it difficult to study fish populations. UCSD's Engineers for Exploration has taken on this problem with the development of FishSense. With its novel and compact design, the FishSense platform contains state-of-the-art technologies to image and measure fish. By combining the Intel RealSense depth camera D455 with modern machine learning algorithms, our system is able to capture fish data, providing scientists insights about fish length and biomass significantly faster than hand-measurement methods.

My name is Maddie, and this summer I've been working to improve the performance of FishSense by upgrading the system hardware. We have decided to upgrade the system processor from a Raspberry Pi to the NVIDIA Jetson TX2. Upgrading the processor will allow us to process images at a higher frame rate and in real time. In order to upgrade the system, processor compatibility testing was completed. The TX2 processor was flashed and connected to Connect Tech's Orbitty carrier board, where it was then connected to the Intel RealSense depth camera. The RealSense SDK was downloaded and the C++ code was built and run. The system underwent various performance tests and a power analysis to confirm that the upgraded hardware is more robust. Functionality of the TX2 was investigated by accessing the TX2's GPIO pins and by SSH-ing into the TX2 without internet, which will make deployments run much more smoothly.

Hey guys, it's Raghav from the software side of FishSense, and as part of that team I worked on improving the model and integrating it into the algorithm. Improving the model meant decreasing the number of false positives, such as from rocks and debris, while also tightening each bounding box more closely around each individual fish. Here are some pictures of the old model at work, and here is the improved model at work. As you can see, there are still some discrepancies, but in general the boxes are more tightly bounded around each individual fish, and the number of false positives is greatly decreased. With an improved model, I then worked on incorporating the depth data to make the measurement fully automatic. The depth data is returned as a CSV, as seen here. Here's that same CSV colorized. And then what I do is take the points from the corners of each bounding box, and I just use the simple Pythagorean theorem — a squared plus b squared equals c squared — to find the length of the fish. So the algorithm has now spit out that the fish is about 0.35 meters, which is about one and a quarter feet.
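As a rough illustration of that measurement step, here is a minimal sketch (not the team's actual code) of how a bounding box plus a depth map could yield a length estimate. It assumes a pinhole camera model with known intrinsics and a depth map aligned to the RGB frame; all names and parameters are illustrative.

    import numpy as np

    def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
        """Back-project pixel (u, v) with depth in meters through a pinhole camera model."""
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        return np.array([x, y, depth_m])

    def fish_length_m(box, depth_map, intrinsics):
        """Estimate fish length as the 3D distance between opposite bounding-box corners."""
        (u1, v1), (u2, v2) = box                       # head-side and tail-side corners (pixels)
        p1 = pixel_to_3d(u1, v1, depth_map[v1, u1], *intrinsics)
        p2 = pixel_to_3d(u2, v2, depth_map[v2, u2], *intrinsics)
        return float(np.linalg.norm(p2 - p1))          # "Pythagorean" distance in meters

In practice the RealSense SDK can deproject pixels directly, but the idea is the same: two pixels plus their depths become two 3D points, and the fish length is the straight-line distance between them.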
Hi, my name is Ronan Wallace, and this summer I worked on the software for FishSense. With the help of our collaborators at the Scripps Institution of Oceanography, we were able to deploy our system at the Birch Aquarium, where we collected a vast amount of valuable fish data. As you can see in the video, a scuba diver is handling our device, capturing both RGB and depth data simultaneously, each second. With our camera, as mentioned previously, we are able to capture both RGB images and depth measurements. This allows us to detect fish, get their head-to-tail length, and calculate biomass. From these calculations, we are able to understand fish health and population a little bit better without harming the fish or their habitat. We submitted a paper that was accepted to the IEEE OES OCEANS conference for publication, detailing our device and how it works. And currently, we're writing another paper detailing our hardware and software upgrades using our newly integrated GPU, which will be submitted to the IEEE Embedded Systems Letters journal. Thank you so much for watching. If you have any other questions, feel free to reach out to us or visit e4e.ucsd.edu for more information. Thanks.

So I didn't see any questions, but this team was one of the teams that had several people local. This summer, kind of at the last minute, we got the ability to bring some people to campus — heavily tested, of course, and following all the protocols that were necessary. And this week — it was supposed to be last week — they got to go down to Mexico and go off the coast to a fish farm, a striped bass aquaculture farm. I think Ronan is here, and Ronan went on that trip. I don't want to say anything about what you did; I haven't even heard what they've done, so this is new to me as well.

So basically we took our device on a day trip down there and met with our collaborators in Mexico. They took us out to the island, Isla de Todos Santos, and we were able to tour their entire fish farm and understand their process, from growing the fish to the final processing. We were actually able to go to their R&D fish cages and deploy FishSense, and get a good idea of what works, what doesn't, other improvements we want to make, and the direction we want to take the project. So we have a lot of great data from that, and we're really excited to use it for further model training and improving our system.

Cool, very exciting stuff. Okay, are there any other questions for the FishSense team? Yes, I had a question — let me get my camera, sorry. Charles Kinzel, E4E 2019. Hey Charles, nice to see you. Dr. Kastner, Dr. Schurgers, good to see you guys. I noticed on the video that there was a screenshot of some fish with bounding boxes, and then there was a number. So your application is not only measuring the fish, obviously, but it's counting the number of fish that pass through the screen. Is there a part of your algorithm that accounts for occlusions?
So if two fish pass close together, is it keeping track of those numbered fish, or does it renumber them as it loses track of the fish? So I was in charge of the bounding boxes on the software side. It goes frame by frame and, for each frame, counts the number of bounding boxes that it sees. So it's not tracking them, because a new fish might appear out of the background, like coming towards us or something, and I can't account for that, so I just go frame by frame. It does struggle a little bit with occlusions, but then, a couple of frames later, the fish are separated again — that was the logic behind my reasoning. Got it. And if no one else has a question: I've heard of the Jetson Nano, and you mentioned the TX2 — it's an NVIDIA development kit, I'm assuming. How does that match up to the Nano, or maybe just, how many GPU cores does it have? The TX2 is definitely an upgrade from the Nano. It has a higher-speed processor, and it has double the GPU cores, so 256 as opposed to 128. Thank you guys, thank you. Great, thank you. I think we can move on now to the next project.

My name is Jacob Ayers. I'm the current project lead for the automated acoustic species identification team at the Engineers for Exploration lab at UC San Diego. Right now it's 1 a.m., and as a project lead and as a person, I had a bit of an epiphany. I realized that, being one of the lead programmers on my team this year, I am not going to be able to put quite the amount of time and effort into a video project like I did during REU 2020. I realized I'm going to shoot a different type of video this year. My goal is to show off what it's like to do research in the REU program, specifically in my lab, and hopefully show off, you know, what sort of technical contributions I make, what it's like leading a team, and what it's like to be an aspiring researcher. I'm going off script and just going to film myself on a daily basis and tie it all together at the end through the magic of editing. We'll see how this goes.

Today I am leading an expedition to the Scripps Coastal Reserve. We're going to fill this area with audio recorders that will record for two weeks. And so, man, being someone that grew up watching a lot of documentaries — Steve Irwin and David Attenborough documentaries — it's really cool, having grown up watching people go out into the field and listen to animals, to have an opportunity to be in those shoes.

Hello. Yesterday, Sean Perry and Mugen Blue, my two teammates, went out to the Scripps Coastal Reserve with me. We went around and took latitude and longitude coordinates of where we believe we can set up low-cost audio recorders to record bird songs. We're developing two pieces of software concurrently. The collaborators requested that we create an audio labeling system with a couple of different requirements: make it web based, make it so they label on a spectrogram, and see if we can implement some basic user tracking, like how long it takes them to annotate things. Ryan was saying you should really start thinking about how to combine the different stuff you've been working on into, you know, a coherent package. And I may have taken that too literally — I was like, well, I'll write a Python package. And so basically that led to what I call an automated audio labeling system and a manual audio labeling system.
And so Sean Perry, since I don't have the web development skills, has been primarily leading that. I've acted more as a kind of hands-off manager that looks at the final product and says, yeah, this is good, maybe make these tweaks — and then, how can I make this labeling system complement what I'm doing? My real hope is that by the end of summer, the automated audio labeling system, PyHa, and the manual audio labeling system will synchronize together well enough that once we have gone out, actually deployed the AudioMoths, and retrieved them two weeks later, we'll be able to parse through that data in an efficient manner.

Today is our first actual deployment day — not the first day on the coastal reserve, but the first time we're deploying AudioMoths as a team, which is super exciting. Here we have our eight AudioMoths all lined up, and we're going to go into the field. We took GPS coordinates, so now it's just a matter of using the same app to return to them.

So right now I am outside of Center Hall at UCSD. I chose this spot because just under two years ago, I came up to Curt Schurgers after his ECE 15 lecture and expressed my interest in Engineers for Exploration. I knew I wanted to get involved in climate change and conservation research, and I wanted to apply my skills in computer science. At that time, though, I had never made a machine learn or processed any signals. Just today I finished uploading over 300 hours of audio that contains an endangered species called the California Gnatcatcher. It's a great feeling to have walked in the footsteps of many of the conservationists that I looked up to. I hope that the tools we have developed over the summer — PyHa and Pyrenote — can be used for future conservation efforts and help make species such as the California Gnatcatcher be heard.

Fantastic. So I don't know if there are any questions; I'll lead off with one of my own. You just alluded to it in the video, but you just collected data from a bunch of AudioMoths — also a fairly local expedition, but a good one nevertheless. Do you want to talk a little bit about that and what your plans are for the next steps there? Yeah, so I think a lot of this was motivated by just general supply chain shortages. We had an opportunity to purchase these AudioMoth recorders that our collaborators have used, and we don't know how many opportunities we're going to get to purchase them in the near future. So we went ahead and bought some, and it was like, okay, we bought them, now we have to use them — what's the best local place to deploy them? That was actually the suggestion of Nathan, our staff engineer. He said we should probably check out the Scripps Coastal Reserve, so we had to go through this reservation process, but the reserve staff were super helpful in the entire process. And so we got 10 AudioMoth devices, and we set them out to record about one minute every 10 minutes for two weeks. As for next steps, it turns out that there is a group of ornithologists that tours the Scripps Coastal Reserve once every month, and I started talking to them about the work we're doing related to automatically segmenting bird vocalizations and classifying species.
And they seem interested in that, so the hope is that we can get them to label some of our audio data and test out the PyHa and Pyrenote systems that we have. Yeah, and if any of my teammates want to add any notes... No other questions, so I think we can head on to the next project.

On the picturesque Laikipia Plateau of Kenya, Dr. Shirley Strum has been studying several troops of baboons for nearly five decades. One of her aims is to better understand their collective decision making. With troop sizes varying from 20 to over 150 individuals, it can be hard to keep track of both group-level and individual-level movement trajectories, particularly since the baboons prefer rough terrain and large boulders as sleeping sites. To improve her logistical ability to track multiple animals, she reached out to the team at Engineers for Exploration, where we set to work using drone footage to get a bird's-eye view of the entire troop, unlocking information about baboon behavior that has not previously been available.

We process this footage using a custom algorithm to separate foreground baboons from their background. To do so, we start with a set of previous, or historical, frames. Using the current frame as a reference, we adjust the historical frames to match the reference frame. The adjusted frames are then combined using an intersection algorithm, and the intersections are unioned together, generating a single reference of what the background should look like. This reference background is compared against the current frame to extract the moving baboons. We can then convert each baboon into a single centroid using blob detection. The output centroids of the blob detection represent the results of our motion detector. This motion detector provides an effective basis for tracking baboons. We've explored other possible solutions, like applying deep-learning-based object detection, but these methods proved to be less effective than background-subtraction-based motion detection. This is largely because baboons are difficult to identify, even for humans, without picking them out through their movement across frames: baboons are very small relative to the resolution and field of view of the image, often taking up less than 1% of the frame, and they have a very similar color to the ground and brush.

Although the motion detector is a good starting point, it presents a number of problems when used on its own that need to be rectified before it is accurate enough to be usable by our research partners. These problems were the main focus of our work this summer. The first and most significant issue is that the motion detector will not identify baboons that are not moving, even for a small percentage of the time. The detector will also not pick up baboons that are occluded by trees or underbrush. Finally, there is still a small but notable amount of noise after the frames are transformed to overlap. In terms of usability, the algorithm's Python implementation is prohibitively slow and can take a few seconds to process each frame. Our main approach to solving the stationary-baboon, occlusion, and noise problems has been the use of a class of techniques collectively termed Bayesian filtering. Specifically, we implemented a particle filter and a Kalman-filter-based approach to baboon tracking.
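To make the motion-detection stage described above concrete, here is a minimal sketch of a registration-plus-background-subtraction detector. It is not the team's implementation: it substitutes a pixel-wise median of the registered history for the intersection/union step, and the thresholds and helper names are illustrative assumptions.

    import cv2
    import numpy as np

    def register_to(reference_gray, frame_gray):
        """Warp frame_gray into reference_gray's coordinates (ORB features + RANSAC homography)."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(frame_gray, None)
        k2, d2 = orb.detectAndCompute(reference_gray, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = reference_gray.shape
        return cv2.warpPerspective(frame_gray, H, (w, h))

    def detect_moving_blobs(current_gray, history_gray, diff_thresh=25, min_area=20):
        """Estimate a background from the registered history, then difference and blob-detect."""
        warped = [register_to(current_gray, f) for f in history_gray]
        background = np.median(np.stack(warped), axis=0).astype(np.uint8)  # stand-in for intersect/union
        diff = cv2.absdiff(current_gray, background)
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] >= min_area:                    # ignore tiny noise blobs
                centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centroids

The centroids it returns play the role of the per-frame detections that the filtering approaches described next would consume.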
Our particle filter works by comparing the expected positions and directions of our baboons to the bounding boxes that the background subtraction algorithm has identified. A certainty value is then calculated using intersection over union to characterize how well the predicted and actual positions of the baboons agree, at which point we discard particles that don't meet a specified probability threshold and repeat the process for each frame for the duration of the footage, causing the baboon particles to slowly converge onto the baboons' positions. This is a simple and direct approach, which makes it attractive. The particle filter helps us track baboons better than we could before, when stationary and occluded baboons would lose their tracks. But when trying to implement the particle filter, we found that the filter didn't adequately track baboons that had moved. This causes the filter to continuously add new baboons to the frame. To correct this, we intend to review the calculation of our certainty value to ensure that it more accurately identifies the same baboon from frame to frame.

Our alternate solution is a Kalman filter. This uses a simple baboon motion model to predict the movement of the baboons each frame. The smaller and further away a bounding box is from the predicted location of the baboon, the less we trust that bounding box. This is less computationally intensive than the particle filter but has many of the same benefits: it can track stationary baboons and reduce noise. In practice, our Kalman filter is very successful — it has no problem tracking stationary baboons, and it successfully rejects small amounts of noise from the background subtraction. The main drawback of the Kalman filter is that it requires the user to indicate the number and initial locations of the baboons. This is not an inherent limitation of the Kalman filter — we could use a heuristic to identify the initial positions of the baboons — but manual initialization does make the filter more robust and easier to implement. We think the small amount of human intervention is a reasonable trade-off for higher accuracy.

We explored a number of methods for denoising the input image, but most of them did not yield significantly different results. Our algorithm originally operated on a grayscale image, but we tried feeding in the red, green, and blue colour channels instead. We found that while the performance of the blue and green channels was similar to grayscale, the red channel produced more false positives. We tried using hue and saturation values as well, and we'd like to continue considering ways we can mix the blue and green channels to produce better results, as these were the best-performing colour channels. Because the original Python implementation of our background subtraction algorithm was quite slow, we re-implemented it in C++ and added GPU acceleration to get a more than 10-times speed-up on our Jetson Nano over the original Python implementation. Because of this performance increase, it's now possible to use it in the field without waiting significant amounts of time for it to process data. Our next steps involve adding a user-friendly interface so that the algorithm is readily accessible to our researchers. We'd also like to acquire more data to test our algorithm. Even though there is more work ahead of us, our progress this summer substantially improved the accuracy and runtime performance of our methods, and we believe we are much closer to the point where our work is ready to be deployed and used to help scientists in the field.
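For readers who want to see the shape of the Kalman-filter approach described above, here is a minimal constant-velocity sketch; it is not the team's code, and the state layout, noise values, and the distance-based down-weighting of far-away detections are illustrative assumptions.

    import numpy as np

    class BaboonKalman:
        """Constant-velocity Kalman filter for one baboon (state: x, y, vx, vy)."""
        def __init__(self, x, y, dt=1.0):
            self.x = np.array([x, y, 0.0, 0.0])
            self.P = np.eye(4) * 10.0
            self.F = np.array([[1, 0, dt, 0],
                               [0, 1, 0, dt],
                               [0, 0, 1, 0],
                               [0, 0, 0, 1]], float)   # constant-velocity motion model
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], float)   # we only measure position (a centroid)
            self.Q = np.eye(4) * 0.1                   # process noise
            self.R0 = np.eye(2) * 4.0                  # baseline measurement noise

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]

        def update(self, z, distance):
            # Trust detections less the further they are from the prediction.
            R = self.R0 * (1.0 + distance)
            y = np.asarray(z, float) - self.H @ self.x
            S = self.H @ self.P @ self.H.T + R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P

Each frame, every track is predicted forward; the nearest detection centroid (if one falls within a gate) is passed to update(), and a track with no nearby detection simply coasts on its prediction, which is what lets stationary or briefly occluded baboons keep their identity.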
Another great job. So I don't know if the baboon team knows this, but I think Shirley mentioned that she's retiring this year, or sort of semi-officially retiring and focusing more on the research. There is talk of National Geographic doing some sort of, I don't know, press or show on her career there in Kenya, and the work will definitely continue on. So she's hopeful, and excited, to have the team, or parts of the team, go to Kenya sometime next year. Hopefully that will happen and you get to not just take all those cool videos that they did, but actually go there and see the baboons and count them for yourself. Any questions for the baboon team?

Yes, I had one. You mentioned that tracking the baboons required a bit of human intervention, and that was that on the first frame you, I guess, marked the baboons that you wanted to track. Is that the first frame of any video being processed, or is it a relative first frame? Can you stop the video or something and mark them then, or is it just a one-shot kind of thing? Yeah — if you mean to ask whether it's possible to later correct it, absolutely. It's totally possible and very easy to make it so that the user can go in and correct the baboon position estimate, and then that will be taken into account in later frames. So that's possible, and definitely something that — you know, we haven't developed a user interface yet, but it would be part of that user interface. Thank you.

So in the past year or couple of years on this project, we had significant challenges, and it just took forever to run those videos. Those videos were at a really high resolution — we needed the high resolution because the baboons are so tiny. Any insight into what really made it almost real time now? That was a really big improvement that we had this summer, so can you talk a little bit more about how you did that? Yeah. The nice thing about the algorithm is that almost every step, up until really the very last step where you extract contours from the video, can be performed relatively easily on the GPU. One thing is that there are a lot of operations which are either masking or comparisons, and doing element-wise, effectively bitwise, operations on all the pixels is pretty easy on a GPU. The other thing is we can set it up so that we don't have to allocate new memory for each frame — it's a fair amount of memory — so we can actually do almost everything in place. The other thing is that this wasn't a design goal in the original Python implementation, but when re-implementing it from scratch, it was easy to make it so that each run of the pipeline on each frame could be totally separate, so we could run it in multiple threads. All of those together really contributed to a significant performance increase. I can add to that: the original Python implementation, while it used NumPy and its BLAS backends, was itself single-threaded and did not run any of the actual video processing in parallel. That said, BLAS is capable of running several of these major computations in parallel on multiple CPU threads, so we did get some benefit there.
So I think one of the biggest things that comes out of this is that we switched from using my 16-core CPU to running on the Jetson Nano with its 128 GPU cores, and hopefully we'll get an even better speed-up when we run on the GPU in my computer with 1,280 cores. That really expresses the increased throughput we've been able to unlock by using CUDA and C++, because we've been able to do things in parallel. Any other questions? And feel free to use the chat if you don't want to chime in — the chat is a very easy way to ask questions, so please use that at any time. All right, so let's continue on to the next project. Thank you.

Hi everyone. My name is Nathaniel Eastloss, and I'll be presenting the burrowing owl classification project I worked on this summer at Engineers for Exploration. A little bit about myself: I'm a senior at California State University East Bay studying computer science and statistics. So how did this project start? Engineers for Exploration's scientific collaborators at the San Diego Zoo wanted to better study the western burrowing owl, because the local population is rapidly declining and at risk of going extinct in San Diego County due to habitat loss. The plan they devised to monitor and study these owls was to set up camera traps near owl burrows around San Diego. To the right is an actual picture captured by a camera trap. The owls inhabit burrows that are mainly dug by ground squirrels, as well as prairie dogs and even tortoises.

So how are we contributing to these efforts? The problem the researchers encountered is that the camera traps work as designed, capturing an image at any sign of movement. However, this results in a large number of images that are not necessarily of interest to the researchers, since they don't have owls in them. Last year, E4E started working on a solution to this problem by developing a machine learning pipeline to automatically label the camera trap images. This summer I worked on a system with which the researchers could easily interact to have their data labeled. The project lead, Justin, wanted the system to be simple to use, with little input required of the user. Essentially, the system prompts the user to input the file path of the folder that contains their data. Each file is then fed through the machine learning pipeline, and the images that have an owl are organized into a new filtered-images folder along with a CSV of the prediction results for each image.

There are a few steps in the pipeline to detect the presence of an owl in an image. The first is that the folder of images is sent to a detection model that provides the coordinates of bounding boxes around objects of interest. In the second picture, we see what one of the bounding boxes looks like around the two owls. This step increases the accuracy of the owl classification model. The second step crops the image to create a sub-image to be analyzed by the owl classification model. The third step analyzes the sub-image passed to the classification model. If the cropped image is predicted to be an owl, then the entire image is classified as owl. If there are multiple cropped images for an image and at least one is predicted to be an owl, we predict owl for the overall image. Here, we have an example of the system's output. We see the main filtered-images directory on the left, which contains two sub-directories that I have tested the system with. The sub-directories are named according to the original folders processed by the system. Each sub-directory contains the images in which the model predicted the presence of an owl, along with a CSV of the prediction results for all images in that folder. On the right is an example of the prediction results for a folder: it contains the file name, whether or not there was an owl, and the minimum number of owls detected.
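A minimal sketch of how such a detect-crop-classify-and-organize pipeline could be wired together is shown below. It is illustrative only: detect_boxes() and crop_is_owl() are hypothetical placeholders for the detection model (for example, a MegaDetector-style detector) and the owl classifier, and the folder and column names are assumptions, not the project's actual conventions.

    import csv
    import shutil
    from pathlib import Path
    from PIL import Image

    def detect_boxes(image):
        """Placeholder for the detection model: returns (left, top, right, bottom) boxes.
        Here it just returns the whole frame; swap in the real detector."""
        return [(0, 0, image.width, image.height)]

    def crop_is_owl(crop):
        """Placeholder for the owl classifier; always answers 'no owl' here."""
        return False

    def filter_folder(in_dir, out_root="filtered_images"):
        in_dir = Path(in_dir)
        out_dir = Path(out_root) / in_dir.name          # sub-directory named after the input folder
        out_dir.mkdir(parents=True, exist_ok=True)
        rows = []
        for path in sorted(in_dir.glob("*.jpg")):
            image = Image.open(path)
            crops = [image.crop(box) for box in detect_boxes(image)]
            owl_hits = sum(crop_is_owl(c) for c in crops)
            if owl_hits > 0:                            # any owl crop -> whole image labeled "owl"
                shutil.copy2(path, out_dir / path.name)
            rows.append({"file": path.name, "owl": owl_hits > 0, "min_owls": owl_hits})
        with open(out_dir / "predictions.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["file", "owl", "min_owls"])
            writer.writeheader()
            writer.writerows(rows)

The aggregation rule mirrors the one described above: if at least one crop is classified as an owl, the whole image is labeled owl and copied into the filtered folder, alongside a CSV of per-image results.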
Overall, we were successful in creating a system that is easy to use and that will hopefully be of much help to our scientific collaborators. That is all for my presentation. Thank you so much, and a special thanks to Dr. Kastner, Dr. Schurgers, and my project lead, Justin. Thank you.

So, in terms of the challenges that you faced — Nathaniel is here, I hope, otherwise I'm just going to ask this to no one, huh? Oh, yes. Excellent. All right, good; I didn't check before I asked this question. So, what were the toughest challenges that you faced this summer doing this project? So, the pipeline involved a few different models that had been found. One of them was MegaDetector, which is a model developed by Microsoft for conservation efforts, and it serves much the same use as what we needed it for: it draws bounding boxes around objects of interest in an image. Getting that to work with the rest of the pipeline was kind of difficult, just because there are different processing steps, and also because of differences between the libraries they were using and the ones we were using. I was also really trying to focus on making the output for the researchers well organized, because I noticed that all the folders they had their data in were organized by location, and I think even by different camera traps, so I was very meticulous about that. Yeah, that's really important, and often overlooked in computer science: making sure that things are easy to understand, well organized, and well documented. All right, thanks, Nathaniel. Let's go on to the next project.

The ocean is a magnificent place, home to a majority of all life on Earth and covering over 70% of the Earth's surface. There's always something to learn from the sea. The definitions of coastline vary, but it is agreed that hundreds of thousands of miles of coastline exist on the Earth. Scientists believe that if all of the wave energy along the coastlines of the world were harnessed annually, it could satisfy the entire world's electricity needs for that time period. Understanding the ocean could lead to scientific developments in many different fields. However, there's a lack of resources available to provide measurements near the coastline. Smartfin is a solution to that issue. Smartfin is a longboard surfboard fin that is capable of gathering a great deal of data through its sensors, including whether it is in the water or not, the temperature of the water it's in, its location, and its acceleration. From these sensors we will be able to derive much other information, such as wave height. With the Smartfin, scientists will be able to gather denser data from many beaches; produce more accurate wave height and water temperature forecasts; find out where, when, and how long people are surfing; and more. Through experiments conducted at the Scripps Pier in San Diego, we have been able to compare the Smartfin's temperature readings with those of the pier and also analyze its GPS sensors. In the future, we will compare wave height readings determined by the Smartfin's accelerometer and algorithms to the pier's wave height readings, as measured and analyzed through its pressure sensor.
Data we got from the Smartfin was in an encrypted format, as you can see here. We had to implement a decoder in Python, which was able to produce our data in a table. We further improved our decoder by incorporating data analysis methods, so we were able to graph things such as temperature histograms, temperature over time, and GPS sensor measurements. Our Smartfin produces inertial data such as acceleration, gyroscope, and magnetometer readings, and we are currently working on determining the exact position of the Smartfin. In order to get position, we would need to double-integrate our acceleration, and along with that — coupled with citizen science — there comes a lot of noise in our data. Therefore, we are implementing a Kalman filter, which uses linear values from our sensors to predict the exact position of our fin. This flow chart represents our programming logic, and we hope to add more angular measurements, such as gyroscope data and heading, to our program. So far, we have values for our transition matrices and working code in Jupyter notebooks for these linear values. We will be optimizing this with process noise, and we will also work on Kalman smoothing in the future.

We are using spectral analysis to determine the wave height from our acceleration data. Spectral analysis provides information about how power is distributed by analyzing a signal in the frequency domain. A Kalman filter will determine the vertical displacement, which spectral analysis will then process into wave height. Fourier transforms are used to convert a function from the time domain to the frequency domain, or vice versa. They are very useful for sine functions, which are what we use to represent waves. As you can see below, there is a combination of sine functions expressed as a few frequencies in the frequency domain. We currently have code that is capable of taking CDIP vertical displacement and producing accurate significant wave height graphs. We are using Hm0 as opposed to Hs: Hs is the average height of the highest one-third of the waves, while Hm0 is a little more complicated than that — it involves an integral over the wave spectrum. None of the work we accomplished would have been possible without E4E and our amazing supervisors, Ryan Kastner, Curt Schurgers, and Nathan Hui. Thanks to their involvement and the help of our team, shown in the picture above, we were able to accomplish much this summer and will accomplish much more in the future. Thank you.

Okay, thank you. So why don't you tell us about what you hope to accomplish in the future? I know someone is there — I saw Nick and Mallin. Yeah, basically this summer we had a lot of data that we got from our Smartfin earlier, so we built on analyzing that data. Like we mentioned in the video, we worked on Kalman filters to help reduce noise in our data and get better estimates of the position. With regard to spectral analysis, I think Nick and Ted and other team members have been working on that, so they can expand on that. Yeah, for spectral analysis — Nima, would you like to speak about that? Sure. So first, for our spectral analysis, we're able to work with CDIP data, but we need to be applying this more to our actual data collected by Smartfin. We weren't really in San Diego and weren't able to do too much actual testing. But with the CDIP data we have, we're able to apply different windowing functions to make our Hm0, as was said in the presentation, more and more accurate and similar to the significant wave height that CDIP is calculating. And as we try more windowing functions and explore more ways to analyze the data, we can hopefully get this to be extremely accurate next year — or I don't know how the timeline goes, but soon. Thank you. Thank you.
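For context on the Hm0 calculation mentioned above: Hm0 is defined from the zeroth moment m0 of the wave spectrum (the integral of the power spectral density of the sea-surface elevation) as Hm0 = 4·sqrt(m0). A minimal sketch of that computation, with an assumed sample rate and Welch parameters, might look like this — it is an illustration, not the team's code.

    import numpy as np
    from scipy.signal import welch

    def significant_wave_height_hm0(elevation_m, fs_hz):
        """Hm0 = 4 * sqrt(m0), where m0 is the zeroth moment of the elevation spectrum."""
        f, psd = welch(elevation_m, fs=fs_hz, window="hann",
                       nperseg=min(len(elevation_m), 2048))
        m0 = np.trapz(psd, f)            # integrate the PSD over frequency
        return 4.0 * np.sqrt(m0)

    # Quick sanity check on a synthetic 1 m-amplitude, 10 s-period swell:
    t = np.arange(0, 1800, 0.5)                          # 30 minutes sampled at 2 Hz
    eta = 1.0 * np.sin(2 * np.pi * t / 10.0)
    print(significant_wave_height_hm0(eta, fs_hz=2.0))   # ~2.8 m (= 4 x the RMS of the record)

Comparing an estimate like this against the significant wave height that CDIP reports is essentially the validation step the team describes.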
All right, so we definitely have two more, so let's continue on.

For the conservation and management of animals and their natural habitats, radio telemetry tracking is a common and reliable method. E4E's radio telemetry tracking project started in 2013 as a collaboration with the San Diego Zoo Wildlife Alliance. Because traditional methods of tracking animals on foot are time-consuming and physically taxing, the radio telemetry tracking team worked to produce a drone-based system that is intuitive and user-friendly. This project has had deployments on the island of Little Cayman and on Big Ambergris Cay in the Turks and Caicos Islands. This summer we have collaborated with Dr. Matthew Gifford at the University of Central Arkansas, who wants to track common collared lizards in a fixed area and produce results every 10 minutes. Because they want results so frequently, having a drone fly every 10 minutes is not a viable option due to time constraints and battery limitations. Our current proposed solution is to develop a system of stationary towers to track the lizards, as this would allow for a more long-term setup.

One method proposed for our system is the time-difference-of-arrival, or TDOA, technique. With TDOA, we have a transmitter broadcasting a signal and multiple receivers picking up that same signal. Because we know that electromagnetic waves travel at about the speed of light, we can use the differences in time between when each receiver picks up the same signal to estimate where the transmitter is located. One of our largest challenges with TDOA is timing synchronization: before we can compare timing between received signals, we must first ensure that our systems are properly synchronized, otherwise our results will be skewed. In order to find the time delay between the systems, we can use one of the receivers as a beacon and have it send a pulse to the other receivers. We can then align the signals with that received pulse to be able to properly time signals received from the transmitters. Spatial accuracy using TDOA can be approximated using the equation distance = speed of light × time synchronization error. Our system uses a sampling rate of about 2 MHz, and because of this, without subsample synchronization, we can only achieve a spatial accuracy of 150 m. Using a binary-search technique, we are able to achieve subsample alignment between two identical time-offset signals, which would theoretically allow for less than 1 m accuracy. However, in reality, noise and differences in signal strength are factors that must be dealt with, so moving forward our team will work towards a more robust synchronization solution.

In the future, we have several collaborations and deployments planned. Along with tracking common collared lizards in Arkansas, we plan on collaborating with Gus Calderon at Airspace Consulting and the LA Zoo to track pandas with the drone-based system in Chengdu, China. There is also an opportunity to track iguanas in the Turks and Caicos Islands during summer of 2022, which will also use the drone-based system.
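To unpack the accuracy numbers above: at a 2 MHz sample rate one sample is 0.5 µs, and 3×10⁸ m/s × 0.5×10⁻⁶ s ≈ 150 m, which is where the coarse figure comes from; sub-sample alignment shrinks that error proportionally. The sketch below shows one way a cross-correlation plus a binary-search-style refinement over fractional lags could work — it is an interpretation of the approach described in the video, with illustrative parameters, not the team's implementation.

    import numpy as np

    C = 3.0e8    # propagation speed, m/s
    FS = 2.0e6   # sample rate, Hz
    print("per-sample position error ~", C / FS, "m")   # ~150 m

    def correlation_at(a, b, lag):
        """Dot product of a with b delayed by `lag` samples (lag may be fractional)."""
        t = np.arange(len(b), dtype=float)
        return float(np.dot(a, np.interp(t - lag, t, b, left=0.0, right=0.0)))

    def subsample_delay(a, b, iterations=20):
        """Coarse integer lag from cross-correlation, then halve the search step each pass."""
        lag = float(np.argmax(np.correlate(a, b, mode="full")) - (len(b) - 1))
        step = 0.5
        for _ in range(iterations):
            lag = max((lag - step, lag, lag + step), key=lambda l: correlation_at(a, b, l))
            step /= 2.0
        return lag   # delay of a relative to b, in samples; divide by FS for seconds

With noiseless, identical signals this kind of refinement can localize the delay to a tiny fraction of a sample, which is what makes the theoretical sub-meter accuracy plausible; noise and differing signal strengths are what break this in practice, as the video notes.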
Great. Thanks, Mia. So I think I'll ask a very similar question to the one I asked last time: what were the challenges that you faced in this work? Because we had this existing project that we've been using for a long time on a drone for localization, but then we had this new opportunity for more stationary tracking. Can you expand a little bit on what you had to do in order to change the system for that? Yeah, so we considered a couple of model changes. Our drone system uses a received-signal-strength-indicator model, RSSI. So we had the option to either use an RSSI model, or we were looking into seeing if the TDOA model could work out. We started out with simulating for TDOA, and our biggest issue so far has been finding a solution that is resistant to noise. We had an issue finding a whole lot of literature on this subject. Normally the solution to getting low time-synchronization error is getting equipment with a higher sampling rate, which is a very expensive solution, so we're trying to find a more signal-processing-based solution. That has been the largest challenge I've faced this summer, trying to get past that. So it seems like, going forward, we may just focus on an RSSI-based solution and then maybe explore TDOA later, when we have a bit more bandwidth and more people working on it. So we'll see.

Great. And do you want to talk real quickly about the panda tracking with the zoo? That was another exciting thing that came up this summer. Oh yeah — Gus actually just dropped off the drone yesterday; it's huge, like 15 pounds without the batteries. So Gus contacted us and Nathan about this project. Gus specializes in creating drones, and previously, I believe, he and the people he worked with have focused on cinematography and making documentaries. He has been working with the LA Zoo to track these pandas in China, and they seem to have a tracking solution already sort of working. So in the more immediate future we'll be helping them with UI changes, but we also have plans to set up our system and test it on the drone. So that's very exciting. Thanks, Mia.

And I see a professor has asked a question about publications and presentation venues. We encourage everybody — we talk a lot about publications. There are, I think, maybe 20 or 30 that we've had over the years, in a number of different places. Some are REU- or undergraduate-research-focused; others are, for instance, OCEANS, a conference where we had a publication this summer with some of the students. I believe they're listed on the website somewhere — I'll try to find a link — but you can see the publications we've had so far, and hopefully there will be a few more from this summer that the students are working on. Okay, one more project, I think, and then Curt will take us out. Thank you, Jacob, for putting that link in there.

Aye-ayes are a species of lemur native to the island of Madagascar. A particularly famous aye-aye is Maurice from the movie Madagascar. However, since 2016, these adorable creatures have been endangered as a result of habitat destruction and hunting. They are actually killed in some areas due to the belief that they harbor evil and bad luck.
Now aye-ayes are not only protected under the law, but some are maintained and nurtured in closed captivity, such as at the San Diego Zoo Safari Park. In the fall of 2020, the San Diego Zoo Institute for Conservation Research reached out to Engineers for Exploration. In order to promote the overall health of the aye-ayes, the goal of this collaboration is to determine the variations in their sleep patterns through the use of a sensor network and data analysis, consisting of various computer vision and machine learning techniques. This sensor network is non-invasive and will be placed around the aye-ayes' enclosure, having minimal impact on the animals. If the pilot deployment in the aye-aye enclosure shows good results, the future system may be deployed across the entire zoo to monitor other species and their care.

The sensor network is composed of four units: the remote sensor unit, the on-box unit, the data server, and the router. The remote sensor unit consists of an IP camera, a Raspberry Pi, and a screen, and as seen by the red figure representing the general field of view of the camera, this unit will be used to view the entire aye-aye enclosure. Next, the on-box unit is composed of two boxes. This box, located on top of the aye-ayes' nesting box, contains a Raspberry Pi, an Arduino Nano BLE Sense, a Pi infrared camera, and a microphone. It is important to note that, for the sake of demonstration, the components are displayed floating in the box; however, there is a 3D-printed housing unit to support them. In this box on the side of the nesting box, there is a screen, and this represents the conduit housing the power and HDMI cables between the two on-box enclosures. The screen, as well as the other one in the remote sensor unit, will be used by zookeepers to periodically monitor the sensor network and video streams. In regards to the types of data and how they flow, there are two sources of video streams: the IP camera and the Pi infrared camera. There are also two sources of audio data, from the Pi microphone and the Arduino Nano BLE Sense PDM microphone. Along with its microphone, the Arduino Nano will also capture IMU, or inertial measurement unit, data such as acceleration, gyroscope, and magnetic field readings.

This summer of 2021, the aye-aye sleep monitoring team is completing an initial model of the system and sensor network. Each of our team members has collaborated and worked hard on various components, such as configuring the graphical user interface for the screens, managing data classes, building aye-aye-proof housing units for the electronics, and testing potential computer vision techniques like optical flow and background subtraction to use in the future. As of this September, we are planning to run a pilot deployment of the completed system in the San Diego Zoo's conservation research lab for a few weeks. After testing, the system will be deployed in the aye-aye enclosures. Thank you for watching. If you wish to learn more about this project, you can contact me, Katie Miyamoto, or you can visit the E4E website for more information.

Great job. So can you tell us a little bit more about when you think you're going to get your technology into the zoo? Of course. Right now a lot of things have changed, even since the video itself — I won't go into that. But we are planning on doing a live demo for Ian specifically, just to make sure that the whole system is working. That'll be in the C lab on campus on September 8, is what we're planning.
And then, Katie, would you like to explain who Ian is and what the C lab is? Yes, of course. So Ian was one of the people in the video you saw; he is our collaborator, or our advisor, from the San Diego Zoo, and he's a senior researcher there. He is the one, I guess, asking for this project in a sense, and he's working with us really closely; he's been really amazing and fun to work with. So we're going to work with him on a live demo just to make sure that the system is working and that data is flowing appropriately. That'll happen the second week of September. Then, two weeks after that, we're planning on placing the system in his lab for further testing. And after that — after we debug it and fix it and make it even more perfect — hopefully it'll be going into the enclosure. So we still have a bit of a way to go, but we're getting there. Great job, Katie, thank you. I'm amazed — great job to all of you. Thank you. Okay, Curt, I think that's it.

Yeah. Well, first of all, I wanted to invite everyone to give a virtual round of applause for all our students. I mean, they did an amazing job; I really enjoyed all the videos. But just beyond the videos, everyone did an amazing job these past 10 weeks, especially given all the other challenges. This was a remote-slash-hybrid program, and everyone had to deal with last-minute changes or work remotely. This is not easy, and hopefully you appreciate how much they were able to accomplish despite all these challenges — coming onto campus at the last minute, having to basically work together through these Zoom meetings. Hopefully this also shows what is possible if you have dedicated students who are really willing to work hard at this, so definitely thank you to all the participants this summer. Absolutely super impressive. You're setting the bar really high for next year — so if there's anybody here who wants to apply next year, you've seen the bar, and we challenge you to match it.

I would also like to thank again all our sponsors, the people who have provided support, and all the other collaborators — the ones that were mentioned, the ones in the videos, and the ones who are supporting projects that weren't featured this summer. We really appreciate all your support; we can't do this without you, and hopefully you realize what kind of an impact you're making, not just in your field but also for the students here, these future engineers and computer scientists, and hopefully also people who will change the world for the better through technology and their skills. So thank you for that, and thank you for being here with us and showing interest in these projects. And I would like to invite you to stay engaged and stay involved: if you are somebody who has a problem that you feel fits within Engineers for Exploration, please contact us, please talk to us. If you feel like this fits within something that you may want to collaborate on, sponsor, or fund, or whatever — if this is of interest, just reach out to us, reach out to the project leads. We're always looking for more ideas, and we're always looking for more collaborators.
And if you or your friends are interested in joining us as students, if you want to work on these projects, please spread the word — we always have positions open, and we do a recruiting push at the start of every quarter. We really want to bring these opportunities to as many students as possible, and that requires a lot of work and a lot of engagement from collaborators and from students. If you're asking whether you can help as well, you definitely can, and you can be involved, so please reach out. That's basically what I wanted to say: thank you all. There are a lot of moving parts here, and these amazing projects can't happen without all of you. So yeah, thank you. I don't know if there are any last questions or last words, but I hope you enjoyed these videos — I certainly did — and we will hopefully see you later.