Jason Holmberg! Jason, are you there? Hi everyone. Welcome Jason. You have the big responsibility of being the final keynote speaker of this conference, of this edition, our 10th anniversary. So no pressure there, but we're all ears. Looking forward. Thank you. I think we have some really fun ground to cover. So, hi everyone. My name is Jason Holmberg. I'm the executive director of a very unusual nonprofit organization called Wild Me here in Portland, Oregon. Now, today's talk is going to cover some really fun ground, and there's a point to the small tangents we'll go off on — we're always going to come back. We're going to talk about artificial intelligence in a very meaningful and applied way. We're going to talk about really big sharks. Along the way, we're going to touch upon leafy and weedy sea dragons in Australia, and we're even going to talk about some intellectual inheritance from the Hubble Space Telescope. So space telescopes, sharks, AI — all fun and good. But more than this concept of endless exploration, the main thing I need to get across to you right now is that the world of conservation is underfunded and undersupported. Whether it's combating global warming or fighting the sixth mass extinction, participants in this fight don't have the support they need. But while that's a problem, it's also an opportunity, because I guarantee that in this audience there are professional skills that can help. Whether it's machine learning engineering, software engineering, writing skills, or experience in management and fundraising, there are new opportunities to not just volunteer in a small way — an hour here and an hour there — but to volunteer in a meaningful way. Now I'm going to exemplify that by telling you my story, my journey from the commercial space into the nonprofit world. And I see in my journey that there are clearly opportunities where you can help out with skills you've developed in your careers. So let's get started.
So, my background is very unusual. Most of the time when you come to an artificial intelligence talk, it's going to be given by the foremost expert in artificial intelligence. Or if you come to a talk about big sharks, it's going to be Professor So-and-so of this prestigious university. I'm very different. My background is very unusual for giving a talk like this: chemical engineering from the University of Michigan and Arab studies from Georgetown. Yet another shift: I actually spent a lot of my career to date at Dell Technologies, where I started out as a technical writer and moved into the role of information architect, helping to manage all the different pieces of information that go into developing documentation for a product. Along the way I developed skills in writing, software development, and then, near the end of my corporate career, data management and translation. Also along the way, as I'm going to tell in my journey, I planted the seed of and then grew something that became Wild Me, the nonprofit that I am now the executive director of. A lot of it came about through the application of professional skills to try to do some good. So, let's get to the big sharks part of the talk. This is the whale shark — tiburón ballena in Spanish — the world's biggest fish. Males get to around 10 meters; females can get up to 20 meters. Males and females are often sighted in separate locations at separate times. And despite the fact that this fish is massive and can swim long distances, mating has never been witnessed. In fact, for all the sharks that we've studied and tagged — over 10,000 now — we've mapped exactly one shark doing one migration route, which was a circle. That's how much mystery there is. Let me throw one more fact at you: whale shark pups — females give birth to up to 300 — are sighted literally once per year across our global database.
So wherever they're giving birth, wherever they're mating, wherever they're migrating to — despite how massive this creature is, it remains a mystery. So let's go back to diving with that whale shark off the coast of Djibouti. Seeing this large shark, something in me just lit up, and it sparked a curiosity in me. Later that same year — this was 2002 — I went on a research expedition to tag whale sharks. This was the old world, pre-computer-vision: tag them with a spear and a placard tag. And this really got me thinking. Is this animal in danger? If I participate in research, what does that mean? What is this curiosity I have? How do I help? I had all these questions floating about in my head, and they coalesced into a basic concept at a higher level, stepping away from the whale shark a little bit: how do we know that a species is endangered? What does it even mean to be endangered? When I began this journey, I honestly had no idea. What does it mean? Here we see that the whale shark is listed by the IUCN on the Red List as endangered. A lot of research has gone into helping it get that status and getting it peer reviewed, and this helps us understand how endangered it is. But really, endangered comes down to the idea that there are fewer than there used to be, and if we don't do something, there may be none. So how do we arrive at that determination of endangered, and figure out whether this particular species is more or less endangered this year or next? One of the solutions is to count them. That's a great way to see whether you have more or fewer year on year. The problem with that is actually counting them. We're going to cover a number of species, because my nonprofit works across a number of species, and one of them is zebras. So if you look at this photograph — a really pretty photograph taken in Kenya — how many zebras are there? Can you do a quick count? If you look at the estimate, there are maybe 14.
But you may have counted a few two-headed zebras — it can look that way. So a researcher approaching this photo is going to manually make a count of how many there are, and that count may range from 12 to 15. And this is only one photo. Imagine having to do this for hundreds, for thousands upon thousands. And this is an easy case. If you do an aerial survey — because a lot of animals don't wander near the road or near your boat, and you need to go where the animals are to count them — you get a picture like this. Now, I'm pretty sure that for most members of the audience, if I hadn't drawn four yellow boxes around elephants on the left, we may not have seen any elephants at all. And yet there are at least four, and we may have missed a few. This is real-world conservation: a photo taken by a plane flying over the savannah, trying to count the number of elephants. So one of the solutions that researchers came up with along the way — not only because animals are hard to individually identify, but also, going back to that herd picture, because animals could be moving from photograph to photograph and aren't standing still — how do we track and count individuals accurately? One way is to tag them. And this is still used out there, although my organization is working to replace it. Whether it's a deer, which can be tagged on the ear, or a turtle, which can be tagged on the flipper, or a bird, which can have a band applied around its foot — you can tag an animal and give it a number. Now, what's the point of this? The point is that if you resight the animal, you can count it again, know it's the same individual, and know that you're not double-counting. And if we can track individuals — counting when they appear and don't appear over time, say over five or ten years — we can use this in a number of statistical models that feed into important areas of science.
One is population dynamics, where we run statistical models to understand: how many animals do we have? Is that more this year than it was last year? Molecular ecology. Animal biometrics. Toxicology: does this individual at certain sites carry more or less of a toxin, and can we identify those sites and clean them up? Social ecology studies: how do the individuals of this species interact, what are their social roles, etc. So this technique is very powerful. However, it's also very fallible. I've already played a dirty trick on you, because I've already shown you a picture of a whale shark with a fouled tag. Stepping back to my journey with the Shark Research Institute: bobbing up and down in a boat in Mexico on my first whale shark research expedition, I was talking to a biologist who had a spear and a tag much like you'll see in the upper right here. I asked him: when you spear this shark and stick this tag on the animal, what percentage of the time do you resight that tag? And he said about 1% of the time, in the same year. We're talking about a shark that can live over 100 years — likely past my lifespan — and we can only tag it for a period of less than 12 months. So for these long-term ideas of tracking an individual over five or ten years and creating accurate population models, the system doesn't work. In fact, I like to say sometimes that the sea eats everything. You can see there in the upper right barnacles on a broken tag. And here's a tag so fouled up — this is an animal that was tagged and that I later photographed off Honduras — so fouled up that I would have had no idea anyone had put any effort into tagging this animal. A completely inefficient process. Now, I wasn't the first to think about this, but I was able to partner with those who started thinking about it early. And there was this idea that, okay, there are a couple of problems with physically tagging animals.
One, tags fall off or get fouled. Two, tags can cause injury and infection. In many cases we're altering the weight of the animal, which for a bird could affect how it flies. We're breaking the skin of the animal, which can open up avenues for infection and even death. In some cases we may be doing harm. And there are many, many species — not all, but many — in which individuals present visually distinct patterns. They have different sets of stripes or different patterns of spots, whether it's cheetahs or whale sharks, etc. These visual patterns can be thought of much like a human fingerprint. And in the modern world, we have cameras everywhere, whether it's the DSLR or mirrorless camera we take on a trip, or even the camera phones in our pockets, which are so amazingly powerful. So here in the upper left you'll see a picture of some of my team members working with zebra researchers in Kenya, discussing this concept: can we replace physically tagging an animal with digital photography, so that wherever we see this individual, whoever sees it, we can take a picture and simply re-identify the same individual over time without getting in its way, interrupting its history, or even touching it? Here's the problem, though. Animals don't nicely sit still, they don't pose for the camera, and they are often sighted in places that are hard to get to — too hot, too cold, too deep, not near a road, etc. And even when they show up, we have to deal with the environmental conditions and the presentation of the animal. This is a bottlenose dolphin photographed off of California. On the left you can see a dorsal fin breaking the water as the animal is swimming, and you can see the notchy trailing edge. That notching is very unique to this individual and could be used as a visual distinguisher of which animal this is. On the right you can see exactly the same animal.
In a subsequent photograph: different lighting conditions, different water conditions — the lighting shows different scars. And because the animal isn't perpendicular to the camera, matching that trailing edge becomes very difficult — easier for the human brain, less so for a computer vision model. Now, that also was the easy case, because it was just two photographs. We were asking a binary question: is this the same animal? But what happens when you have hundreds of animals, thousands of animals, and for each of them multiple photographs — so potentially tens or hundreds of thousands of photographs? My team is currently managing 6 million photographs across our databases. Now this idea of "who is this?" becomes a really time-consuming exercise to work through photo by photo — and researchers will do this. In at least one case, for one species, I know of a research team that will spend, per photograph collected while in a boat on the water, nine hours going through a 40-year historical catalog, trying to identify which animal this is. That's an average of nine hours per photograph. So what's interesting here is: we know that computer vision exists as an academic technique, we have field biologists who are able to collect photographs, but we essentially have an unscalable process of manually identifying the photographs. So here really is an example of where professional skills — whether it's engineering, data science, or data management — come into play. This is a scalability problem. Professional skills are exactly the kind of skills you need to solve a problem like this, and to solve it repeatedly.
So my journey began in whale shark research: swimming next to that whale shark off the coast of Djibouti, and then later in Mexico, talking to the biologist about the unscalability of spearing these animals, and saying: okay, it has been proposed but never proven that whale shark spot patterns are individually unique — can we use that, and rather than spearing the animal, simply take a picture of it? Then any other researcher, or any scuba diver or snorkeler, can take a picture of it, and we can match it based on the photographs. So I spent time with a whale shark biologist and a NASA astronomer. And here's where we touch upon the Hubble Space Telescope. There's an algorithm, never designed at all for the wildlife field, whose job it was to take multiple pictures of the sky from the Hubble Space Telescope and stitch them together into larger and larger mosaics of the night sky, based on the occurrence of common star patterns, which could then be scaled, overlapped, and connected. Well, stars — white spots against a black background — aren't that different from the white spots against a brown or blue background on a whale shark's skin. And we were able to adapt an algorithm originally developed for star patterns to map the spots on a whale shark, take that spot pattern, create a system of triangulation to fix those spots in a certain orientation, and compare those patterns very rapidly against a very large database. The fun part was that we started this project looking for a way to repeatedly re-identify individual whale sharks. However, that was only 10% of the problem. 90% of the problem was the fact that there was no database of data to even validate the technique.
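To make that triangulation idea concrete, here is a minimal illustrative sketch — not the actual Hubble-derived algorithm — of matching spot constellations in a way that survives changes in camera position: every triangle of spots is reduced to its sorted side-length ratios, which are invariant to translation, rotation, and uniform scale, and two sightings are scored by how many triangle signatures they share. All function names and tolerances here are assumptions for illustration.

```python
from itertools import combinations
import math

def triangle_signature(p1, p2, p3):
    """Describe a triangle of spots by its sorted side-length ratios,
    which don't change under translation, rotation, or uniform scale."""
    sides = sorted(math.dist(a, b) for a, b in ((p1, p2), (p2, p3), (p1, p3)))
    longest = sides[2]
    return (sides[0] / longest, sides[1] / longest)

def spot_signatures(spots):
    """All triangle signatures for a set of (x, y) spot coordinates."""
    return [triangle_signature(*tri) for tri in combinations(spots, 3)]

def match_score(spots_a, spots_b, tol=0.01):
    """Fraction of triangles in sighting A with a near-identical triangle in B."""
    sigs_a = spot_signatures(spots_a)
    sigs_b = spot_signatures(spots_b)
    hits = sum(
        1 for sa in sigs_a
        if any(abs(sa[0] - sb[0]) < tol and abs(sa[1] - sb[1]) < tol for sb in sigs_b)
    )
    return hits / len(sigs_a)

# The same spot pattern, re-photographed at a different scale and offset:
sighting_1 = [(0, 0), (4, 1), (2, 5), (6, 6)]
sighting_2 = [(10 + 2 * x, 20 + 2 * y) for x, y in sighting_1]
print(match_score(sighting_1, sighting_2))  # → 1.0
```

A production matcher would also have to prune the O(n³) triangle explosion per image and rank candidate individuals across a whole database, but the invariance trick is the essential idea.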
So we had to create a system whereby multiple researchers, all of whom had small, incomparable catalogs at the time, could pool their data in a common data schema and common database, and from there execute the computer vision algorithms to begin matching whale sharks across their migration routes. We were creating a platform that was originally called whaleshark.org. It is now sharkbook.ai, because we do more than just whale sharks as a species. And we found that we needed to create an open source, cloud-based package to allow researchers both to pool this data and to execute the computer vision algorithms. One of the problems we find in the conservation space is that there's a very wide gap between promising algorithms and techniques coming out of academia — and especially out of different domains, as we've heard from other speakers, and here we're talking about an astronomy algorithm being cross-applied to conservation research — and field conservation. There's that divide between what is produced in academia, which is novel and promising but not applied — promise not delivered — and field conservation, where we have conservationists who are locally embedded: they know the local culture and language, they know how to argue for protection for wildlife, but they're non-technical. Running a Python algorithm that has no user interface and runs off the Linux command line is generally not part of their skill set. They could learn, but it would be in addition to all of the other duties they have. So we filled this gap by creating an open source package that is now called Wildbook, which allows for the pooling of wildlife data — especially photography — and the execution of computer vision algorithms to begin cross-comparing catalogs and tracking animals based on photographs. We started with whale sharks, we went to manta rays, and now we've cross-applied it to a number of different and interesting species, largely because we're finding that as we solve problems for research into one species,
we're seeing the same problems replicated in the biology communities for other species. One of my favorite projects applying Wildbook is seadragonsearch.org, studying leafy and weedy sea dragons off the coast of Australia. It's run by a small research team who are successfully engaging hundreds of divers and snorkelers across Australia to report sightings of some of the most beautiful creatures on the planet — very delicate, and a very memorable moment if you're ever able to swim with and see one of these. We apply computer vision to find the body, find the head, and then match the heads across large volumes of photographs to see if this particular snorkeler or scuba diver has seen this individual before, and where and when, and we feed this data into population studies. We're able to take that platform, start with species that are data-deficient, grow the data set, and begin scaling up to serve not only one species and one research community, but potentially hundreds of researchers, like we do with the flukebook.org platform — also leveraging our Wildbook open source — where we're able to provide computer vision for 20-plus species: fully automated identification of individuals based on their flukes or their dorsal fins. And we're able to make this a standard for pluggable computer vision, meaning that because we have access to the data and a predictable API, new techniques from academia can be bolted into this platform and immediately applied by a large research community. Similarly, we have the data to feed back to academia to support the development of new techniques.
The ecosystem, whether it's for one species or many, generally looks like this. We have wildlife being photographed by members of the public; by citizen scientists who are more dedicated — they're out there every day, every week, every year, collecting photographs of the same species; or, on the left, by biologists who are dedicated to the study of that species, publishing, and hopefully, together as a community, advocating for conservation solutions. We work with data scientists who can mine this collaborative database, which is stored on a data management server from which data can be exported to analytical programs. And as well, we have an image analysis server. So each of these Wildbooks — whether it's Sharkbook.ai, Seadragonsearch, or Flukebook — is really a pair of servers: one handling data management and user interface, the other handling computer vision and machine learning. And we started this journey really before the modern emergence of deep learning. We started out with spot pattern recognition and have now moved into the world of deep learning, because, as in so many fields, it's revolutionizing computer vision and helping us solve wildlife problems, especially the scalability around finding and identifying individual animals in photographs. So where do we apply machine learning? One place is animal detection: given a photograph, or a thousand photographs, find one or more animals in them, predict the species, predict the viewpoint — left or right — and then take that bounding box, the species prediction, the viewpoint prediction, and even some background subtraction (let's remove the pixels of trees, ground, water, etc.), and hand that on to an identification algorithm. And there are many different ways to identify individual animals across species. This is a humpback fluke. It's a very identifiable example: each fluke has this black-and-white patterning that looks a little bit like a Rorschach test.
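Rewinding to the detection step for a second: a rough sketch of that detect-then-identify handoff — with toy stand-ins for the real detector and identifier, so every name, field, and threshold here is hypothetical — looks something like this:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple        # (x, y, w, h) bounding box in pixels
    species: str      # predicted species
    viewpoint: str    # predicted side of the animal, "left" or "right"
    confidence: float

def detect(photo):
    """Stand-in for a trained detector: here we just read pre-labelled
    annotations off a toy photo record (a dict). A real system would
    run a model over the pixels instead."""
    return [Detection(**ann) for ann in photo["annotations"]]

def run_pipeline(photos, identifier, min_conf=0.5):
    """Detection feeds identification: for each confident detection,
    ask the identifier 'which individual is this?'"""
    sightings = []
    for photo in photos:
        for det in detect(photo):
            if det.confidence < min_conf:
                continue  # low-confidence boxes go to human review instead
            individual = identifier(photo, det)
            sightings.append((individual, det.species, det.viewpoint))
    return sightings

# Toy data: one photo with two pre-labelled zebras, one low-confidence.
photos = [{"annotations": [
    {"box": (10, 10, 50, 40), "species": "zebra", "viewpoint": "left", "confidence": 0.9},
    {"box": (80, 20, 45, 38), "species": "zebra", "viewpoint": "right", "confidence": 0.3},
]}]
hits = run_pipeline(photos, identifier=lambda photo, det: "GZ-0042")
print(hits)  # → [('GZ-0042', 'zebra', 'left')]
```

The point of the structure is the clean seam between the two stages: any identification algorithm can be plugged in behind the same detection output.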
In this particular approach, which is called HotSpotter, in the bottom left and bottom right you can see a computer vision algorithm taking two different photographs of the same humpback whale, finding visual texture commonality, and suggesting those regions back to the user as the justification for a match. In computer vision, when we try to help users rapidly get through large data sets — to identify and track individuals on the journey to modeling populations and proposing data-driven conservation strategies — users still make the final decision. They look at the evidence before them, such as the ranked visual evidence suggested by the computer vision algorithm, and they make a decision: yes or no, is this the same animal? There's other evidence that can be used. Another way of looking at humpback whale flukes is to use that trailing edge — the notchy part of the fluke as it comes out of the water — and we can treat it exactly like a sound wave. In fact, some of the early matching algorithms deployed for matching humpback whales treated it as a sound wave, much like Alexa or any of your other connected devices try to match words in the phrases you say to them. And then as we move into the world of deep learning, we get to a really fun and interesting part of the journey, where we don't necessarily know what the model is keying in on. When we put a deep learning algorithm for individual ID to task on a set of data, it gets to look at the whole photograph. Is it using the trailing edge to suggest individuality? I don't know. Is it using texture? Is it using both? And so a lot of our journey in computer vision has moved from the deterministic to training machine learning models, where we worry more about the data, the training, and the representation of individuals within the data.
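Going back to the trailing edge for a second: treating it like a sound wave can be sketched minimally like this, assuming the notchy edge has already been traced out of the photograph as a one-dimensional depth profile (all names and sample values are illustrative). Normalizing each profile makes camera distance and exposure drop out; a cosine-style similarity then compares pure shape, exactly as you might compare two audio snippets.

```python
def normalize(curve):
    """Zero-mean, unit-scale a 1-D trailing-edge depth profile so that
    camera distance and overall brightness don't dominate the comparison."""
    mean = sum(curve) / len(curve)
    centered = [v - mean for v in curve]
    scale = max(abs(v) for v in centered) or 1.0
    return [v / scale for v in centered]

def edge_similarity(edge_a, edge_b):
    """Compare two equal-length edge profiles by normalized dot product
    (cosine-style similarity): 1.0 means identical shape."""
    a, b = normalize(edge_a), normalize(edge_b)
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# The same notch pattern re-sampled under a different offset and gain:
fluke_1 = [0, 2, 1, 5, 3, 7, 2, 0]
fluke_2 = [10 + 3 * v for v in fluke_1]
print(round(edge_similarity(fluke_1, fluke_2), 3))  # → 1.0
```

Real edge matchers also have to handle unequal lengths and the off-perpendicular distortion mentioned earlier, which usually means alignment techniques like dynamic time warping rather than a straight dot product.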
And so we're getting these deep learning models that are in fact more accurate and faster, but also very data-dependent, and sometimes hard to interpret in terms of exactly how they're making a decision. But once again, they're suggesting matches to users, and it's still up to the users of our software to approve a match and say, I agree. What I've demonstrated here is that, through my journey, there are new opportunities for applied skills and experience — coming out of day jobs, the corporate world, and other fields and disciplines — to be applied to conservation research. And that's really where Wild Me came about: starting with my journey, then co-founding a nonprofit organization where I'm now executive director and hiring dedicated professionals — not volunteers, but individuals whose job it is to help fight extinction, day in and day out, as their day job. Our organization currently has seven software professionals and two machine learning engineers based here in Oregon. We have the Wildbook open source platform and are developing new open source platforms to handle other modes of conservation research. We like to say that conservationists do the work and Wild Me does the engineering. We have this experience from the corporate world and are now applying it in conservation. However, we're very humbly aware that Wild Me by itself is not saving a single animal. We work in support of conservation biologists — locally embedded in the local language, culture, and conservation environment — who understand the problem. We support them in creating data-driven conservation strategies that can be locally applied and locally successful. Our job is to iterate the fight against extinction. What do I mean by that? Even if we took computer vision and machine learning out of the picture and went back to the old world of physical tagging,
one of the structural problems in conservation research, especially when it relates to wildlife, is the fact that we can't iterate fast enough to figure out whether conservation strategies are working. In 2009 I published a paper covering, from 2003 to 2009, the population trajectory of whale sharks at Ningaloo Reef in Western Australia. That paper — those models — have not since been reproduced, even though it's a very straightforward task to do so. One of the structural problems here is that population estimates for wildlife are produced every five or ten years, if you're lucky. How do we figure out whether a conservation strategy is actually helping a population if it takes us that long to figure out whether we've had a change, positive or negative, under that strategy? As engineers, our job is to take these long-term studies and continuously iterate them — to bring the time between population estimates from years down to months or even weeks. The solution may be to put up a fence or take down a fence; it may be to ban fishing or allow fishing. Whatever the decision, we need to enable on-site conservation practitioners in the field to have the data and the analysis rapidly, to help figure out whether a conservation strategy works or is actually not helping at all. There's this gap between promising technology and successful, scalable field application. It's a funding gap. It's a skills gap. And it's an experiment-versus-engineering gap. Academia produces amazing experiments and very promising technologies, but engineers scale them and engineers support them over time. So our solution, and the approach we take as a team, is to develop continuous monitoring and estimation of wildlife populations. This is all about scaling and modernizing wildlife research by monitoring animals continuously. This is Wildbook.
We're currently tracking over 131,000 individual animals across over half a million sightings, and supporting about 900 researchers studying marine and terrestrial species across the globe. The idea is that we get data in as fast as we can and process it as fast as we can, answering which individual animals are in these photographs, and feeding that data into collective, collaborative studies that span disciplines, borders, and data sets — consolidating all of this and providing faster iteration for photographed animal species. We also flip that mode. For species where we don't have continuous monitoring, we can do rally-style events. Here I'm talking about the Great Grevy's Rally, a Grevy's zebra counting rally that occurred in 2016, 2018, 2020, and again in 2022. We may not have continuous monitoring every day, but what we can do is send hundreds of participants out into the field for two days and just photograph every single zebra we can find. These are volunteers. They don't know which zebra is which. They collect about 40,000 or more photographs. We take those back into the lab, and we allow computer vision to track individuals across the photographs and create population estimates for the Kenyan government. This is a very focused mode of operating. As well, we're moving into the aerial surveying and assessment realm, because this is a great area where machine learning can scale and modernize. When a Cessna plane is flying transects back and forth across the savannah trying to count elephants, the current state of the art is to have human observers reporting on clipboards how many elephants, how many wildebeest, etc., they see. It is error-prone because humans are involved, and humans fatigue — it's a long day to do this. What we can do instead is mount cameras on the planes and collect hundreds of thousands of photographs.
We then allow annotators in the lab to draw bounding boxes around individual animals and train machine learning models to replace this manual basis of reviewing camera data — and, in fact, to make what we hope, as we benchmark this new technology, are more accurate assessments of population size based on aerial surveying, which in itself is a challenge. I showed you previously a photograph of elephants from the air, which really showed how challenging the problem is. This photograph here, with the pink bounding boxes around elephants, would be an easy case — this isn't hard to count. It's when elephants are under trees, it's when baby elephants are next to their mothers, that the counting and deduplication, especially across multiple frames of photography, becomes really challenging. And as in every industry that's tiptoeing into artificial intelligence, it's this door into a new world where we're beginning to change how we operate, because we have this essentially non-human participant — whether just a predictor or an actual participant in the research community. One of the areas we're working on right now is a graph identification system. I mentioned before that as we run machine learning, as we find animals in images, and we suggest to humans which zebra this is, which whale shark this is, it's still a human decision to approve that and say: yes, this is the same individual, give it the same number — and then to run population estimates based on those human-approved decisions. What if we created the concept of virtual zebras? What if we took a cloud of photographs — let's say the 250,000 photographs collected across all of our Great Grevy's Rallies over the years — and we said there are going to be no human decisions, or maybe minimal, say hundreds of human decisions, because with 250,000 photographs, all the comparisons of all the zebras in there would run up into the millions?
What if we allow a few hundred decisions, or no decisions at all, and we let the system find every zebra in every photograph, create a pairwise relationship between each annotation of each zebra in each photograph, and data-mine its way to a population — creating these virtual zebras that can be used to understand the massive data set before us, and ideally to create population estimates that, if we can benchmark them correctly, are faster to achieve — meaning weeks or even days versus years — and ideally even more accurate than human-approved decisions and manual population modeling? We've also had a pretty fantastic success in creating intelligent agents. An intelligent agent here is a piece of AI, using computer vision and natural language processing, that interacts with YouTube posters; we successfully deployed this a couple of years ago. The idea is that people are going on vacation and seeing whale sharks all across the globe, and this is a potentially untouched data source. So what we did is we went to the YouTube API and said, in five different languages — English, French, German, Chinese, and Spanish — find all the videos that are tagged, titled, or described with the words "whale shark," and give us a list. We take all of that text, standardize it into a common language, and make a prediction: does this describe a whale shark in the wild, or does this video describe a whale shark drawing tutorial, or season one, episode one of the kids' show The Octonauts? If it describes a whale shark in the wild, then let's take frames out of the video, use computer vision to find the whale sharks in those frames, use natural language processing to read the title, tags, and description, and even use optical character recognition to find text in the video. Put that all together, and let's use AI to make a prediction of where this happened — was it the Philippines or Mexico? — and when it happened.
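Before moving on from the virtual zebras: the core graph trick can be sketched with a simple union-find over pairwise match scores, where every connected cluster of annotations becomes one "virtual" animal and the population estimate is just the number of clusters. Everything below — the scores, the threshold, the names — is illustrative, not our production pipeline.

```python
def virtual_individuals(n_annotations, scores, threshold=0.8):
    """Union-find over a pairwise match graph: annotations joined by a
    confident match edge collapse into one 'virtual' individual."""
    parent = list(range(n_annotations))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for (a, b), score in scores.items():
        if score >= threshold:
            parent[find(a)] = find(b)  # merge the two clusters

    clusters = {}
    for i in range(n_annotations):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Five annotations; the matcher says 0~1~2 are one zebra, 3~4 another,
# and the weak 0-3 edge falls below the threshold.
scores = {(0, 1): 0.95, (1, 2): 0.90, (3, 4): 0.88, (0, 3): 0.20}
herds = virtual_individuals(5, scores)
print(len(herds))  # → 2 virtual zebras from 5 annotations
```

The human decisions, when we allow them, would act as hard constraints on this graph — forcing two annotations together or apart — while everything else is inferred from the scores.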
It turns out natural language processing is really, really good at those tasks. So if we have a picture of a whale shark, and we have where and when it occurred, we actually have a fully formed data point. And what we found is that, while the intelligent agent could collect videos, it actually collected more videos than were useful, meaning that it had a lower data utility rate; we threw out a lot more of its data than we did for a human researcher. But here's the interesting part: it collected more data than all of the human research community in the whale shark world. And of the data that was useful, about 30 percent of what it collected, 90 percent was not otherwise found by the human research community, meaning that this intelligent agent was going out to the public, and when it couldn't find where and when, it was literally asking people questions and listening for responses in the comments of the YouTube video. It was actually interacting. It was asking questions, finding novel data, and contributing it back to a human research community. And Wild Me isn't the only player in this space. While we do computer vision, one of the coolest projects out there in artificial intelligence, run by Harvard, the Dominica Sperm Whale Project, and others, is Project CETI, where they are literally trying to translate sperm whale communication, which is mind-bending to think about and the subject of an entirely separate discussion. In the Amazon, artificial intelligence is being applied to predict deforestation, where it's going to happen, and to take action before it actually occurs, which is brilliant. Along the way, in this application of artificial intelligence, and sometimes just cloud technology or writing skills, there's an opportunity to harness business skills and experience. In engineering, it's really back to fundamental principles: we're going to have to scale the solution, and we need to make it supportable and cost effective.
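The intelligent agent's triage step described above can be sketched in a highly simplified form. The real system queries the YouTube API, translates metadata into a common language, and runs trained NLP and vision models; here, hypothetical keyword rules stand in for the classifier, purely to show the shape of the pipeline:

```python
# Hypothetical stand-in for the agent's triage step: keyword rules
# play the role of the trained "wild vs. not wild" classifier.
WILD_HINTS = {"snorkeling", "diving", "ocean", "encounter", "swimming"}
NOT_WILD_HINTS = {"cartoon", "drawing", "aquarium", "animation", "toy"}

def triage(video):
    """Return a candidate data point if the video likely shows a wild
    whale shark, or None if it should be discarded."""
    text = " ".join([video["title"], video["description"]]).lower()
    wild = sum(w in text for w in WILD_HINTS)
    not_wild = sum(w in text for w in NOT_WILD_HINTS)
    if wild <= not_wild:
        return None  # likely a drawing tutorial, cartoon, etc.
    # A fully formed data point needs the sighting plus where and when.
    if video.get("location") and video.get("date"):
        return {"source": video["id"], "where": video["location"],
                "when": video["date"]}
    # Missing metadata: the real agent posts a question to the uploader.
    return {"source": video["id"], "ask_poster": "Where and when was this?"}

wild_clip = {"id": "abc123", "title": "Whale shark encounter!",
             "description": "Snorkeling in the Philippines, July 2019",
             "location": "Philippines", "date": "2019-07"}
cartoon = {"id": "xyz789", "title": "Whale shark drawing tutorial",
           "description": "Fun cartoon art for kids"}
print(triage(wild_clip))
print(triage(cartoon))  # None
```

The key design point is the fallback branch: when location or date can't be inferred, the agent doesn't discard the video, it asks the poster and listens for an answer, which is how it surfaced data no human researcher had found.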
We need to define customer profiles, figure out what customers need across these profiles, and develop software in response. We need to consider our user base. We need to make sure that nowhere along the line is our software making a mistake that does harm; poorly estimating an animal population can make things worse, not better. We have alpha, beta, and production servers. There's this danger of assumptions that, as engineers, we need to avoid to ensure that we don't do this harm. And we're always aware that users are finding new use cases for our software, and as engineers, we can't just leave them on their own; we need to actually support them as they're fighting the good fight. This is why professional skills and professional attention to wildlife conservation, open source software, and programs are important. We cannot just throw open source or machine learning at the conservation community and expect it to stick and make a giant impact. We have to support it. We have to answer questions and be there to make sure it is applied successfully. There's a variety of skills needed in modern conservation. I've been talking a lot about software engineering and machine learning engineering, but hardware engineering for camera traps and drones is incredibly important too. Lab management, just managing all the pieces that go into an effective conservation project; fundraising; grant writing; project management skills taken from the corporate world are incredibly helpful. We need communication, to get the word out about what we found. Teaching, webmaster support, artists even: taking very complex AI technology and very complex research processes and conveying them to the public in a simple way is incredibly helpful. We've gotten some pretty significant support from Microsoft, H2O.ai, the Paul G. Allen Family Foundation, and the Gordon and Betty Moore Foundation, whom I want to thank specifically for helping me and Wild Me along in this journey.
And I want to conclude by asking: how can your skills help discover and protect? My journey has not been one of just volunteering. Conservation research has changed my life; it has changed my career. And it's not a little thing. It's being a part of genuine discovery, finding out something about the natural world that no one knew before, and genuinely helping to protect very sentient, very beautiful, very important species across the globe, whether it's in the water or on land. Thank you very much. I hope you can find a role to help in conservation. Thank you. Thank you so much, Jason. That was fantastic, very inspiring. With that last question, you leave us hanging there. And talking about questions, there's one question I'd like to pose to you, from Emilio. He's asking: Jason, can we also act on the behavior of the animals? So behavioral research, and I'm hoping that's the intent of the question, is very important. Understanding behavior through the application of machine learning is a very new field, and one that is going to be revolutionized by it: detecting an animal in a photograph, understanding its behavior across a video sequence, is going to be very important. If, instead, the intent of the question is to ask whether research can impact animal behavior, the answer is also yes. As we collect data about wildlife, we need to be cognizant that we don't, for example, encourage over-photography by eager members of the community, who can interfere with an animal's life cycle, get in its way, mob it, touch it, et cetera, and that we don't reveal, for example, location information that can be used for photography or for harassment. Okay. And one of the things you mentioned at the beginning was that conservation is underfunded. Maybe one of the reasons is the cost of technology. How expensive could this be? Could this be one of the reasons? And how do you get funded, financed? Excellent question.
So underfunding of conservation research is a societal and structural problem. It's not the prohibitive cost of technology; in fact, some of the best innovation happening in conservation research is happening because the licensed, for-profit equivalent is too expensive, and so people in an open-hardware and open-software ecosystem are innovating very, very rapidly to create low-cost solutions that are actually more effective than if you just went out and bought the expensive piece of technology. So I really put underfunding of conservation research in the same bucket I would put under-attention to global warming, emissions caps, and lowering emissions. It's a social problem. And I do expect there to be more attention to this problem as things get more and more dire, whether it's global warming or extinction, but the question is whether that social attention is going to be too little, too late. Well, we hope not. Jason, you also said that we can all do something, we can use our skills and contribute. As you said, you're not using the pipeline for data from individuals now as much as you are getting that data from researchers and biologists, if I understood correctly, but nevertheless, we can all contribute. I suggest people visit your website, Wild Me, and read about all the data and information you provide there. Actually, there's a very interesting article on Wild Me by Forbes from January this year, 2021, where it says that in 80 years, 38 percent of all species will be extinct if we don't take any action. That's quite a big statement. Jason, is this correct? Likely, yes, unless we turn things around. I mean, the great part is the opportunity is there. We're no longer talking about conservation research as this distant thing. When I grew up, I watched these Mutual of Omaha documentaries about wildlife, and it was always animals over there, in some far-flung place, being studied by people with PhDs.
The fact of the matter is, technology has opened up an avenue for participation, for a lot of us to be concretely involved, even if we're working remotely, far from the animals, applying our skills to make a difference. And some of these modern technologies are revolutionizing how fast we collect data, how we get through it, and the inferences we can make from that data, helping us to really understand the scale of the problem and maybe do something about it in time. So while, yes, it could go very badly, I'm encouraged by the fact that we actually have better opportunities than ever before to do something about it. I love my day job and being a part of this. It's not depressing at all; I don't go through my day with a sense of hopelessness. Very inspirational, Jason. And we hope so. I'm going to leave the audience with the question you posed at the end of your keynote, which is: how can your skills help discover and protect? There's so much we can do. This is the good news. And this is the Big Things conference. What is bigger? What is there bigger than this? So we couldn't have a better closure, a better keynote to finish this year's edition. Jason, thank you so much. Jason Holmberg from Wild Me, its executive director. We'll stay tuned, keep taking care of the planet, and keep following what you're doing at Wild Me. Thank you so much for being the last keynote speaker of this year's edition. Thank you, Jason. Thank you very much. And congratulations to all the team at Wild Me for this fantastic job.