Our next presentation is by Roger Fong and the Picterra team, talking about a codeless deep learning interface to segment and count objects. Hi everybody, my name is Roger and I'm a machine learning engineer at a startup called Picterra, based in Switzerland. To start things off I'll give you our introductory spiel. Picterra is a Swiss company created in 2016, based in Lausanne. Over the last five years we've established ourselves as a leading solution for prototyping, building, and deploying high-performance customized machine learning models, with the Picterra platform at our core. We currently have more than 100 customers and 9,000 platform users worldwide who extract meaningful geospatial insights from Earth observation imagery using cutting-edge machine and deep learning. Our ultimate goal is to democratize access to these insights. So, in a phrase: EO (Earth observation) and AI for all. Let's talk a bit more concretely about what the Picterra solution actually is. I would divide it into four parts. The first part is imagery acquisition: we can help you get satellite imagery over your areas of interest automatically, or acquire drone data through our partner network; if you already have your own data, that works as well. The second part is training your own deep-learning-based detectors intuitively with our UI, with no coding and no machine learning background whatsoever. The third step is to take the deep learning models you train on the platform and deploy them at scale on our GPU-enabled infrastructure, so that you don't have to deal with the hassle of managing any of the hardware yourself. And finally, you need to take the detections or segmentation results from these detectors and visualize them in some kind of dashboard, and we'll help you with that part as well. However, for this presentation we'll focus on just number two, since we're a bit short on time.
So, deep learning detector training. This can be divided into four simple steps, and each step is associated with a drawable element. The first is training areas: yellow boxes that you draw over your geospatial or Earth observation imagery, specifying which parts of the image the model will use for training. You can see here we've selected a few areas of this sheep farm for the deep learning detector. Next, you go into these areas and annotate, or outline, the thing you're looking for. In this case we're counting sheep, so we annotate the sheep. Simple enough. Next is to preview the results in your testing areas, these red boxes; training the model and generating these results in your testing areas only takes a few minutes. And that's really the key here: you want to iterate over your dataset to create an effective, minimal dataset. So you train your model with your training areas and annotations, just a handful; you preview the results in your testing areas; you see where it's doing poorly; and then you adjust your training areas or add new ones to improve your model over time. You can also get a score for your model if you draw accuracy areas and outline the sheep in those areas as well. The model will then output detections, compare them to your drawings, and report a score: precision, recall, area accuracy, shape accuracy, whatever you're most interested in. When the performance of your model is satisfactory, you can run your detector at scale across the rest of your imagery and then proceed to put that into a dashboard of some sort, et cetera. So now we'll talk about a specific real-life use case. These images were provided by the University of California, Santa Cruz, over Año Nuevo Island off the shore of California.
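To make that accuracy-area scoring idea concrete, here is a minimal sketch of computing precision and recall by matching a model's detections against hand-drawn outlines. It is an illustration only, using simple axis-aligned boxes and an IoU threshold I chose myself; the function names and matching rule are my own assumptions, not Picterra's actual scoring code.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def precision_recall(detections, annotations, iou_thresh=0.5):
    """Greedily match each detection to at most one hand-drawn annotation."""
    unmatched = list(annotations)
    true_positives = 0
    for det in detections:
        best = max(unmatched, key=lambda gt: iou(det, gt), default=None)
        if best is not None and iou(det, best) >= iou_thresh:
            true_positives += 1
            unmatched.remove(best)  # each annotation can be matched only once
    precision = true_positives / len(detections) if detections else 0.0
    recall = true_positives / len(annotations) if annotations else 0.0
    return precision, recall
```

With two annotated sheep and one correct detection out of two, this reports precision and recall of 0.5 each; on the platform the same idea extends to polygon outlines rather than boxes.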
The goal here is to analyze the populations of these pinnipeds, which are seals and sea lions, and these birds, the cormorants. You can see them here: here are the seals and sea lions, and here are the birds. We went through the exact same process we just showed you with the sheep, but for these animals instead. From that, we were able to generate detections of these animals at scale across the entire island. This is just one image here, but there are actually images from across the entire year, so we'll see that in a second. So you can generate the stats here. Something interesting to note is that the entire process of training these detectors and running them at scale across all the images of the island took five hours total. Compare that to a PhD student in the UCSC lab counting manually: there are up to 10,000 seals per image, and we estimated it would take up to 18 days to count them by hand. And finally, we have to actually gain some insights from the detections. If you run the detections across all the different dates and plot them in a graph, you can already see that the seal populations are more active on this island in the summer and early fall and less active during the winter. There's also, interestingly, this event here around June or July, where the seal counts are strangely lower. We discovered later that this was due to a boat that got too close to the island and scared away a lot of the seals. So we actually gained some insight into events occurring around this island as well, just by looking at this visualization of the results. Finally, let's talk a bit about the future. We have these new and upcoming features called blocks, workflows, and the ecosystem. A block is basically just a piece of code, and it could do anything, really: it could acquire imagery from a data source, or run pre-processing on the image.
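As a thought experiment, a block can be modeled as any callable taking one payload and returning the next, and a workflow as a chain of such callables. This is purely an illustrative sketch of the blocks-and-workflows idea, with made-up block names and a made-up payload shape, not Picterra's actual API:

```python
from typing import Any, Callable

# A "block" is just a piece of code: one callable from payload to payload.
Block = Callable[[Any], Any]

def workflow(*blocks: Block) -> Block:
    """Chain blocks into a single pipeline; each block's output feeds the next."""
    def run(payload: Any) -> Any:
        for block in blocks:
            payload = block(payload)
        return payload
    return run

# Hypothetical blocks for an imagery-to-report pipeline.
def acquire(aoi: str) -> dict:
    """Pretend to fetch imagery over an area of interest."""
    return {"aoi": aoi, "image": "raster-placeholder"}

def detect(data: dict) -> dict:
    """Pretend to run a trained detector on the imagery."""
    return {**data, "detections": ["seal", "seal", "cormorant"]}

def report(data: dict) -> str:
    """Summarize detections for a dashboard."""
    return f"{len(data['detections'])} objects detected in {data['aoi']}"

pipeline = workflow(acquire, detect, report)
```

Calling `pipeline("Año Nuevo Island")` runs the three steps in order and returns a one-line summary; a real workflow would swap in actual acquisition, model inference, and reporting blocks.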
A block could also be your own ML algorithm developed in your research lab; it could create reports, run post-processing, et cetera. Workflows are then chains of these blocks that, put together, can produce more complex solutions customized to your needs. Because a key thing to note here is that what you really need is not just a detector by itself; you need an end-to-end solution that creates insights, going from imagery to some kind of dashboard or visualization. The detector that you build is just one block, one step, of this workflow. And we hope that in the long run we can generate what we call the Picterra ecosystem effect, where a collection of workflows and blocks is shared by the Picterra community. So hopefully you will help us collectively push towards the goal of democratizing Earth observation intelligence. As I said, these features are currently in development, so if you'd like to get involved, please do give us a call. As you can see, the Picterra solution and platform are super flexible: seventy percent of the SDGs, the Sustainable Development Goals, can be monitored from Earth observation using the Picterra platform. Of particular relevance to this conference would be SDG 12, responsible consumption and production, as well as SDG 14, life below water. We hope to create, and be part of, a community that can make a difference in these SDGs by harnessing the power of Earth observation imagery and artificial intelligence together. And that's it from us, so thanks for listening. Thank you very much for sharing, Roger. I've got one small complaint: Picterra should really be "Pic-aqua". It does take me back to measuring turtle tracks on a beach, which we did with a camera setup that took pictures at six o'clock in the morning.
Every morning we ended up with many, many pictures of many, many beaches, and I could see the same training mechanism helping us do those counts, which would give us the period when turtles would be nesting, give us the peak period, and let us see whether that was changing with climate change and such. So I can very much see that this would work for a number of solutions. My question to you is: what is the flexibility of Picterra in offering the platform, or the capabilities you've developed yourselves, for users to trial their own applications? So this is a bit of a tricky question, because we are a startup, of course, so the answer is always changing. If you had asked me that question one month ago, it would have been easy: you could sign up for a free trial yourself on the platform. You wouldn't have had to talk to us; you would just register and get a certain number of processing credits, as we call them, to train your own detectors and then run them on your own imagery. As of a month ago, we changed that: now you have to contact us and we'll discuss with you, and if you meet certain qualifications, or your use case is something we're willing to spend time on, then you get access to the platform and we'll work with you there. That's because we started off with a very horizontal focus, and then we realized, as a startup, that in order to survive we had to narrow down our focus to more specific verticals, which is what drove this shift in the type of plan. But it's still definitely possible for you to get involved with the platform itself; you just have to contact us. And, going beyond the question here, we do have some other researchers doing things like detecting beluga whales and other animals in snowy areas, which I know are not aqua, but other environmentally related tasks. I can't hear you. Max, if you've got a question?
Yeah, a kind of question. I think the accessibility of the platform you've developed is something really important for people working in a more scientific domain, producing really cool algorithms for detecting species in one specific domain, for example. The work you've done to create such an accessible interface, I mean, I've used it for counting species of fish on drying racks from photos from a mobile phone. So it's not just drones and satellites; it's a very flexible system, and I hope you continue to explore new avenues with the development of the platform. I think what you demonstrate is a fully functioning system that's really accessible to people. Do you plan on keeping heading in that direction, do you think? Yeah, definitely, the long-term goal is still the democratization of AI on all these sorts of imagery. Right now we have what we call our solution-based approach, where clients contact us; that's really a temporary thing, because, long story short, we're still pre-Series A. We're trying to get funding, and it's a whole battle with the investors and all that kind of thing, the usual startup drama. But the long-term goal is definitely still to democratize this platform for everyone's use. That's also part of the reason we invested time into developing blocks and workflows. We know that people don't necessarily want to use only the detectors they can build on our platform; they sometimes want to run their own machine learning models and take advantage of the architecture we've created, the scalability, and the fact that they don't have to worry about having GPUs.
So that's part of the reason, too: eventually we would like to open the platform up for people to run their own work at scale and really produce some meaningful results. Anton, have you got any questions to add? A really neat presentation, my compliments. Maybe a bit about cost: if you go for very high resolution images, you probably have some cost aspects; how do you deal with those? Yeah, so, high resolution imagery. Satellite imagery is super expensive; that's one of the battles we have to fight. People look at Google Maps and think, oh, we get that for free, right, it must be super cheap. No, not at all. In fact, satellite imagery is ridiculously expensive; sometimes for just a few square kilometers, say a 30-centimeter-resolution image of, I don't know, New York or something, it's going to cost you like $100 for that one image, which we know is a huge hurdle. We try to work with people to use resolutions as low as possible. We have some research projects in progress trying to get 30-centimeter performance from 50-centimeter imagery using techniques such as super-resolution, but that's somewhat outside our core research domain. And of course we're also trying to establish a larger drone-provider network to help people get access to that sort of imagery. But you're definitely right that the cost of imagery is a major hurdle, and a lot of our current clients come with their own imagery. This is also one of our strategic bets for the future: as Earth observation imagery becomes more accessible, its cost will go down over time as well. So those are the two main bets of Picterra: the cost and accessibility barriers of imagery go down, while AI technical capabilities go up, and in five years or so the merger of those two trends will create a platform that's usable by everyone. Thank you very much.
That was an interesting talk. Thanks, Roger. So again, Max, sorry. I was just thanking Roger. Oh, thanks. Excellent presentation.