Awesome. Well, my name is Ishaan, and I'll be talking about a product that we're building here in CryptoEconLab called Atlas.storage. Atlas is a decentralized geospatial data platform. The agenda for today: we'll talk a little bit about the vision, the roadmap, and why we think this is an important product to build, and then I'll go into the market opportunity, specifically the Filecoin economics and the geospatial market. My slides might be out of date here, so we'll see if I have a missing slide or not.

The vision for Atlas is to build a platform that stores all of humanity's geospatial information on the Filecoin network, makes it available via APIs and SDKs, and integrates and shares other organizations' geospatial data with the broader community. The second part of Atlas's vision is to build a community that advances geospatial datasets and Web3-native software and technologies.

So what does that actually mean? I think there's a unique opportunity here to make geospatial data more accessible and usable for researchers, scientists, engineers, and entrepreneurs. Right now geospatial datasets are highly fragmented. If you want geospatial data from the U.S. government, you can find some of it on data.gov, but it's all over the place, it's not easy to use, and the APIs are not easy to use. We want to provide an end-to-end service that supports storage and computation with smart contracts to unlock use cases that didn't exist before.

What that really means: think of an example where you're a homeowner and your home gets destroyed by a hurricane or another natural disaster. A property insurance company that has a policy on that home could use Atlas to pull geospatial data such as satellite imagery or natural disaster data, run some computation on it, in this case maybe a machine learning model, tie the output of that computation to a smart contract, and fulfill a transaction based on the output. Within seconds you could say, it looks like three quarters of your house is destroyed, so we're going to pay X amount of the claim. That's a use case that could be completed in seconds or minutes after the data on that property becomes available. In most cases where people use geospatial data to make real-life decisions, those decisions happen in weeks or months, not minutes or days, and in these examples, especially with natural disasters, days is what homeowners who have been displaced from their house really need.
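To make that insurance example a bit more concrete, here is a minimal sketch of how the three steps (retrieve imagery, run a model, settle on chain) could compose. Everything in it is hypothetical: the helper functions are stubs standing in for real components, and Atlas does not expose this API today.

```python
# Hypothetical sketch of the hurricane-claim flow described above.
# The three helpers are stubs standing in for real components
# (imagery retrieval, a damage model, an on-chain settlement call).

def fetch_post_event_imagery(bbox, after):
    """Stub: would query the storage layer for post-event scenes over the parcel."""
    return ["scene-001", "scene-002"]

def estimate_damage_fraction(scenes):
    """Stub: would run an ML model over the scenes; returns fraction destroyed."""
    return 0.75

def settle_claim_onchain(policy_id, payout_fraction):
    """Stub: would call a smart contract to pay out a share of the coverage."""
    print(f"Paying {payout_fraction:.0%} of coverage for policy {policy_id}")

def process_claim(policy_id, parcel_bbox, event_date):
    scenes = fetch_post_event_imagery(bbox=parcel_bbox, after=event_date)
    damage = estimate_damage_fraction(scenes)
    if damage > 0:
        settle_claim_onchain(policy_id, payout_fraction=damage)

process_claim("policy-123", (-80.2, 25.7, -80.1, 25.8), "2022-09-29")
```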
The second part of our vision is a community-driven approach where the community is passionate about finding useful geospatial datasets, uploading them to the Filecoin network, and making them accessible. Our team doesn't want to be the only group of people storing geospatial data on the Filecoin network and doing all the work. We want the community to get involved, with community members helping other community members; that creates a better flywheel for getting more data onto the Filecoin network, and the community can decide what they think is useful, as opposed to us as a team deciding what we think is useful.

This is actually a picture of Landsat 9, a satellite that's in space right now, operated by the U.S. government through NASA and USGS. Landsat is a program that started in 1972 and has been capturing pictures of the Earth for the last 50 years, beginning with Landsat 1. We're at Landsat 9 now, and the Landsat program in aggregate has collected about 5 to 7 petabytes of imagery and metadata. It's a great candidate for getting a geospatial dataset of this size onto the Filecoin network, so we're going to start by storing satellite imagery and making the images and scenes available in Landsat accessible through retrieval APIs. After that we'll move on to other satellite imagery, low-flying aircraft imagery, and so on, and then to other types of geospatial data: climate, water temperature, land temperature, land classification. These are all geospatial datasets that are really useful to different people depending on the use case. Land classification, for example, is really important for the agriculture and insurance industries, for figuring out which crops grow well where.

Finally, we want to partner with existing geospatial data companies to integrate their data into Atlas. There are a lot of companies that already buy or download these geospatial datasets and create proprietary data on top of them. For example, there are firms building analyses of properties throughout the U.S. or Western Europe. They might be Web2 companies, but they may want to get into a Web3 business model, or gain a distribution channel for their B2B business that didn't exist before. We'd get them onto the Atlas platform so that data can be accessible to the growing Web3 community that wants to build really cool technology and apps. One more goal: being part of Protocol Labs, we're big on open source and open standards, so all of this is going to be done in an open-source way, following open standards such as STAC and Cloud Optimized GeoTIFFs, if anyone here is a geospatial nerd and knows what those words mean.
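Since the roadmap mentions STAC and retrieval APIs, here is a small sketch of what querying Landsat scenes through a STAC catalog typically looks like today, using the open-source pystac-client library. The catalog URL and collection name are illustrative examples of existing public services, not Atlas endpoints, and Atlas's own API may well differ.

```python
# Sketch of a typical STAC query for Landsat scenes using pystac-client.
# The catalog URL and collection id are illustrative, not Atlas endpoints.
from pystac_client import Client

catalog = Client.open("https://earth-search.aws.element84.com/v1")

# Search for Landsat Collection 2 Level-2 scenes over a bounding box and date range.
search = catalog.search(
    collections=["landsat-c2-l2"],
    bbox=[-122.5, 37.7, -122.3, 37.9],   # roughly San Francisco
    datetime="2022-06-01/2022-06-30",
    max_items=5,
)

for item in search.items():
    # Each item carries scene metadata plus asset links (Cloud Optimized GeoTIFFs).
    print(item.id, item.datetime, list(item.assets))
```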
So why do we think Atlas is a good idea, and why do we want to store geospatial data on the decentralized web? One, geospatial data is core to what we do. There are lots of use cases in insurance, property, construction, lending, and agriculture where this kind of data is used every single day. Because of that, there are hundreds of petabytes of geospatial data out there, and bringing that kind of data to the Filecoin network has a lot of powerful network effects.

Second, physical and virtual worlds have been integrating a lot recently. Think of metaverses, play-to-earn games, or other games where something happens in the physical world and something then happens in the virtual world based on that action, or vice versa. We'd be able to support the kinds of products people build around that. And Atlas's mission aligns really well with Filecoin's mission. A lot of people have probably seen this slide a couple of times by now, but the mission of Filecoin is to create a decentralized, efficient, and robust foundation for humanity's information. We're very aligned with that: for us it's the same thing, but for geospatial data, and for making that geospatial data accessible to everyone. Finally, the economics of the Filecoin network make it much better for storing geospatial data than centralized cloud computing platforms. I'll get into that later, but Filecoin's economics are a huge boon for geospatial datasets.

Now a little bit about the market opportunity, why Filecoin is great for geospatial datasets, and the geospatial market overall. The market size for satellite and aerial imagery is estimated to reach $30 billion by 2030, and that's just image capture, meaning actual satellites, aircraft, or drones taking images and people reselling those images. That doesn't include the analytics people do on top of it, which is estimated to be a $110 billion market by 2026. So you can tell that over the next decade or two imagery is going to become a commodity, and the real value will be in the analytics people can extract from imagery, and in structured non-image data.

Also, at current prices, commercial satellite imagery can cost $20 per square kilometer to store on centralized cloud computing platforms. For context, just the U.S. land area would cost about $180 million USD per year to store, and the world's land area would cost about $3 billion. With Filecoin you can drop these costs by orders of magnitude, and that drives real value. If you think about storing data, someone with a really high cost of storage needs to monetize that data, which means charging some price, even if they're just trying to break even and not make a profit. If you can drop the cost by orders of magnitude, you can decrease the price significantly for the users or customers of that data, which means many more people can access it. This is where the Filecoin network really shines for really big datasets, because right now really big datasets cost a lot of money to store on AWS or GCP or wherever.
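As a sanity check, the quoted cost figures line up with a quick back-of-the-envelope calculation; the land areas below are rounded public estimates, not numbers from the talk.

```python
# Back-of-the-envelope check of the storage-cost figures quoted above.
# Land areas are rounded public estimates; the $20/km^2 price is from the talk.
PRICE_PER_KM2 = 20          # USD per square kilometer per year
US_LAND_KM2 = 9.1e6         # ~9.1 million km^2 of U.S. land area
WORLD_LAND_KM2 = 1.49e8     # ~149 million km^2 of global land area

print(f"US:    ${US_LAND_KM2 * PRICE_PER_KM2 / 1e6:,.0f} million per year")    # ~$182 million
print(f"World: ${WORLD_LAND_KM2 * PRICE_PER_KM2 / 1e9:,.1f} billion per year") # ~$3.0 billion
```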
A quick recap: Filecoin is great because it has all this capacity. As of May 2022, about four or five months ago, there were 17 exabytes of storage capacity, and there are hundreds of petabytes, if not exabytes, of geospatial data available to store. The combination of storage, computation, and smart contracts is where we're going to drive the real value over time, because storage is a commodity, and as Juan said yesterday, computation and smart contracts are where the real value is going to be generated.

We need a lot of help building this. If you're a researcher, entrepreneur, or engineer who's interested in using our products, please contact us through the URL or Twitter, or in the #atlas channel in the Filecoin Slack. If you're a storage provider who wants to store geospatial data, this could be a great use case for large storage and retrievals. And if you're someone who's super passionate about geospatial data and wants to join the team and help us build the future we're envisioning, please reach out; we definitely need all the help we can get. Cool, thank you, and if we have time I can answer some questions.

Yeah, a brief question. Have you thought about the go-to-market for this? How are you thinking about attracting people that have data to store to your product? What's going to be the public-facing function on your end?

Yeah, great question. Our first goal is to prove that this works and that we can do retrievals from APIs relatively fast with satellite imagery. There's Landsat, which is U.S. NASA and USGS, and there's the European Space Agency, which has satellites that have been flying since 2014 or so. Once we prove that, our goal is to build a community that can, through some voting mechanism, decide which geospatial datasets they want to store on Atlas; that's one path of our go-to-market strategy. The other path is to work with current geospatial providers who are in the Web2 business and might want a new distribution channel for customers: get them onto the Atlas platform, connect to their APIs directly without them having to store data on IPFS or Filecoin, and get people using those products. Did I answer your question?

Do you plan to have a governance token like Ocean Protocol does?

Great question. We don't know. Before we get into things like how the data and the entity will be organized, we're going to focus on more use cases and let the use cases drive whether we'd use a token, use Filecoin, or go with a completely non-token approach.

Also, is this data served to oracles to get it on chain?

Sorry, served to what? Oracles. I don't think we'll store the data on chain itself, but I do think we'll end up integrating with oracles if they want to get data for their use cases outside of Atlas.

Because you mentioned metaverse projects or gaming projects which react to certain real-world geospatial data. I think you said that a few moments ago. Yeah, yeah.
So if that game is on chain, you would need that data, whatever you have in Atlas.storage, to get on chain, and then it can react through a smart contract. For example, if you have a land-based NFT, it could change its appearance in accordance with real-world data. That's why I asked the question.

It's a good question; it's just that we're so early that it's hard to know what the use case will be. That is a possible use case in the future, but for now we're focused on simpler use cases with simpler applications.

I'm trying to think about how it works under the hood. I'm wondering about cold storage versus cached, ready-to-go files. Is the idea that you've got the whole surface of the Earth, or the whole surface of the U.S., and you're storing that, and then everybody's asking for New York City, or everybody's asking for this new construction neighborhood outside of Chicago, so you cache those, but the actual economies and cost savings you're getting are from the cold storage part? Is that how it works in Filecoin: the cold storage part is where you get the cost savings, and the tools for caching and serving are still conventional Web2 tools?

Yeah, exactly, that's exactly how we're thinking about it. But also, in non-critical business applications, people run big processes or jobs where they'll leave the computer on for six hours and let it run, so being able to pull from cold storage is actually an okay use case at the beginning. When we think about applications we tend to think about retrieval times in milliseconds or sub-one-second, but there are a lot of applications with a lot of analysis happening where you don't need sub-one-second, and that would be one great application of this. We do also plan on using a caching layer to make the satellite images or the geospatial data that is queried the most available for really fast retrievals. Nice. Yeah.
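The hot-cache / cold-storage split described here is essentially the standard cache-aside pattern. A minimal sketch under assumed helper names (nothing here is a real Atlas or Filecoin client call) might look like:

```python
# Minimal cache-aside sketch of the hot-cache / cold-storage split discussed above.
# The cache dict and retrieve_from_cold_storage are hypothetical stand-ins.

cache = {}  # stand-in for a CDN or object-store caching layer

def retrieve_from_cold_storage(scene_id):
    # In practice this would be a slower retrieval from a Filecoin storage provider.
    return b"...scene bytes..."

def get_scene(scene_id):
    data = cache.get(scene_id)
    if data is None:
        data = retrieve_from_cold_storage(scene_id)  # slow path, cheap storage
        cache[scene_id] = data                        # fast path for popular areas
    return data
```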
Well, first, thanks for the presentation. I think it's a very useful product; it's definitely one that we need, and I know that because I've worked on integrating and transforming this kind of data. A common use case is that you have raw data, and sometimes you actually have more than one raw dataset, for example four, and you want to compute a new one. For doing future extrapolations, for example, you need to apply operations on each bit of data that you have. So not only do you consume a lot of data, you generate a lot of data, and usually the closer you can get to the edge to compute and generate that, the better. Do you have any thoughts about that? Maybe Atlas could be integrated with the FVM or some kind of compute solution?

Yeah, so I don't think we're going to decide right now what compute solution we're going to use, but in terms of compute teams working in Protocol Labs, having integrations with the FVM and a compute-over-data team such as Bacalhau is definitely on the roadmap. And this is a good point, because the compute jobs are actually more valuable than the data itself, mostly because geospatial imagery is, or will very quickly become, a commodity. So for us, at least in my opinion, it's really key that we build out these integrations. I don't think it will be just one compute platform, either; I'm hoping we'll integrate with multiple compute platforms, because many of them have their own use cases and, depending on what they want to achieve in terms of performance or security or whatever it may be, they're very different from each other.

And a second question. The datasets that have the most value added are actually post-processed datasets. As I said before, you take five raw datasets and compute a new variable of interest; one common example is climate analysis, where you look at the past, run the model over the past, and generate synthetic data. Is it in the scope of the project that we'd also be able to include those synthetic or value-added datasets, beyond the raw datasets?

Sorry, I didn't understand your question.

Suppose I'm a user, I've extracted datasets, I've computed on them, and I've generated a new dataset with the same granularity that has much more value because of the computation embedded in it. Is the platform going to make it easy for me as a user to publish that dataset so others can use it?

Yes, that is definitely on the roadmap, and it's a technical challenge we're going to have to solve. Say you've done some computation and created more data on top of a dataset that already exists. I don't know right now how trivial it would be to get that data onto an IPFS node or the Filecoin network without you running a node yourself, which I don't expect anyone to really do; we want to abstract that away. So it's definitely on the roadmap, but it's a fairly far-looking thing that we'll eventually get to.
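Purely as an illustration of the "publish a derived dataset" flow being discussed, a sketch might look like the following. The AtlasClient class and its upload method are hypothetical stand-ins; no such SDK exists today.

```python
# Speculative sketch of publishing a derived dataset back to the platform.
# AtlasClient and upload() are hypothetical; no such SDK exists today.

class AtlasClient:
    """Stub client: a real one would pack the data, make storage deals, and
    register the dataset in a catalog, so users never run an IPFS/Filecoin node."""
    def upload(self, path, metadata):
        print(f"uploading {path} with metadata {metadata}")
        return "dataset-abc123"

def publish_derived_dataset(client, source_ids, output_path, extra_metadata):
    # Record provenance: which raw datasets the derived product came from.
    metadata = dict(extra_metadata, derived_from=source_ids)
    return client.upload(output_path, metadata=metadata)

dataset_id = publish_derived_dataset(
    AtlasClient(),
    source_ids=["landsat-scene-1", "reanalysis-temps"],
    output_path="climate_extrapolation.tif",
    extra_metadata={"license": "CC-BY-4.0"},
)
```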