And with that, I will turn it over to Gerald Bowden at NASA Headquarters. Okay, excellent. First, sound check, can you hear me? Excellent. Next slide, please. First of all, I want to thank everybody for joining our town hall meeting today. We're in an exciting time right now with regard to geodetic imaging and space-based SAR. We are working right now with NISAR, which has an anticipated launch of January 2024, almost a year and a half away, and which will provide a wealth of data to serve all the different science and applications communities that can take advantage of synthetic aperture radar data. And with SDC, the Surface Deformation and Change study, NASA is now committed to a mission that will follow NISAR. We're not necessarily calling it a NISAR follow-on mission, because we're not required to simply maintain what NISAR has, but NISAR is also a very good platform to work from. I won't quite call it a golden age of SAR, but a lot of really good data will be coming online in the coming years and over the decade, and it's great to have NASA really focus on producing these ongoing data streams. What we're looking at now came out of what's up on the screen here: the 2017 Decadal Survey. NASA gets its guidance for the missions it launches from the National Academies' Decadal Survey. It comes out, yes, every 10 years, and it provides a roadmap for NASA's ongoing missions. It identified five designated observables. Next, please. You can see them around the outside: Aerosols; Clouds, Convection and Precipitation; Mass Change; Surface Biology and Geology; and Surface Deformation and Change.
So if you look at it, we're looking at the air and what's in the air, how water is moving around the planet for the most part with Mass Change, how the surface is moving, and then the life on the planet with SBG. So next, please. And so each one of these missions is going to be... oh, can you back up? There's supposed to be an animation there; it looks like maybe it didn't go through. Without the animation, can you move to the center of the slide and see if things advance or not? Okay, we missed out. Okay, go ahead and back up. Let me just finish placing things in context here. We're grouping all of these satellites into something called the Earth System Observatory, or ESO, which is what you'll start hearing us say. We're going from having a single mission look at a common phenomenology to having multiple missions all looking at the dynamic Earth. If we pick on volcanoes: volcanoes have land surface deformation, you've got mass change, you've got ash that goes up in the air and impacts weather. So all the different missions within ESO, the broad observatory, can help us better understand the dynamics of volcanoes, climate change, and a lot of different processes. So next slide. With ESO, what NASA has decided to do is have NISAR kick the whole Earth System Observatory off. NISAR is a mission that came from the last Decadal Survey, in 2007. So NISAR will be the first mission to launch out of ESO, and then SDC will bookend it at the other end. What that means for the geodetic imaging community is that it can anticipate a steady flow of data at the beginning, throughout, and at the end of the lifetime of ESO. This is a great opportunity for us to be able to study all the different processes. SDC, the Surface Deformation and Change study, is going to consider the entire program of record.
So we're looking not only at what SDC has to offer, but also the data from, say, the European Space Agency's Sentinel-1; we're looking at the whole international constellation. We're also evaluating what commercial data can be used to either help advance the science identified in the Decadal Survey or to help cover specific parts of the overall science and applications that we plan to advance with SDC. We do have budget caps within these missions, and so what we're anticipating is that we'll actually need international partnerships to fully realize the vision identified in the 2017 Decadal Survey. Next slide, please. The actual architecture study was proposed back in 2017-2018. The Decadal Survey identified an architecture that would really look at geodetic imaging, the interferograms, looking at the deforming Earth's surface that you are able to measure through the interferometric phase. That is the geodetic part of radar. NASA Headquarters, myself included, added onto the study to make sure that we're not overlooking the science opportunity that comes out of the backscatter. Both of these pieces are being looked at, but they're also being kept separate throughout the study; we'll hear a lot more about that later this morning. We're also looking at how to best engage industry, not only for the data piece, but also for hardware that might be added into the architecture. And we're looking at what new technologies we're able to use. The vision behind all these Decadal Survey, or ESO, missions is that the objective is not to just be a follow-on mission. We're trying to see if there's disruptive technology, if there are other capabilities that we're able to add to the overall NASA fleet. Next slide. So here's the notional timeline that we have with SDC. The study began in FY20, October of 2019.
You can see where we had workshops and brainstorming sessions, and we ended up with a large list of potential architectures. Then we began evaluating how each of the different architectures would meet the different science and applications needs. Then we came down to a point where we take a subset of all these architectures and study the ones that will most likely have the scientific and application benefits: a more detailed study, both from a costing and architecture piece and from what science each is able to contribute to. And then we get to a point where we narrow these down to three, on which we do a deeper dive on each one until we actually have a final architecture. Next, please. As I mentioned with the ESO piece, NISAR is a trailblazer for SDC, and we actually need to have lessons learned from NISAR. So next, please. Here's a window period that was not initially part of the study that we want to take advantage of: what are we learning from NISAR? What can we take from the incredible flagship mission that NISAR is, and how can we fold those pieces into the SDC architecture and so forth? Given that NISAR was delayed due to COVID, all things COVID, working with our partners in India, everything ended up sliding to the right. Next slide. So the architecture studies where we're going to do a deep dive have all slid over to the right as well. If I had done the slide properly, these three long arrows would have started further to the right, and the magenta area would also have slid to the right. At some point, most likely mid FY 2025, we'll be working on narrowing these down. This could also slide to the right, but this is where we ultimately end up choosing what architecture SDC will come to.
The SDC launch window right now is probably no earlier than 2029 or 2030, and it may be closer to 2031. This is all based on the budget profile of the different ESO missions, because you've got ACCP, Mass Change, and SBG that are all in front of us. And we also have this time period where we're waiting on lessons learned from NISAR. Next, please. So where are we today? We're in this area where we've got all the different candidate architectures, and we're now working on taking it from this cloud where we had a whole stack of different architectures, 40 or 50 some-odd, narrowed down to about 12, to move forward with a more detailed study. The rest of the town hall is going to go through the process we used to get from that broad stack of architectures to the ballpark of 12 or so to study in more detail. And I'm going to hand you off to the study team, because they're the ones that have rolled up their sleeves and spent countless hours working on different scenarios and truly understanding the engineering, what's possible from the technology that we have today and the technology we're anticipating in the near future, as well as the different science opportunities that we have. So with that I'll hand you off to Steve, I believe, or whoever is next. Yes, I'm next. Thank you, Gerald. Okay. I am going to go over just a quick introduction. I'm Steve Horst. I am leading the Phase 2 down-select piece of the architecture study here, and I'm going to briefly review how we came up with the architectures that we are considering. So if you go to the next slide, please. Essentially, when we were going through these architectures, we were looking at identifying different capabilities, based on the goals in the SATM, that we might target, and we identified a number of capabilities that were positive for what we were looking to do.
Number one amongst those is continuity, particularly data continuity with pieces of the program of record such as NISAR. We've heard from the community in the past that that is a very strong desire: you want to be able to take your data beyond the three-year plans of missions like NISAR and create longer time series, and so that's been a key piece of what we've been looking at. We are also considering global repeat times. NISAR is looking at 12-day repeats; we are looking at improving that globally if possible, and for this measurement we consider "global" to mean all land, ice, and coastal regions. We've also been looking at improving local repeat times. There are special events or urgent needs where we need data faster, and we could perhaps sacrifice the global synoptic background-type measurements for an urgent need where we want to gather data locally at a faster rate. That's another capability we were looking to weigh in our architecture trade space. We're also looking at atmospheric error reduction, namely tropospheric error: the water content there is one of our largest error sources for the measurements that we're making. We currently reduce that error through averaging, but if we had a way to estimate it with our measurement, we would be able to remove it and effectively increase the time density of our measurements. We've also been looking at increasing the look diversity. With a mission like NISAR, we are operating in a sun-synchronous orbit, which is highly inclined, and so the diversity between our ascending and descending look directions doesn't provide a lot of resolution for deformation in the east-west direction.
If we change those look angles and increase the number of look angles that we get, we will get a better estimate of all three spatial dimensions of the deformation vector, providing 3D deformation, which can enable types of science that we've not seen to date. We've also been considering synergy with the Surface Topography and Vegetation (STV) observable, which is another observable from the Decadal Survey. It has very similar goals, and one of its possible measurement techniques is radar tomography; in that instance, the SDC instrument and the STV instrument would end up looking very similar. So we're looking at ways of achieving both of those observables' goals with a single system. The two below that, highlighted in green, you can consider descope options. Gerald mentioned the backscatter piece: with the geodetic measurement, we could dial back on the signal-to-noise ratio of the instrument and not produce backscatter imagery that would be considered useful for many of those types of uses, but still get the interferometry piece as defined in the original Decadal Survey. That is one option we're considering, and it could be applied across different architectures. The other place we can dial back is in spatial coverage. We're looking at all land, ice, and coastal regions, but change may not happen across the globe in quite the same priority order, so we could dial back on the spatial coverage in order to specifically target certain areas over others. So those are ways we can dial back. I don't know if Gerald mentioned yet that the guidance we had gotten from the Decadal Survey was around 500 million dollars from NASA from Phase A through Phase F, and so looking at ways to meet the goals within the cost guidance given is something else we are considering. So if you go to the next slide.
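The look-diversity point above, that adding look directions lets you recover the full deformation vector, can be sketched as a small least-squares inversion. The unit line-of-sight vectors and displacement values below are invented for illustration; they are not NISAR's actual geometry.

```python
import numpy as np

# Hypothetical unit line-of-sight (LOS) vectors in east, north, up components.
# The first two looks are similar apart from the sign of the east term, so in
# this toy geometry one horizontal component is weakly constrained; a third,
# squinted look adds the missing diversity.
looks = np.array([
    [ 0.62, -0.10, 0.78],   # ascending, right-looking (assumed geometry)
    [-0.62, -0.10, 0.78],   # descending, right-looking (assumed geometry)
    [ 0.30,  0.55, 0.78],   # extra squinted look direction (assumed)
])
d_true = np.array([0.020, -0.010, 0.005])  # east, north, up motion in meters

# Each interferometric measurement is the projection of the 3D motion onto
# its look direction.
los_obs = looks @ d_true

# Least-squares inversion for the 3D deformation vector.
d_est, *_ = np.linalg.lstsq(looks, los_obs, rcond=None)
```

With all three looks the geometry matrix is full rank and the inversion recovers the 3D vector; with only two similar looks, one component of the solution is poorly determined, which is the resolution problem described above.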
So, the types of architectures that we are looking at when considering those capabilities. The first is what we've described as a flagship-type architecture. When you're operating in a repeat ground-track orbit, the distance between adjacent ground tracks makes up the swath that you need to cover, and under this scenario a single spacecraft would span the entire space between adjacent ground tracks. Adding additional instruments would then simply add to the time series globally, because each would also cover the entire space between ground tracks. This is the standard paradigm for what we've seen with international SAR constellations to date; NISAR will do this with a single instrument, and ESA's proposed ROSE-L mission would use similar coverage techniques. This would give us good continuity with past measurements, and it would give us great global repeat times. The drawbacks are that it is fairly expensive, and it doesn't provide any of the other things we were talking about previously, like atmospheric error reduction or look diversity. So that is one piece; if you go to the next slide. Another architecture we've been considering is taking that distance between adjacent ground tracks and dividing it up into equal sub-swaths, then giving each of a number of smaller spacecraft, spaced around the orbit, one of those sub-swaths. Since those are in the same repeat ground-track orbit, if this were a 12-day repeat cycle, each of those satellites would pass over the same spot on the ground two days later, since this is divided into six here: 12 divided by six gives you a two-day repeat. When pointed at each of these sub-swaths across the ground track, it would give you coverage similar to what we are currently getting with NISAR; however, under an urgent-response or special targeted measurement scenario,
we would repoint those spacecraft to all cover the same sub-swath on the ground, which would give us two-day repeat times over that particular spot. So this would be 12-day coverage globally or two-day coverage over a particular sub-swath, increasing the potential for faster repeat times without greatly increasing the cost of the mission. If you go to the next slide, the next architecture concept we've been considering is what we've been calling a multiple-squint formation. This also uses smaller satellites, with a primary SAR system pointed at the zero-Doppler angle, as is traditional, and two co-flyers flying forward and aft of the mothership in the center. Each co-flyer would be squinted to the same spot on the ground, and those additional look directions give you additional paths through the troposphere from the same spot on the ground. From having those common backscatter points and taking different paths through the atmosphere, you can remove the water content of the troposphere and get to just the deformation you are looking for. The increased look diversity will also give you the three-dimensional deformation estimates we were previously mentioning in terms of that capability. It does have some drawbacks, in that you are putting some of your resources towards a formation of three spacecraft as opposed to densifying your time series of measurements, but it does provide unique additional capabilities as well. So, go to the next slide. The next architecture concept we are considering is lowering the inclination of the orbits. This would be to improve the revisit times over the non-polar regions, in the middle latitudes. It would sacrifice measurements at the polar regions, which are a key piece of our SATM, so we would be making some sacrifices here.
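On the tropospheric error that Steve mentioned earlier being reduced through averaging: a toy simulation, with invented noise levels, shows the familiar behavior that stacking N independent interferograms shrinks the tropospheric error roughly as one over the square root of N, which is the time-density cost the multi-squint estimate would avoid.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stacking sketch (numbers invented): a constant deformation signal
# buried under independent tropospheric noise in each interferogram.
signal = 0.01        # 1 cm of deformation
tropo_sigma = 0.02   # 2 cm tropospheric noise per interferogram

def stacked_estimate_std(n_ifgs, n_trials=2000):
    """Average n_ifgs noisy interferograms; return the spread of the estimate."""
    noise = rng.normal(0.0, tropo_sigma, size=(n_trials, n_ifgs))
    estimates = signal + noise.mean(axis=1)
    return estimates.std()

sigma_1 = stacked_estimate_std(1)    # roughly tropo_sigma
sigma_16 = stacked_estimate_std(16)  # roughly tropo_sigma / 4
```

Averaging 16 interferograms cuts the error about fourfold, but it takes 16 acquisitions; an instantaneous tropospheric estimate would get there without spending the time series.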
We would get that polar coverage through such things as partnerships or commercial data purchases, but the SDC hardware contribution would be in a lower-inclination orbit, with the hope of increasing the look diversity and getting faster revisits over particular areas of the globe that are not at the high latitudes. The trade-off is that it would not have great continuity with our past measurements, and that is one of the big drawbacks of this approach. So if you go to the next slide. In terms of synergy with STV and its radar tomography measurement, we are looking at helical-orbit formation architectures. This would be similar to something like the TanDEM-X orbit, with a number of smaller spacecraft, and would give the baselines needed to make the tomography measurement when targeted. We would also be able to point in the same manner as with the divided sub-swath architectures to provide an SDC type of measurement. Here I think we would be looking at seasonal coverage changes between the STV tomography and the SDC interferometric measurement. So we're looking at this architecture for that capability of providing synergy with the other observable requested by the Decadal Survey. And one more to go, if you go to the next slide. We've also been looking at alternate techniques for estimating tropospheric water vapor. These are primarily add-ons, I guess you could call them, to a flagship-style mission. As opposed to flying three smaller satellites for the multiple squint angles, we could look at adding an instrument such as a differential-absorption-type radar that would shoot through the atmosphere at different wavelengths and estimate the water vapor in that manner. That would be a different instrument going along with the SAR on a flagship type of spacecraft.
The other option is an actively scanned array doing something akin to the inverse of the TOPS SAR mode, where instead of looking in at a common point, it would look back out to where it was previously and ahead to where it is going, to get the same sort of multiple squint angles, with a time separation in between them that we believe could be removed in order to get the measurements we are after. So those are alternate ways of getting that additional look diversity with a more traditional, I guess we should say larger, spacecraft as opposed to the smaller spacecraft. I believe that concludes a brief, high-level overview of what we were considering in terms of architectures. And at this point, Jordan, I don't recall if we were doing Q&A now or going straight to the down-selection. Thanks, Steve. Yeah, so far there don't appear to be any clarification questions, so we'll just roll right into Chris Jones and the down-selection process. All right, sounds great. I'll hand it over to you, Chris. All right. Thank you, Steve. So yes, as Gerald and Steve have described, we started with a number of architectures under consideration in the SDC study and have recently completed this first down-selection to a smaller set to study in further detail. I want to take a few minutes to tell you about the approach the SDC study team has taken to doing the assessment and getting to the down-select, and then we'll hear about the actual results of that down-select. So next slide, please. In the course of the study, what we've just finished is what we describe as Phase 2A, where we began with about 40 architectures that came out of the development process earlier in the SDC study lifetime, and assessed them along a number of different relative dimensions in order to enable us to make a down-select to a smaller number of architectures to study moving forward.
That in turn will enable us to eventually make another down-select, down to the three that Gerald was showing, which after further study will enable us to make a recommendation to NASA Headquarters for what to move forward with when SDC moves towards implementation. Next slide, please. The approach we've taken in the SDC study is to develop and implement a value framework in order to characterize each of the different architectures along five components of value: their science benefit, their applications benefit, their programmatic-factors benefit, as well as their costs and their risks. By assessing the architectures in each of these areas, we're able to gain insight into how responsive the architectures can be to the different needs of SDC, from a science point of view and from a programmatic point of view, and how to potentially enable it to fit within the cost guidance that was received in the Decadal Survey. So, let me speak briefly about each of these parts, and then I'm going to focus a bit on the science-benefit side. For science benefit, we define a term called feasibility, which describes how well an architecture achieves, or how likely it is to achieve, the science performance targets for the individual observables described in the science and applications traceability matrix. Those feasibility assessments, in conjunction with a relevance parameter that weights the utility of those observables towards the goals in the Decadal Survey, and their necessity with respect to the SATM, together form the science benefit, which can be aggregated at the level of the Decadal Survey goal or at the level of the five focus areas that the SDC study team focused on. We also wanted to look at how well the architectures could respond to a number of different applications communities and potentially enabled applications, so we did an assessment of their capabilities with respect to those and characterized the architectures in that additional dimension.
Headquarters and study leadership worked together to define a series of programmatic factors to capture other relevant things that would impact the decision of which architectures to move forward with and study further. We also took advantage of some initial cost estimates for the architectures and some initial characterizations of candidate risks in order to understand where the architecture options lie in those two domains. Next slide, please. With respect to the science, I want to walk through a little bit of the approach we took to the science assessment, because this informs what went into the down-select that was completed recently and that you'll hear about in just a moment. The SATM used in SDC derives first from what was requested and desired in the Decadal Survey, but it also builds, as Gerald said, on what Headquarters wanted to include, in particular the addition of an ecosystems focus area on top of the focus areas already derived from the Decadal Survey. The SDC research and applications team worked with focus groups around the community to further refine that SATM and to define measurement performance targets, with respect to things like revisit rate, accuracy, and global coverage, for the observables in the SATM. Next slide. Those SATM targets are essentially what the architectures were graded against. The architectures were evaluated using a performance tool developed by the SDC team to assess how well they met those performance targets across all of the geophysical observables. On the next slide, you can see that we used the assessments of the architecture capabilities and the requirements in the SATM to define these feasibility scores.
Essentially, if an architecture could do better with respect to revisit time or accuracy or coverage or spatial resolution, that individual component of feasibility increased, and if the architecture fully satisfied what was desired in the SATM, it could ultimately reach a score of one. We could look at those feasibility scores for individual observables as well as aggregated together, at the level of Decadal Survey goals, at the level of science and applications objectives in the SATM, or at the level of the focus areas themselves. In doing those aggregations, as the last slide shows, we then computed a science benefit that depended on both that feasibility, the characterization of what the architecture could do, and the relevance and necessity of that observable to the science and applications goal, its importance to what was defined in the Decadal Survey. Those parameters came together and allowed for a quantitative assessment of the degree to which an architecture responded to each of the different science areas. Again, these scores were used to characterize the architectures and to inform the down-select; they were not the sole deciding factor in making those decisions. If we go to the next slide, I can talk a little about how these were used in further detail. The value-framework products were tools that enabled us to assess these architectures, to compare them, and to have productive discussions around them during the course of the down-select. We had the ability, for example, to slice the architectures by their ability to respond to all of the observables in the cryosphere focus area and understand that several of the architectures, those inclined ones that Steve just spoke about, had much lower performance scores.
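As a rough sketch of the feasibility-relevance-necessity aggregation just described: the observable names, scores, and the particular weighted average below are all invented for illustration, one plausible reading of the description, not the study team's actual formula or numbers.

```python
import numpy as np

# Invented per-observable scores for one hypothetical architecture.
# feasibility: 0..1, how fully the SATM performance targets are met.
# relevance:   weight of the observable toward a Decadal Survey goal.
# necessity:   weight of the observable within the SATM.
feasibility = {"obs_A": 1.0, "obs_B": 0.6, "obs_C": 0.3}
relevance   = {"obs_A": 0.9, "obs_B": 0.5, "obs_C": 0.8}
necessity   = {"obs_A": 1.0, "obs_B": 0.7, "obs_C": 0.4}

def science_benefit(observables):
    """Aggregate feasibility, weighted by relevance * necessity, over a set
    of observables (e.g. a focus area or a Decadal Survey goal)."""
    weights = np.array([relevance[o] * necessity[o] for o in observables])
    scores = np.array([feasibility[o] for o in observables])
    return float(np.average(scores, weights=weights))

# Aggregate at the level of one hypothetical focus area.
focus_area_score = science_benefit(["obs_A", "obs_B", "obs_C"])
```

An architecture that fully met every target would score 1.0 at every aggregation level, matching the "reach a score of one" behavior described above.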
We had the ability to do more relative comparisons across all of the focus areas, or to look at smaller subsets of architectures along dimensions such as the programmatic factors, to understand the benefits and drawbacks. That's the approach we've taken in order to characterize the architectures and inform the down-select. I'll pause here and see, Jordan, if we have any questions to answer before we move on to the results. Thank you, Chris. I do not see any questions so far. Just a reminder: if you have any questions during these presentations, please put them in the chat. Once the next presentation has wrapped up, if you raise your hand we will unmute you and let you ask your question. So we'll go ahead and roll right into the results. Right. Welcome, everyone. I would like to talk to you a little bit about the initial architecture down-selection that we have done and what the results of that selection are, over the next set of slides. Pretty much the whole SDC team has contributed, but the people you saw on the previous slide put together the slides, so I'd like to thank them especially. If you go to the next slide, what you will see is the results right away. Essentially, these are the 10 architectures in L-band; we have also selected architectures for frequency comparison in S-band, which you will see in later charts, but these are the main set, let's say, of the 10 architectures that we have selected. In this table you see a lot of details, so let me walk you through it a little. The very first column, that says architecture, shows the shorthand designators for the architectures that we are using. The first letter indicates the frequency band, so L is L-band. The number in between is the number of spacecraft in that architecture.
The last letter is an incremental indicator so that we can distinguish different architectures that have the same frequency of operation and the same number of satellites. If you look quickly at some of these, in the second column you see the key characteristics of these architectures. For example, the very first one, L1C, is similar to a NISAR L-band instrument with a precipitable water vapor sounding instrument added on to remove the tropospheric contribution, while L4A is two NISAR-like systems together with the ROSE-L constellation, so it has four L-band systems in that group. If you look at these two, you will see the third column that says orbital phasing groups; that goes back to the charts Steve was showing earlier about how we distribute these satellites around the orbit. L1C is a single satellite, so there is only one satellite in the orbit. L4A is equally distributed into four groups across the orbital plane. Similarly, if you look at L5A, those are the sub-swath or distributed-swath architectures, where we have smaller satellites that each provide a smaller swath width, as you can see in the swath width column, but they all add up to NISAR's swath; in L5A's case it's actually more than the NISAR swath, again equally distributed around the orbit in five groups. One thing you will see, of course, is that the polarization is different here: dual-pol compared to quad-pol capable. And the one thing I would like to highlight is the repeat period column. L1C is, just like NISAR, every 12 days. L4A is essentially four satellites on a 12-day orbit, so that will give you about three-day coverage. L5A is a split swath, and therefore it will provide consistent global observations; all landmasses will be covered in, say, eight days.
Because there are five satellites, if we need to achieve rapid revisits due to an event over a small location, we can actually do that in two days; L5A specifically has a 10-day orbital repeat period. You will see that designation, with the short-term revisit in front of the slash and the global-coverage revisit after the slash, in the repeat period (days) column. Continuing down these architectures: the L6 ones are the multi-squint architectures, so even though there are six satellites, they are in two groups of three satellites each. L8A is somewhat unique in that we have eight satellites flying close together, with about a six-hour difference; in essence they're in one large group, and they also have smaller swaths, but they add up to more than the NISAR swath, with a repeat period of 12 days. L9A is again a multi-squint: a NISAR-like system with multi-squint co-flyers. L12B is the STV-like, or STV-contributing, architecture, let's say, with the multi-baseline helical orbit, which can give us TomoSAR. And L12C and L18A are architectures with many satellites, small swath widths, distributed around the orbit so that we can also get fast revisits. The last column I would like to highlight is the relative cost column. As you can see, depending on the different features, the costs can change quite significantly; you see the cost normalized to L6C in this design. If we go on to the next slide, I'll tell you a little bit more about these features in a shorter, more visually appealing way. In this graphic, what I'm trying to highlight, and thanks, Katya, for putting these slides together, is the revisit time versus the number of satellites and the swath width information that you saw in the previous slide.
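The repeat-period arithmetic behind entries like these can be sketched in a few lines. This is a simplification that assumes equally phased groups and an even sub-swath split; the function names are mine, not the study's.

```python
def full_swath_revisit(repeat_days: float, n_sats: int) -> float:
    """Each satellite images the whole inter-track swath, so N equally
    phased satellites divide the repeat cycle among themselves."""
    return repeat_days / n_sats

def subswath_revisit(repeat_days: float, n_groups: int) -> tuple[float, float]:
    """n_groups of satellites each image one of n equal sub-swaths.

    Returns (targeted, global_days): pointed at a single sub-swath, the
    groups revisit it every repeat/n days; spread across the sub-swaths,
    every point is still imaged once per full repeat cycle.
    """
    return repeat_days / n_groups, float(repeat_days)

four_sat_coverage = full_swath_revisit(12, 4)      # the L4A-style case: 3 days
targeted, global_days = subswath_revisit(12, 6)    # Steve's 12/6 example: 2 days
```

This even-split model does not capture cases like L5A, where sub-swaths wider than the even split bring the global coverage down to eight days on a ten-day orbit.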
You see the size of the circle changes depending on the swath width, and you can see L1C has a large swath, just like NISAR, and a 12-day orbit. If you look, for example, at the very top line, L18A, it has a smaller swath width, but it adds up to global coverage, global land coverage, in about eight days. And then, because it's 18 satellites in six groups, we can actually achieve faster revisits if there are local events that we are looking at, if, for example, there's a volcano erupting and so on. That is indicated with the orange dots that you can see at the two-day mark. Maybe one of the takeaway messages here is that L8A is quite unique in the sense that it achieves NISAR-like global coverage but also achieves a really fast six-hour local revisit. So that might add to the tools that we have in our toolbox to do new science that we haven't been able to do so far. If we go on to the next slide, you will see another version, or another slice, of these parameters we were looking at. Here what we're looking at is the look diversity figure of merit on the y-axis versus the minimum repeat time of these architectures. We looked at many architectures here, and I'll show you all of them in the next slide. But we looked at architectures at L-band, S-band, and C-band, and you see the different frequency bands designated in different colors. Also, the coverage rate, in a sense the swath width and how much duty cycle we can have on those satellites, gives you the marker diameter. You see NISAR on the bottom right-hand side, while Sentinel-1 and ROSE-L with co-flyers can essentially provide a six-day repeat, and the co-flyers also allow us to do three-dimensional deformation and, as a result, achieve a high look diversity figure of merit, potentially allowing us to do new science that requires 3D measurements. And on the lower left-hand side you see the subdivided-swath architectures, which add up to the NISAR global coverage rate.
And also Sentinel-1 with the ROSE-L augmentation, as the two larger circles around the three-day mark. If we go on to the next slide, you will see essentially all the numbers, the summary tables, that we have generated. Believe me, there were a lot more numbers behind these focus area numbers, but this breakdown essentially gives you a look at the science disciplines, in the silos of the science disciplines. We basically took the geophysical observables and averaged them in the clever way that Chris described to you earlier to come up with these scores. One thing that you can see here is that hydrology seems to always have a yellow color, sort of lower scores compared to the others. The reason for that is essentially that we surveyed the scientists and looked at the Decadal Survey for our geophysical observable needs, and some of those needs were identified as higher than in the other disciplines; in this case the hydrology needs were skewed a little higher. But what we were mainly looking at was the relative difference between these scores, seeing how one architecture compares to another. The other thing you are probably seeing is that L4C and L6F have low cryosphere scores; those are the inclined-orbit architectures. They don't reach the higher latitudes, and therefore they don't have enough cryosphere sampling in those areas. Maybe another thing to highlight here is L12C, L18A, and L18B, and their counterparts on the S-band side, S12C, S18A, and S18B, where ecosystems is scoring lower; that is because those are single-polarization, low signal-to-noise-ratio systems. As a result, they don't do so well radiometrically in terms of image quality itself, and therefore they achieve lower scores. The other distinguishing factor to highlight is probably geohazards: it depends on the revisits and the observation data rate capacity of the constellation, of the architecture.
Those scores tend to go up and down, and you can see that architectures like L12C have very high scores, because there are many satellites that can acquire data at short revisit, consistently, while something like L1A actually scores relatively low. If we go on to the next slide, we will see the architectures selected from this table. What I would like to highlight here is that we basically have the highest performers in all categories selected. You can also see clearly that we have the S-band architectures, where we will do frequency comparison studies against the L-band architectures. In general, for example, L12C and S12C have a similar constellation design, and therefore they achieve very similar scores. The other thing to highlight here is probably that we removed the C-band architectures, and also the inclined-orbit architectures, from our further analysis. If we go on to the next slide, we will take a more qualitative look at these numbers. These are coming from our research and applications team, and Ala Khazendar had put this slide together. What I would like you to focus on is essentially the first column on the left-hand side, where you can see the key points of these groups. At the top you have the top overall performers, like L4A, which is essentially two NISAR-like satellites joined with ROSE-L, itself a two-satellite constellation. Those are flagship quad-pol missions, and there are many of them, so they perform really well. The second row, if you look at it, is the top performers in hydrology and ecosystems, basically giving you really good radiometric accuracy and also tropospheric correction, but maybe a slower sampling rate compared to the top overall performers.
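As an aside on the scoring mechanics mentioned a couple of slides back, the roll-up of per-observable scores into a single discipline score can be sketched as a weighted mean. The function, names, and numbers below are invented for illustration; the study's actual weighting scheme is the one Chris described.

```python
# Hypothetical roll-up of geophysical-observable scores into one
# discipline score via needs-based weights. Names and numbers invented;
# not the actual SDC scoring methodology.

def discipline_score(obs_scores, weights):
    """Weighted mean of per-observable scores for one science discipline."""
    total = sum(weights[k] for k in obs_scores)
    return sum(obs_scores[k] * weights[k] for k in obs_scores) / total

hydrology = {"soil_moisture": 0.6, "snow_water_equivalent": 0.4}
needs = {"soil_moisture": 2.0, "snow_water_equivalent": 1.0}  # needs skew high
assert abs(discipline_score(hydrology, needs) - 1.6 / 3.0) < 1e-12
```

With needs-based weights like this, a discipline whose observables carry higher stated needs (as hydrology did) naturally ends up with scores skewed relative to the others, which is why the speaker stresses comparing architectures against each other rather than reading the scores absolutely.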
For the top in geohazards, you will see those are essentially the architectures with many satellites up in space that can be steered towards an area of interest when we have the desire to do so. The fourth row is the average across all focus areas, all science disciplines we can say; that's a mix of the NISARs with multi-squints. The next row is actually quite similar to that, in the sense that these are variations of those, but often with fewer satellites. For example, L9A is essentially three groups of three satellites, while L6D is essentially two groups of three satellites. So those things are changing, and the STV architectures are in there as well. Finally, if you look at the bottom two, you will see the inclined orbits, the L4s, which are poor in the cryosphere but still okay in the mid-latitudes; and also, due to the radiometric imaging (single polarization, low cost, low duty cycle, and low radiometric accuracy), L18A and so on achieve poor scores and are shown at the bottom. So you can see again the architectures in different colors: the selected ones at L-band, the frequency study at S-band, and also the architectures that were not down-selected. If we go on to the next slide, what I would like to show you is a higher-level look at these ten main architectures, in qualitative terms, from the deformation science perspective. One thing that we see is that pretty much all L-band architectures provide good continuity, especially with NISAR. One thing you see, of course, is a yellow there for L12B, which is the architecture using the helical orbits, the TomoSAR-type architecture; it's a new and unique system, and therefore maybe less of a continuity. In terms of improved accuracy, you see more variation across the board, where L1C essentially achieves something similar to NISAR but will do better because of the water vapor correction system.
While L4A, even though there are four of them, doesn't have that tropospheric correction, so it does somewhat less well. L6C and L6E, with the multi-squint co-flyers, will do well in terms of tropospheric noise removal, so they will achieve well. L8A, in the sense of global imaging, will achieve the same as NISAR, but in terms of local rapid analysis it will achieve more, like L4A with NISAR and ROSE-L, as it can image more rapidly. If you look at rapid repeat sampling, again you see more variation, where L12C and L18A will provide an improvement, while L4A and L5A will achieve a higher improvement: they have fewer satellites, but they have larger coverage rates, and therefore they do better. If we go on to the next slide, we will take a look at the same type of information, but this time in terms of radiometric science, radiometry-related observations, not deformation. Again, we have good continuity across the board, except of course that L12C and L18A don't do polarimetry; they are single-pol observations, and they also have lower duty cycle and radiometric accuracy, so they are poor performers in that sense. But if you look at the other observations, in terms of improved accuracy, two NISARs plus ROSE-L is better than a single NISAR, but that's more in terms of temporal revisits, less so for a single observation, which will be possible with L8A or L12B, where you will get faster revisits over your areas of interest. And if you look again at rapid repeat sampling, L4A will do well globally in that sense, and L8A will do really well in small areas, comparatively speaking. If we go on to the next slide, I would like to tell you a little bit more about the cost and how things look in that realm.
If you look at the relative cost versus the area collected by all channels, this is essentially data rate versus relative cost. You see this nice trend going up, which is no surprise, right: the more data we collect, there's a price to pay for that, and the more expensive things get. You can see the selected architectures here shown in green, and we capture a good range of this cost-versus-collected-data space. If you go on to the next slide, you will see how that cost-versus-area figure looks in terms of focus area scores. Right now I'm showing the delta scores, the relative scores, for each focus area. For example, if you look at the cryosphere, the first plot on the left, you see that the inclined orbits are not doing so well, as I explained earlier, but you can see that same trend pretty much existing in the top portion of that plot. If you look at ecosystems, this time the low performers are the single-polarization systems, and you can see less of a trend in the other systems. Hydrology is more of a scatter, but you can still see the trend. When we look at solid earth, the inclined orbits are not doing so well, but for the other architectures we again have a good sampling of that space in terms of the solid earth scores versus the area. And when you look at geohazards, this is where, in terms of the collected area versus the delta scores, things look a little different, but again we are spanning the whole spectrum of the geohazards scores, including the high performers. Finally, one last thing I would like to tell you about, on the next slide, is the programmatic aspect of these things.
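The cost-versus-collected-data trend in that plot can be caricatured in a few lines. This is a toy model with invented coefficients, only to illustrate that collected area drives relative cost; it is not the study's cost model.

```python
# Toy illustration of the trend in the cost-vs-collected-area plot:
# relative cost rises with total area collected per repeat cycle.
# All coefficients are invented; this is not the SDC cost model.

def collected_area(swath_km, track_km_per_cycle, n_sats, duty_cycle):
    """Area imaged by all channels over one repeat cycle."""
    return swath_km * track_km_per_cycle * n_sats * duty_cycle

def relative_cost(n_sats, area, fixed=1.0, per_sat=0.5, per_area=1e-9):
    """Invented linear cost proxy: fixed cost + per-satellite + per-area terms."""
    return fixed + per_sat * n_sats + per_area * area

small = collected_area(240, 500_000, 1, 0.3)   # one NISAR-like satellite
large = collected_area(240, 500_000, 4, 0.3)   # four of them
assert large == 4 * small                      # more satellites, more data
assert relative_cost(4, large) > relative_cost(1, small)
```

Any monotonic model like this reproduces the "trend going up" the speaker points to; the interesting architectures are the ones that sit below the trend line for their science score.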
In addition to all the science and applications benefits and cost analysis, we also had to look at the programmatic view of these architectures, and how well they play into programmatic interests such as leveraging international participation, leveraging other US agencies, opportunities for leveraging commercial data, continuing the program of record, and also enhancing the science return. To be honest, there were actually other criteria as well, but these are the groups that we can present easily. Essentially, what you see here again is that the darker, or more faded, colors indicate the architectures that were not selected, and you see the selected architectures in brighter colors, along with the groups that Stephen had mentioned earlier in the presentation. You see again that we are selecting across a good spectrum, where there is good programmatic variation, but we also select the high performers in terms of the programmatic comparison. With that, I see some answers in the chat, so I'll maybe defer to Jordan, unless there is a question that we would like to take live. Thanks. If we go to the next slide: I believe that concludes all of our presentations, and so now we have roughly the next hour open. As Batu alluded to, what's starting to happen in the chat is questions being asked, interaction between our team and the community, you all. So if you have a question, you can feel free to type it out, or you have the option to raise your hand, and one of us will unmute you. To raise your hand, you can hit the little smiley face; that'll bring up another menu, and there will be a raise-hand option. So I'm going to turn it over to Gene real quick to see if we've answered all the questions in the chat, or if we want to bring maybe Paul Rosen in to provide some further clarification. Gene? Gene, we can't hear you at the moment, my apologies.
Maybe we can let Paul Rosen explain, in more detail, his answers to the two questions he answered. I can do that. So the first question was related to L8A and whether we should have an automatic disaster-response sort of capability, automated tasking for rapid events. My answer was that we really designed this based on community feedback, to try to get faster repeat for science targets, primarily things that are varying on a scale of less than a day, so that we could get some daily sampling, at least at certain areas. That said, of course it would be great for certain things like disaster response, where this constellation happens to be coming over in a reasonable time. I'm trying to think of the name of the mission; I think it was Hyperion, an early hyperspectral mission, that had some experiments on automatic tasking, and I know that for planetary science they're looking at automatic tasking capabilities, so that people don't have to be in the loop with the time delays to the planets. Those technologies are being developed, and I think we could easily incorporate them on a system like this. The second question was from Gordon, asking about polarimetry and why it potentially costs more. There may be some design methodologies that would mean there is no cost difference between a polarimetric and a single-pol system, but generally speaking, most traditional designs have had two channels, which means double the hardware related to receiving, and of course twice the data rate, because you have two channels then. And typically, if you're interested in cross-pol, for certain applications you need better SNR, which may imply more power, and things like that. So, generally speaking, there are a number of factors that would cause a polarimetric system to have higher cost, and I think, based on the NISAR experience, that's certainly the case.
And I think, traditionally, cost models based on past experience show that that's the case. There may be ways to cleverly avoid it, but at this point in the program, where we don't have a specific design, we're going with cost models. Thanks. Well, can we go back and review Batu's first slide, which showed all the architectures? Please. Thank you. So, Gordon, reading your question, I guess you were inquiring about L5A: maybe that is NISAR split into five strips with dual polarization? I think we need to find you. Yeah, hang on, I'm finding him in the system. I got him. Hey, hey, buddy. Yeah, I can speak now. Yeah, my question was, it was sort of hard to keep track of as you went through; you selected the architectures, and it was hard to relate the selected architectures. Okay, I get it now, I missed that. When I saw "these are the selected architectures," you started out with the selected architectures. Got it. Alright, so I missed that the first time around, and that was the reason behind my question. Got it. Yes. Yeah, sorry. I wanted to start with this, you know, because I thought everybody is probably more interested in the selected architectures. But apologies if I didn't make it extra clear when I was getting going on the slide. Yeah, got it. Thanks, Batu. Perfect. Do we have other questions? Gene, maybe, do we have any questions that have come in to you as well? No; I wanted to invite the SDC team members to unmute and ask any additional questions, or to provide additional comments on the questions that have appeared in the chat. Otherwise, we can proceed to some general questions. Oh, sure: can you review why no S-band options made the cut? Yeah, so thank you. Thank you, Jeannie. So thanks for that question too.
I think if we go back to, probably, Ala's slide, or the table before that, they will both help answer that question. Essentially, we selected the ten shown here in red as the main architectures that we're studying, and the ones in orange are designated as frequency studies. So, for example, if you look at L5A and S5A, they are the same architecture except for the frequency difference. Therefore, when we are doing the analysis, most of it, in terms of coverage and so on, will be very similar to L5A, and we will do a delta analysis, a frequency study, for the S-band. So in that sense, the S-band architectures did make the cut, and we are looking into those variations of these architectures too. And yeah, as Jeannie said, I guess we can have others comment also. Yeah, that's true. I was going to add a little bit more on the S-band and the whole Earth System Observatory. Once NISAR launches, we'll have dual L-band and S-band. There's very little accessible S-band data out there, and so that's one of the things that will be nice from NISAR: the ability to take a closer look at its strengths and weaknesses and how it will fold into SDC overall. So that's one of the good things about having NISAR as a pathfinder for SDC. And I'll amplify Gerald's amplification in that, you know, the S-band allowable frequency range for Earth observation is in the 9-centimeter wavelength range. C-band, and there are a lot of C-band systems up there, is in the 6-centimeter range. So the question is, are they really that different? L-band is substantially different from those, at 24 centimeters, with all of the issues. And so the question will be, what's the cost-benefit of going from L to S-band?
So we figured that by picking a subset of the S-bands to study in relation to the L-bands, that would be a good way to span the space of cost and science benefit relative to the ongoing international C-band systems. So let's unmute Eric Rignot so that he can ask his question. It takes a while to find people. He's good to go. Okay, great. Thank you. Eric, do you want to repeat your question so we can have a discussion? Yeah, sure. Can you hear me? Yes. L8A is the only architecture with sub-daily repeat, and I'm particularly excited about sub-daily repeat for the future. I would like to understand a little bit more, for that particular architecture, its scoring compared to the other architectures, and understand why it would not be on top, on top overall. Is it global coverage? Is it cost? For a lot of applications, sub-daily repeat is opening new areas of science: the water cycle and vegetation, rapid breakup of ice, disasters as they occur. I would like to understand that, if you could comment on it. Yeah, go ahead. I was going to take a first cut at it, and then people can either correct me or amplify. I think we need to be a little bit, how can I put this: L8A was a relatively late addition to our architecture study, and it came from community feedback, from people like you, Eric, who would like to have sub-daily sampling. The Decadal Survey itself did have some of its observables in that category, but the bar was set in the Decadal Survey at something like NISAR-type continuity plus whatever could be added to it, and given the fact that NASA's guidance for cost is so low, perhaps the team originally was not ambitious enough. So we were convinced that we should add it, and the scoring was done, I'd say, in a somewhat ad hoc fashion for L8A. I'll just admit that up front.
That said, we believe it's in the category of these large-number-of-satellites architectures, each of which needs to be lower cost in order to have a hope of doing it, and that does mean you end up trading global coverage for local observations. I think that would be reflected in the stoplight chart; go to the stoplight chart that we added towards the end of the presentation, slide 30. Ah, there's the chart. Yeah, so L8A: in the rapid repeat sampling case it does very well; of course, it's the only one that's green there. But the split in that column of rapid repeat sampling reflects the global versus local type of sampling, so it doesn't do so well for global sampling if you're targeting. So that's my response; I don't know if anybody else would like to amplify further. I just want to add that, yeah, like Paul said, we haven't run the whole performance tool on it yet, because it came late, but we have the mission plan. In terms of how it's doing the observations, it's global, everywhere, but so far we just put in the targets, like the coastal regions, as targeted areas that we're going to observe every six hours. We have the capability to define other targets, like, as you said, for example, evapotranspiration for vegetated areas, to observe as much as we want. But so far it's just global, like any other one, but with every six hours over the coastal regions. Does someone want to address the role of the commercial sector for sub-daily repeat, and how we as a team are evaluating commercial SAR capabilities? Maybe that's a good question for Batu. Sure, Jeannie, thank you. So, essentially, for commercial SAR capability, we are also running a parallel study.
And just like these science benefit scores, we have application benefit scores from the commercial sector, looking at their characteristics for the entire sector as a whole. And of course, rapid revisits, whenever there's something ongoing, is something the commercial sector can do very well over smaller areas of interest, at daily repeat or even faster. The commercial sector capability tends to be at X-band at the moment, and when we're looking at deformation, that tends to impact us where there is a stronger tropospheric contribution, and also stronger signal loss due to vegetation, when you are looking at surface deformation again. Those scores, as a whole, we're using to identify which geophysical observables fit well with the commercial capabilities, and how they could be used further to essentially improve the SDC goals. So that is definitely a major aspect of the SDC study, and we're working on that as well. So maybe Gerald could say something about headquarters policy on the purchase of commercial SAR data. NASA has the Commercial Smallsat Data Acquisition program, CSDA, and I'll put a link in the chat when I'm done here. So we actually have a program where we're evaluating different commercial data sets for, I'll just say, overall NASA science, as well as the US government and, in some cases, somewhat broader distribution. As this relates to SDC, we are looking at, and everything is on the table for SDC, being able to purchase commercial data to fill, say, specific data gaps. Say if we went to one of the lower-latitude orbits: can we fill in some of the different scientific needs from, say, the cryosphere with some of the commercial sector? So that is a potential that's on the table.
Also with the commercial sector, we're looking at whether there are specific hardware elements that could actually be purchased or worked with. Can we take advantage of what we're doing with SDC as having kind of a technology transfer to the commercial sector? So that is all on the table as it is right now. But when we're looking at these different architectures, if we have specific observational gaps, kind of temporal frequency, or, like I said, with the lower-latitude systems, then we can work on filling in with commercial data. So if there are... go ahead. Thank you, Jeannie. One more thing I wanted to add was that I spoke mostly about the augmentation of an SDC architecture, but of course we released a request for information about commercial capability for SDC, and, as Gerald mentioned, that is also still on the table; we're still looking at that more. And Steven, I think, informed the vendors that responded to that RFI recently about when we might have additional updates. I don't know if Steven or others would like to comment more on that; if not, we could unmute Sylvain and Christian to ask related questions. I've got one last thing, kind of on the commercial piece. This all fits right into the Decadal Survey, which identified 500 million for SDC. And so as we start looking at each of the different architectures right now, we have an objective: we have to see whether the architecture fits within this targeted budget. And so if we look at, say, L18A, the likelihood of being able to fit 18 satellites within our overall budget right now is pretty low. So we are evaluating the different science pieces, but the costing part will actually come through near the end, and that will be part of the discussion. The other missions within ESO, so the atmosphere missions, the ecosystems, I believe, mass change, everything:
They all have similar budget caps which they're working with, and right now there's not been a lot of extra flexibility given for this mission, or for these other missions, to exceed their budget caps. This is where we're looking for partnership, looking for different ways, potentially other agencies that would be willing to partner with NASA in developing SDC. And you can see this also where we've got the ROSE-L activities: looking at trying to partner with ESA on either a coordinated launch or a non-coordinated mission that would fly with ROSE-L, or one where NASA would basically have our own mission that would be consistent with ROSE-L, and then JAXA has theirs, and all the data would be interoperable, interferable between them, but they'd be complete standalone missions. Or somebody says, bring a radar: basically, we bring a system and we plug it in. And so that's another way of helping to build an international constellation, where you have each of the different space agencies contributing their pieces so that the sum of the parts is larger than what they are individually. And that may be a way of moving forward with some of these, to ultimately envision, and actually realize, the vision that came out of the Decadal Survey. Do we have an update from Paul Rosen about the status of ROSE-L? Or, if there's someone from ESA, is there anything new in the last five or six months about how the prospects for a ROSE-L mission are coming along? If not, we'll go on to comments by Christian and Sylvain. Maybe Marco Lavalle has some new insights. I haven't heard; they were waiting for some programmatic decision on moving to the next phase. I think there was good confidence that they would, but I haven't heard if they actually did that or not. Marco, you are on now. Yes, thanks. Yeah, that's correct.
They had good confidence, but we haven't heard anything new in the past few months; we asked ESA for an update, but we haven't heard yet. It is likely we will hear more in ten days from now, at the Living Planet Symposium. So that's, yeah, I think that's in time. So one of the questions was with regard to how the presence of commercial L-band SAR data as early as 2026 would impact this program, and this is from Christian Lens. Christian, I've unmuted you; you should be able to elaborate on that comment. Maybe you weren't unmuted there; you should be unmuted now. Can you hear me? Yes. I'm at the airport. The question is around what investments the private sector may want to make going forward, independent of or in connection with this program. So if, for example, a commercial vendor was going to have data available that meets these requirements, how would that affect the planning, now looking at the 2030 start date? Thank you. So later than the 2026 that you mentioned in your chat post. Yeah, I only got every fourth syllable of what Christian was saying; I don't know if that was true for everybody. I couldn't quite get the details, but I get the gist of the question and I can take a crack at a response. I think it really would depend on the nature of the commercial L-band capability, of course the cost and all that kind of thing. As Gerald mentioned, there is a cost cap limit for what NASA can invest, and I think any commercial data buy associated with meeting SDC goals would effectively be taken off the top of what NASA itself would build. That said, many of you on the call know that we issued a request for information to the commercial sector for a range of possible commercial roles in SDC, anywhere from contributing hardware or satellites to actually providing a full data service to NASA. And, as you all know, you're probably waiting for us to talk with you.
We've been a little bit delayed, partly because of the down-selection process, but partly because we wanted to have engagement with NASA headquarters, and this season has been rather busy for them at the higher level, for other purposes. So the plan is to look at those RFI responses, interact with each of the vendors, and look at what possible follow-on studies might be with them, hopefully funded, to be able to really answer the question that you've posed in a more definitive way. It really will depend on the nature of the commercial capability, the cost, the coverage, and whether it can really satisfy the SDC goals or not. If what you were saying, which I couldn't make out, is that you're looking for guidance on what the commercial sector could build in order to meet those goals, as a business model, I think the answer is that that's what this RFI process was for. Others on the team? So, yeah, go ahead. Yeah, I just wanted to add that I fully agree with Paul that this RFI process is really the key engagement, and essentially continued engagement through our commercial sector representatives, on their potential or expected capabilities in the future, will really greatly help us in terms of understanding where the commercial sector is going and how we can benefit most from that. So, I just want to amplify that. Thank you. Oh, yes, and this is Steven; I also wanted to amplify that, it was well put, Paul. So we do have a raised hand from Ramesh. I haven't found him yet. If he's been unmuted, please go ahead with your question. Yes. I'd like to know about how we are going to validate the water vapor product from NISAR. Are we going to use the GPS water vapor, the AERONET water vapor, or the MODIS water vapor? The problem is that in the US you have such a dense network of GPS, but if you look at India, there is only one permanent station, in Bangalore. Okay. And water vapor is very dynamic over India.
So I would like to know. Thank you.

Thanks for the question, Ramesh. You said NISAR, right? Not SDC. We don't have a SAR-derived water vapor product coming out of NISAR directly. We intend to use ECMWF numerical weather model data to provide a water vapor layer that we will include in each of the products, for information and potential correction purposes. How ECMWF gets validated is, I guess, a question for them. We've been working with Indian colleagues on the NISAR team; I know India has their own weather model, which I believe has been compared with ECMWF. So that's the kind of water vapor data we intend to provide with NISAR, and its validation would be done not by the NISAR team. Does that answer your question?

Thank you. Thanks, Paul. Yes, I do agree with your answer. But the issue is that I do not know whether the Indian model water vapor has been compared with ECMWF. I published a paper in JGR where I compared GPS-derived water vapor with AERONET and other data over India. Thank you.

Yeah, if you're interested in further discussions, please contact me. We had a water vapor working group, or atmospheric correction working group, with Indian participation, and we have a report that we could probably share with you about the relative benefits of the different weather models.

That would be great, because in 2001, when the Gujarat earthquake occurred, people did not get interferometric fringes because of the water vapor; the water vapor column was very high after the earthquake. Yeah, interesting. Okay. Thank you.

So if there are no more comments on this topic, I wanted to bring up an earlier topic, raised by Shadi, that was of interest to me when watching the presentation, about Phase 2B.
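As an aside on the water vapor exchange above: the reason a weather-model water vapor layer is useful for correction is that a change in precipitable water vapor (PWV) between two acquisitions maps directly into InSAR phase. Below is a minimal sketch of that mapping; all numbers are illustrative (the PWV-to-wet-delay factor of about 0.15, the incidence angle, and the L-band wavelength are typical textbook values, not NISAR product specifications).

```python
import math

def zenith_wet_delay_m(pwv_m, pi_factor=0.15):
    """Zenith wet delay from PWV; PWV is roughly pi_factor * ZWD,
    where pi_factor (~0.15) depends weakly on mean atmospheric temperature."""
    return pwv_m / pi_factor

def insar_phase_delay_rad(delta_pwv_m, wavelength_m=0.24, incidence_deg=35.0):
    """Two-way phase contribution of a PWV change between two acquisitions,
    mapped from zenith to slant range by the incidence angle."""
    zwd = zenith_wet_delay_m(delta_pwv_m)
    slant = zwd / math.cos(math.radians(incidence_deg))
    return 4.0 * math.pi * slant / wavelength_m

# A 10 mm PWV change between acquisitions, at an assumed L-band wavelength:
phase = insar_phase_delay_rad(0.010)
fringes = phase / (2.0 * math.pi)
```

Under these assumptions, a 10 mm PWV change produces roughly two-thirds of a fringe at L-band, enough to obscure small deformation signals, which is consistent with the Gujarat example mentioned above.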
Phase 2A rightly focused on science, with maybe some of the programmatic or cost factors lessened; what, beyond additional calculations, will go into the Phase 2B stage? I don't know if that's on Chris's first chart; please bring up the one that showed that timeline. And maybe Chris would like to take a stab at it, unless he's occupied at the moment.

Yes. I'll speak a bit about some of the things we have defined, and then there is also some ongoing definition that perhaps Paul or Steve would want to speak to. One of the things we want to do in the next phase is refine, or build upon, some of the assessment approaches we took for each of the components of value. In particular, I think we've got room to carry further what we've done, going from a set of candidate risks associated with each architecture to actually assessing the likelihoods and impacts of those risks and how that might then inform the assessment of the remaining architectures. We also want to explore some of the questions that have come up here, things that may merit a little further investigation in how we define the science approach and the applications approach, and, to Gerald's point, come back and review the costs as we go forward. This chart was built a little earlier, before we understood how the timeline for the overall SDC study has shifted, so one of the things we're starting to think through now is how to take advantage of the additional time we have to better understand these architectures, better assess them, and better inform the decisions we want to make going forward.

I'd also like to amplify Chris's comments there. The addition of the two years to the study also lets us take in the NISAR lessons learned, as Gerald mentioned earlier.
I think we're looking at how this comes together, and with an additional couple of years of study we're probably not going to go down to three architectures as early as November. We want everyone to keep in mind that this preliminary investigation has taken a very high-level look at cost analysis and at the engineering challenges we have. So there's going to be a lot of engineering study looking at more of the details of the implementation of these architectures, what they imply, and how that might inform things going forward. And that process will be iterative throughout this study period, even into the extension of the study over the next couple of years.

I'll chime in a little bit. I think the key... (We lost the audio.) Hello, can you hear me? Yes. Sorry about that. So I think the key element of the feasible-options section there in the middle of the chart is the bottom line: the concurrent engineering studies. This approach is one of the things we're going to be doing, and it's where we're going to really look at these architectures in more specific engineering detail. These are not Phase A kinds of studies, where you have a team of many, many people drilling down from a specific design to requirements; rather, we'll be exploring the trade space with groups of engineers who are expert in each of the domains of building a spacecraft, a mission, and, to some extent, an instrument, and looking further at the cost trades: whether there are any knees in the curve, so to speak, of one technology versus another, or some number of satellites versus another, and so forth. That will help inform us further on which of these options is most promising. In parallel to that, of course, especially with this extension,
we must look at partnerships. Almost all of these options, especially the multi-satellite options, will be, I suspect, over the $500 million cap. So we need to see what we can do to convince NASA and other partners that contributing satellites, or pieces of satellites, or constructing some sort of federation, is in the best interest of the global science community. We've got a lot of work to do in the next several years to really pull this program together as NISAR flies. Are there additional questions that I have missed in the chat, or that other team members or participants want to ask? Just raise your hand and we'll unmute you.

I just want to make a clarification about L8A, since I saw the mention of geohazard applications. I want to make sure this is understood: L8A will make a revisit every 6 hours, but if you missed a targeted area, you will make an observation 10 days after that. So it's not the case that we can observe any target we choose every 6 hours; we may miss it. The 6-hour revisit lasts just 2 days, and after that it's 10 days to get back to that point. We have other architectures, for example L58, that are more related to the geohazard application and basically make an observation every 2 days; those are the kinds of things we are thinking about using for geohazards. But for a targeted area, for example what I think Paul mentioned about after an earthquake, if we can wait, worst case, 10 days or less to get to that point and then observe every 6 hours, that should be fine. We just cannot observe every 6 hours at any time we want.

Yeah, thank you for clarifying that. I said it rather inelegantly and wasn't happy with what I said, but you definitely clarified what I was trying to say. Thank you. So, a question on the same chart, where the polarimetric capability is quoted.
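The L8A revisit pattern clarified above, a worst-case wait of up to one 10-day repeat to reach a target, then 6-hourly observations for 2 days, then back to the 10-day repeat, can be sketched as a simple schedule generator. The function and its parameter names are illustrative, not part of the actual architecture definition; only the 6-hour, 2-day, and 10-day figures come from the discussion.

```python
def observation_times_hr(first_access_hr, dense_days=2, dense_step_hr=6,
                         repeat_days=10, horizon_days=30):
    """Observation times (hours after an event) for a targeted area:
    dense 6-hourly coverage for dense_days after first access, then
    fall back to the repeat_days cycle, out to horizon_days."""
    times = []
    t = first_access_hr
    dense_end = first_access_hr + dense_days * 24
    while t <= horizon_days * 24:
        times.append(t)
        t += dense_step_hr if t < dense_end else repeat_days * 24
    return times

# Event at t = 0, worst case: first targeted pass 10 days (240 h) later.
times = observation_times_hr(first_access_hr=240)
```

For an event with the worst-case 240-hour first access, this yields nine observations in the 2-day dense window and then a single pass 10 days after that, which is the trade-off described above.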
This is Jeannie, going back to where we were before you started moving the slides. Yes, on this one. Does that mean that with NISAR you don't have quad-pol all the time? Would it increase coverage or repeat capability if you didn't have quad-pol everywhere? I mean, this is just a capability, correct, for a particular architecture?

Yeah, you're right, Jeannie. Essentially, the polarization column indicates the capability of the instrument. The specific mission observation plans can change, and Charlie can probably correct me on which observation plans were used for the scoring of, for example, something like L1 or L1C.

Yeah, that's right. It's the capability of the system, but in designing these we used different campaigns from NISAR. For example, if a campaign specified quad-pol for this region, another mode at high resolution for that region, and lower resolution but quad-pol for this other area, we just used those campaigns for our study; that's where the scores come from. If and when this becomes a real thing, we can define different campaigns for SDC. But yes, we used the polarization capability in our mission planning.

Yeah, so it will definitely matter in the overall mission design whether quad-pol is targeted or global, I think. If we had had no constraints on NISAR, we would have designed a system that could be operated with continuous coverage, at quad-pol, everywhere, all the time, in the widest-bandwidth mode possible, and everybody in our science community would get exactly what they want. But there are always constraints on these things, and the NISAR quad-pol performance is not compatible with the gap-free coverage that most of the other disciplines want.
So we have lots of trades on NISAR. As we go forward with SDC, if getting these measurements for applications and ecosystems becomes a priority, I think we've learned the lesson from NISAR that we want to design a system that will satisfy everybody without the kinds of compromises we had to make. That, of course, will raise the cost of the overall system, because the compromises we made on NISAR were basically cost-driven. So that will have to be factored into the overall design trade space.

Yeah, this is Gerald. I'd like to add something on this overall question of downlink and data volume. What I'm about to quote came out in an EOS article in 2017, where Craig Dobson, my predecessor, analyzed the amount of data that NISAR is going to collect, both raw data and immediate products. If you add up everything in the Earth science data archive, plus everything planetary, from Mercury out to Pluto, plus 45 years of Landsat, NISAR will be three times that volume in its first year. And that's basically collecting all land and all ice, all the time, with NISAR; and that was before the Satellite Needs Working Group, where we took our downlink from 26 terabits up to 35 terabits of data per day. So what NISAR is going to produce for the communities is a vast data set to work with. Part of the lessons learned we're hoping for from this is to understand what's possible and what the community needs. Scientists have a tendency to keep asking for more and more and more, and here we're going to be providing a lot more data. So rather than going from a water fountain to a fire hydrant, it's more like being at the bottom of Lake Mead with the floodgates opened up. That's not going to happen there, because they're in drought mode right now, but, that aside, there's a lot of data coming your direction. So we've got to make sure the community is ready for it.
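As a rough sense of scale for the downlink figures Gerald quotes (26 versus 35 terabits of data per day), here is a quick conversion to yearly volume; the arithmetic is ours, not from the EOS article.

```python
def yearly_volume_pb(terabits_per_day):
    """Convert a daily downlink rate in terabits to petabytes per year."""
    terabytes_per_day = terabits_per_day / 8.0   # 8 bits per byte
    return terabytes_per_day * 365 / 1000.0      # 1000 TB per PB

before = yearly_volume_pb(26)   # pre-Satellite Needs Working Group rate
after = yearly_volume_pb(35)    # increased rate
```

So the Satellite Needs Working Group increase takes the downlink from roughly 1.2 to roughly 1.6 petabytes per year, which is the fire-hydrant-scale stream the community needs to be ready for.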
And I see this folding into how SDC will evolve. So with that, I encourage this community to start getting ready for NISAR and to take advantage of the data that's coming. Some of that information will help shape how the SDC architecture studies end up. So I'm going to put it on the community to start looking closely at NISAR once we've got solid data coming in, so that it helps provide us guidance on the strengths and weaknesses we should consider for SDC.

Thank you, Gerald. Are there any other questions? If there are no other questions, I have a quick comment that may be good to highlight. We've posted the selected architectures we've shown here, and they cover a fairly broad swath of the trade space that we had open. But I wanted to highlight that there are two areas we were considering in the trade space that we are not going to focus on going forward. One is the C-band architectures: given the lack of foliage penetration, and the other C-band systems already out there, we felt they weren't adding enough value to a potential SDC mission. The other is the lower-inclination architectures we had talked about; none of those made it in here either, because their performance drawbacks, versus what they would entail in terms of not having the Sun available all the time and things of that nature, led us to leave them out of this grouping.

Well, with that, shall we wrap up this session and go on to the last slide, about future directions and future meetings? Yep, that sounds great. Thanks, Jeannie. While we're getting there, I was going to ask if Gerald or Paul had any last comments before we get to that wrap-up slide, which shows how to stay in contact with us and where you can hear more information later this year. Sure, I'll go.
First, I'll make this quick and then let Paul have the final word from the study. We see these meetings as very important: the ability, first, to provide transparency into what we're doing with regard to the mission, so you see where we're going and where our thoughts are. But we are a team of limited size, with a certain range of experience, and at the end of the day we're producing a mission that will support the broader community, and that's you. So this is our chance to hear from you, and your chance to see what's going on and provide guidance. Yes, SDC is probably just under a decade away, which is going to seem like forever. But we've got NISAR coming around the corner, and the lessons learned from NISAR will feed directly into this. So I'm hoping for broader community involvement with NISAR, but also with an eye to how we can improve upon the NISAR experience within the overall SDC objectives. I really appreciate everybody staying on. We've got contact information up on the screen right now; I encourage you to stay involved as we begin to shape the mission that will follow NISAR. Paul?

Yeah, of course, there's a lawnmower running outside my house right now, so it's time for me to sum up. I'll just say thank you to everybody who joined this meeting and asked great, penetrating questions. As always, you're on top of it; the community is on top of it. We're trying our best to respond to the community. I also want to thank all the speakers, who did a wonderful job of presenting the work that we've done. They've worked extremely hard; we've all worked extremely hard on this. And we are very eager to make sure we are responsive to community needs. So if you see anything that's missing, or something that you think should be amplified,
we're certainly willing to take it into consideration, especially with our extra two years of study. One more time on NISAR, given that we have those two more years: there will be a lot of NISAR data coming in 2024. It seems like a long way away, but it's not. For those data to influence our decision making on SDC, the sooner they can be analyzed and digested, and recommendations formed, the better. So I would like to second Gerald's comment that we, as a community, need to be ready not just to do great new science with NISAR, but also to look at those data and draw the conclusions that will lead us to the NISAR follow-on that much quicker. So thank you once again, everybody; I'm looking forward to further interactions.

Oh, also, one more thing: don't forget about the NISAR community workshop. Maybe this is coming up on the next slide; here it is, the NISAR community workshop, in August. August 1st, I think it is. We will have a session on SDC during the last day, just talking about updates at that point. And we are also considering an afternoon after the workshop where we could all get together, drill down more deeply, and get your feedback. So please stay tuned for that and make plans.

Great. Thanks, everyone. Thanks, Gerald. Thanks, Paul. That concludes today's town hall, so we'll end the recording. As was asked earlier, we will have the recording and these slides up on our website, probably in the next couple of weeks, and then we'll have updates on our website as well, so please visit our webpage every so often. Most of you probably got the invite through the SDC community listserv; if you were forwarded it and want to join, the email address is right there. And then, as Paul and Gerald alluded to, there's the NISAR community workshop, and of course we will have another town hall meeting at AGU this year
to provide updates on this exciting time for NASA and the study team. All right, thank you all. Thank you, Jordan, for moderating, and Jeannie for all the great moderation of questions. Thank you.