All right, so first up, just a couple of links to help you follow along. First, we're going to be looking at the NEON data portal at data.neonscience.org, and the tutorial we'll be following is at the second link. That tutorial contains the R code we'll be working with. Just a heads up that I'll be going through it in a slightly different order than it appears in the tutorial, but the content will be the same; different parts just come first when I do this live.

All right, so starting with the data portal. This is data.neonscience.org. We don't have time in this webinar to go through everything available on the website and data portal in detail, so I really want to encourage you to explore all of these menus later on your own. There's information about NEON data collection, about publications and research using NEON, about teaching and learning resources, and everything else NEON has to offer. Definitely check those out. But we're going to go to the data portal, to this first link, Get Data, which takes us to the Explore Data Products page. This is where you can find information about data products, browse the full NEON data products catalog, explore everything that's available, and then download it.

The data product we're going to use as our download example is photosynthetically active radiation, which we can find with this little universal search bar just by searching for "PAR." The PAR data product comes up first. There are actually several other data products that involve photosynthetically active radiation, but the search has a little bit of intelligence behind it, so if you type in "PAR," this is probably what you're most interested in.

Before we get into how to download these data, I want to point everyone to the data product details page, because I really encourage folks to check this out for any data product you're interested in working with. The details page has a whole bunch of metadata and documentation about each data product. Scrolling down to the documentation, there's a little document viewer on the page where you can read through the whole list of documents available for the data product. If you keep scrolling down, there's an issue log, which tracks any unexpected issues with the data product. Again, not a ton of time to go through this in detail, but I really encourage you to keep these pages in mind when you're working with a new data product.

So, going back to the Explore page, I'm going to go straight to Download Data, which takes me to an interactive page that guides me through downloading the data. The first thing to look at is the availability grid, which shows the availability of data across sites and months. Each of these little boxes represents one month at one site: the gray boxes indicate that no data are available for that month and site, and the blue boxes indicate that data are available. Here, for PAR, you can see that after a certain point in time there are basically always data, but different sites came online at different times.
That pattern is very common for the sensor data. For the observational data, where people go out into the field and collect data manually, we don't necessarily collect data in every month for every data product, so this chart will in many ways be a little more informative for those less frequently collected data products.

All right, so we're going to download data from the Wind River Experimental Forest; that's WREF. So we'll go along the grid here to download the data. This is what we're going to start out working with in the tutorial, so select that site. Then we're going to get just the most recent couple of months of data. This little chart lets us select 2022, and now we need to select months. That was selecting the start date, July 2022, and then we select an end date, which in this case is September 2022. I'm guessing the September data are not actually online yet, but basically this just says that September is the last month of data that will be considered for download.

Moving on, this next page asks if you want to include documentation. We were just looking at documentation on the data product details page; if you check the box to include it, those documents will also be downloaded with your data. Here we're going to include those documents. I generally recommend doing that the first time you download any data product. Obviously, you don't need to do it every time, because you'll just get the same copies of the documents over and over again.

Okay, then for most (although not all) data products, you get a choice between a basic and an expanded data package. In general, the expanded package is going to include more information about data quality. For sensor data, it's pretty consistent that the basic package has things like the mean, minimum, and maximum (basic statistics from the data) plus a single summary quality flag, whereas if you download expanded, you get many quality flags describing what specific quality issues there might have been: was there a spike in a data value, was there a null in a data value, those kinds of details. For the observational data, it tends to be things like, if analyses were performed at a lab, the results of standards run as unknowns at that lab; those tend to be in the expanded package. For this tutorial we're just going to download basic, but if you're doing a more exhaustive analysis and you really want the information in those quality flags, you would download expanded.

So let's check the box to say yes, I agree to NEON's policies, and now it takes you to the final page where you can download the data. Before I click that button, I just want to note that there are recommended citations on this page. So, definitely, if you download some NEON data and you write a paper about it (and using the data is what we're trying to achieve), use these data citations in your paper. Okay, clicking the Download Data button.

(Claire, if I can just interrupt real quick: the estimated size of the download is also shown in the top right corner, so that's something to be aware of in case you selected way too much.)

Yeah, that is super useful.
And that is one reason why, for tutorials, I usually do the basic package: it keeps the download much smaller. We don't want it to take forever here, but this estimate can give you an idea of how long your download will take. Okay, so you can see at the bottom of this menu that what's been downloaded is a zip file. I'm just going to move these windows out of the way, unzip it, and look inside that folder.

Okay, so what is in this folder? These PDFs are here because we checked the "include documents" box. Those are things like the algorithm documentation, the quick start guide, and so on, basically describing the processing that went into this data product. And then we have these three folders. So apparently I was wrong, and the September data have actually already been published. These three folders are one for each of the three months of data we downloaded: because we set our criteria starting in July, we got July, August, and September. And they're labeled WREF for Wind River. This is just Wind River data; if we had downloaded multiple sites, we would also have folders for each site.

If you look inside those folders, you see a whole bunch of comma-separated files, and if you look in a little more detail, you can see some of them are labeled 1 minute, some 30 minute, and then there's this sort of code of different numbers. What this represents is that there are data averaged to once a minute and data averaged to once every 30 minutes, and there are sensors at multiple levels on the tower. You can actually see this in my computer background: here's a sensor boom, here's a sensor boom, here's a sensor boom higher on the tower. This is a fairly short tower in the picture; Wind River is a taller one. There is a PAR sensor on each of these levels, so you're also getting files from each of those levels.

Okay, so that is a lot of data files, and ideally, instead of having several folders of files and then several sets of files for sensors at different levels, we would have it a little more aggregated. That is exactly what neonUtilities is going to do for us: that aggregation, putting the data into a more tractable format. So at this point we're going to move over to RStudio and start seeing what the neonUtilities package does for us. If you can follow along here, that would be great as well.

I've just got a blank R script here, and, like I said, I'm going to be following the code in the tutorial in a slightly different order. We start by loading the packages that we need. The information on the webinar website asked you to install the neonUtilities package beforehand; if you didn't get a chance to do that, you just need to run install.packages("neonUtilities"), and then you can run this library() line to actually bring neonUtilities into your environment. So go ahead and do that. I'm not running the install.packages() line because, of course, I already have neonUtilities installed. The other package we're going to need is the raster package, so we load that as well. And now, like I said, we're going to use neonUtilities to bring together those files that we downloaded. The function to do that is called stackByTable().
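Here is a minimal sketch of the setup and stacking step just described. The zip name and path are assumptions based on a default portal download saved to the desktop; point the call at wherever your download actually lives.

```r
# Load the packages used in this webinar
# install.packages("neonUtilities")   # run once if you haven't installed it yet
library(neonUtilities)
library(raster)

# Merge the per-month, per-site files inside the portal download into stacked tables.
# File name and location are assumptions; use your own downloaded zip.
stackByTable(filepath = "~/Desktop/NEON_par.zip")
```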
If you're not familiar with this interface in RStudio: you start to type the function name, and it very helpfully pops up that this is a function in the neonUtilities package, along with some information about its inputs. That can be useful. The only input we need to give stackByTable() is the file path to where that zip file is located. I had put it on my desktop, so I just need to point to the NEON PAR zip file on the desktop and run that line. It gives a little bit of information and a couple of progress bars as it does its thing, and then it tells us it stacked a table for the one-minute data, it stacked a table for the 30-minute data, and where it saved those files.

Okay, so what does that mean? What did it actually do during those little progress-bar activities? If I go back to my desktop now, I still have the NEON PAR folder. If I open it up, I still have all of those PDFs of the documentation, but now, instead of three folders for the three months of data I downloaded, I have this one folder called stackedFiles. What I have in there is one file with the data averaged to one minute and one file with the 30-minute averaging interval. The rest of these are metadata files: sensor positions has information about the locations of the sensors on those booms, and the variables file, which we'll look at in a second, has metadata about the data we downloaded.

So let's take a look at what is in that 30-minute file. What we've got here, as you can see from the start date and end date columns, is the start and end of each averaging interval, each 30 minutes long, and for each of those 30-minute intervals we have the mean, minimum, maximum, variance, sensor-specific uncertainty, and so forth. On the left we have the site ID and the horizontal and vertical positions; those indicate the locations of the sensors, not as precise coordinates, but as indices for the locations where the sensors are found. If you scroll down, you'll see we started at a vertical index of 010, here it's 050, and if you keep going, 060, 070, and I think this tower goes up to 080. So that's the eight levels of the tower at Wind River and the sensors found at each level. Basically, what stackByTable() did was take all of those individual files and merge them into this file and the one-minute file, with the source of each row indicated by the sensor location.

The variables file is really helpful, and I always recommend taking a look at it, because it provides the definitions for each of the columns we found in the data file. Here, maybe these are fairly intuitive (PAR mean, PAR minimum, PAR maximum), but you might encounter field or column names that are less intuitive, and you can find those here in the variables file. You can see that the column name PARMean has a description, the arithmetic mean of photosynthetically active radiation, and the file shows you the units; standard for PAR, this is micromoles per square meter per second. It also shows whether each field is in the basic or expanded package.
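If you'd rather peek at the stacked output from R than from a spreadsheet program, a quick sketch like this works. The folder and file names (stackedFiles, PARPAR_30min.csv, variables_00024.csv) are assumptions based on the default stackByTable() output, so adjust them to whatever the function reported on your machine.

```r
# Quick look at the stacked 30-minute PAR table
par30 <- read.csv("~/Desktop/NEON_par/stackedFiles/PARPAR_30min.csv")
head(par30[, c("siteID", "verticalPosition", "startDateTime", "PARMean")])

# The variables file defines every column: field name, description, units, and
# whether the field ships in the basic or expanded package
vars <- read.csv("~/Desktop/NEON_par/stackedFiles/variables_00024.csv")
head(vars[, c("fieldName", "description", "units")])
```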
Since we downloaded the basic package, you can see here all of these quality flags and quality metrics that we would have gotten in the expanded package. Okay. Looking at the time, I'm not going to show the next step in full. Right now, the data we stacked are just stored on my desktop. There is a way you can use neonUtilities to load those data into R so that it uses the data types in the variables file to assign classes in R; that function is called readTableNEON(). I'm going to skip that step in the interest of time and instead show you that neonUtilities also has a function that does everything we just did, plus loading the data into R. It downloads data using the API, the NEON application programming interface, which is the machine-to-machine access system; it does the stacking; and it loads the data into your R environment, all in one step.

So, to load exactly the same data that we just downloaded: I'm going to name this object par, and the function is loadByProduct(). It needs a few more inputs than stackByTable(). The first input is dpID, which stands for data product ID. The place you find that is back on the data portal: if we go to the Explore Data Products page, which has the catalog of all the data products, right under each data product's name you'll see its identifier. That is the data product ID that loadByProduct() needs in order to know what to download: DP1.00024.001.

So we go back to R and enter that here. We're downloading from just one site; you can put multiple sites in here, but in this case we're looking just at Wind River. We want only certain dates, because we don't want to download all the PAR data that's ever been collected at Wind River, so we set a start date of 2022-07 and an end date of 2022-09. The start date and end date only go to the year and the month, because that's the resolution of what's available to download from NEON. If you specified a date all the way down to the day, it would still just download everything from that month, so we leave the day off to keep that clear. The download package is basic, and I think that is everything I needed to enter; I'll find out in a second.

Okay. The first thing it does is say it's finding files, but then it pauses and asks: hey, you're going to download 104 megabytes of files, are you okay with that? This step is here for much the same reason the download size is displayed on the portal, as a check in case you're downloading a whole bunch of data you didn't mean to; it gives you the opportunity to stop. There is an option to bypass this when you make the function call, and I'll show you how to do that in a second, but here I'm just going to say yes, I want to proceed. Now it gives me some more progress bars saying that it's downloading, and then we're back to something we've seen before: it says it's stacking the one-minute table, it's stacking the 30-minute table, and then it thinks for a minute. There we go. Now we have this object, par, and I'm going to look at what's inside it. This should also be familiar: these are the same names as the objects we had in what we downloaded to the desktop.
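Here's a sketch of that one-step call, matching what was just entered on screen; the object name par is simply what we're calling it in this demo.

```r
# Download, stack, and load the PAR data in one step via the NEON API
par <- loadByProduct(dpID = "DP1.00024.001",   # photosynthetically active radiation
                     site = "WREF",            # Wind River Experimental Forest
                     startdate = "2022-07",    # year and month only
                     enddate = "2022-09",
                     package = "basic")
# loadByProduct() pauses to report the download size; answer y to proceed

names(par)   # data tables plus variables, sensor positions, readme, and issue log
```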
So we've got a one-minute table, a 30-minute table, the variables file that tells us about their contents, the sensor positions file that gives us information about the locations of the sensors, and the readme and issue log. I can take a look at what's inside. Let's view the 30-minute table. You can see, again, this is the same thing we got through the other access method, going through the data portal; we have the same contents in this file. In this case, though, the table is now loaded into our R environment.

So, just to look at something a little more interesting than a bare spreadsheet, we can plot a bit of these data. Let's see: I'm going to plot mean PAR as a function of start date-time. The data source is that 30-minute table, and we're going to subset it to just the rows where the vertical position is 080. We're doing that because that is the top of the tower, and it can be a little confusing to try to view PAR from many levels of the tower at once. And we're going to make it a line plot. There we go. If we expand this out quite a bit, you can see the diurnal cycle: sun comes up, sun goes down. It's actually a little easier to see if we compress it that, as we move into the fall, the light intensity at the peak of every day is declining. And Wind River is in the Pacific Northwest, so you can also see that as we get into the fall it gets a little cloudier. So that's a reassurance that the data we've downloaded basically look like we expect them to.

That walked us through downloading and taking a quick look at sensor data. What I'm going to do next is walk through using what's available in neonUtilities for doing the same with observational data, meaning data collected by humans, and remote sensing data. Before I jump into that, any questions or concerns about what we've done so far? All right, we'll keep going.

Okay, so what we're going to look at next is observational data. In this case, we're going to download aquatic plant chemistry data. Going back to the data portal: conveniently, if I type "aquatic," that is actually the first data product that shows up. In this case that's a coincidence, but it's convenient for us. So we're going to download Aquatic plant bryophyte chemical properties; the data product ID is here, and that's what we need for downloading, again using loadByProduct(). I'm going to call this one apchem, for aquatic plant chemistry, and I'm again going to use the loadByProduct() function. The data product ID in this case is DP1.20063.001, which we just saw on the Explore Data Products page. In this case, we're going to download data from three sites: Prairie Lake, Suggs Lake, and Toolik Lake. Prairie Lake is in the upper Midwest, Suggs I think is in Florida, and Toolik is in Alaska, so we're looking across the range of sites where NEON operates. And in this case, we're going to download the expanded package. That actually has more to do with a follow-on tutorial that uses the same data and needs the expanded package, but it means we'll download everything that's available, including the quality control information.
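Before we move on to the aquatic plant data, here's a minimal sketch of the PAR plot we just made, assuming the table and column names from the stacked output (PARPAR_30min, PARMean, startDateTime, verticalPosition); check the variables file if your names differ.

```r
# Mean PAR at the top of the tower (vertical position 080), as a line plot.
# verticalPosition comes through as a character index in recent neonUtilities
# versions; use 80 instead of "080" if it is numeric on your download.
par30 <- par$PARPAR_30min
plot(PARMean ~ startDateTime,
     data = par30[par30$verticalPosition == "080", ],
     type = "l",
     xlab = "Date", ylab = "PAR (micromol m-2 s-1)")
```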
So this time, I'm going to show you how to bypass that little question we had before: here's your file size, do you want to proceed. You can skip it by using this final input, check.size. Except I typed TRUE, and actually we want check.size = FALSE, because we're going to skip the size-checking step. Use that with caution. It's obviously available, but only use it if you're really sure your download is a manageable size, or that you're prepared for the download to take a long time. So we run that, and you can see it tells you it's downloading 23 files. Here it goes, and then it did the stacking super fast.

This is pretty typical of the difference between the sensor data and the observational data. The sensor data are typically larger, because they have measurements at something like once every minute, whereas data observed by humans are inherently not going to be as frequent or as large. Remember, we only downloaded three months of data for PAR. Here we downloaded three sites, but we didn't specify a start date or an end date, so it downloaded the data for all time, and that was still pretty fast, and it stacked the data pretty fast.

All right, so what do we have in this object we've downloaded? Again, some of this is going to be familiar. We have a variables file; just like for PAR, this gives you the names, definitions, and units for each of the columns in the data tables. We've got a readme that gives you basic information about the data product. We also have some additional metadata files that we didn't have with the sensor data: categoricalCodes, which gives information about the values of categorical fields in the observational data, and validation, which gives some information about how the data were validated when they were ingested into the NEON system. We won't go into those in detail, but they can obviously be useful to you. And then these first five are the actual data tables.

So let's take a look at the external lab data per sample table, because I want to show you what's similar and what's different between the sensor data and the observational data. You can see there are some similarities: there's a site ID that tells you what site the data were collected at, and there's a date. It's sometimes typical for the observational data that the date is just a day and doesn't go down to the minute or second the way the sensor data do. Again, that's just the nature of human data collection: in some cases it is that precise, but often it's simply "we went and collected samples on this day." Scrolling to the right, you have information about the sample. This is the sample identifier. It's very reassuring that all of these have "OK" in the sample condition field; that field can flag for you if there was a problem with the sample during collection or processing. It also tells you where those samples were processed. And then scrolling out here we get to the actual data: we've got an analyte column, which here is either carbon or nitrogen.
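Stepping back for a moment, here's a sketch of the download call we just ran for these data. The site codes are the standard NEON four-letter codes for Prairie Lake, Suggs Lake, and Toolik Lake; treat them, and the object name apchem, as assumptions if you're typing along.

```r
# Aquatic plant bryophyte chemical properties, expanded package, all available
# dates at three sites (no startdate/enddate means everything ever collected)
apchem <- loadByProduct(dpID = "DP1.20063.001",
                        site = c("PRLA", "SUGG", "TOOK"),
                        package = "expanded",   # includes the quality control info
                        check.size = FALSE)     # skip the download-size prompt

names(apchem)   # data tables plus variables, readme, categoricalCodes, validation
```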
If we scroll down a bit, we also get to samples that have stable isotope ratios, 13C and 15N, and the analyte concentration column gives you the actual value for the analyte being reported in each row of data.

Okay. So, what can we do with that? Going back to looking at something a little more interesting than a spreadsheet, let's check out the 13C content of plants at the different sites. That means we're going to plot the analyte concentration versus site ID, with the data source being the aquatic plant chemistry external lab data per sample table, subset to just the d13C data, and see what that looks like. This shows us the range of carbon isotope values at the three different sites; I think I need to expand this out a little to get the middle site name to show. And then we can see that the plants are more enriched in 13C at Prairie Lake, and lighter in 13C at Suggs and Toolik.

What you might really like to see is how that maps onto the different species you find at those different sites, and this is where the multiple data tables come in. If we go back to the lab analyte table we downloaded and scroll across, you can see it doesn't tell us what species each of those samples came from. But if we look at the biomass table and scroll over, we find the scientific name for each of those sample IDs, which are the sample IDs corresponding to the ones we saw in the analyte table. So what we need to do is bring those two tables together, and then we'll be able to see what the 13C value was for each of those species.

Just to back up a little, that's really typical of what you're going to find in the observational data. The data tables for NEON observational data typically represent a sampling activity that took place at a particular time. So, the clip harvest table: they went out and collected a bunch of aquatic plants. They weighed those to get biomass, and then sent them out for analysis at a lab to get the carbon and nitrogen analytes. The other two tables are quality control tables with information from the laboratory. It's very typical that you have to bring together data from multiple tables to do analyses of observational data, because often the information you're looking for was collected at different times, by different people, and ended up in different data tables. They're just relational tables.

Okay. Before I do that step, there is a way to make getting at these tables a little simpler. I've been using the name of the object and the dollar sign to get to the tables that are within this list object. There is a command that lets you bring those tables into the environment individually: list2env(), which is what it sounds like. We have a named list, and we want to bring the elements of that list directly into the environment; specifically, we want to bring the objects from the list apchem into the global environment. I run that, and now you can see in my environment I have individual objects for each of those tables. Obviously, you can do this either way.
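Here's a sketch of those two steps. The table and field names (apl_plantExternalLabDataPerSample, analyte, analyteConcentration, and the analyte label d13C) follow what we saw on screen, but verify them against names(apchem) and the variables file on your own download.

```r
# Distribution of d13C values by site; a boxplot is one simple way to see
# the site-to-site differences described above
boxplot(analyteConcentration ~ siteID,
        data = apchem$apl_plantExternalLabDataPerSample,
        subset = analyte == "d13C",
        xlab = "Site", ylab = "d13C")

# Unpack the list of tables into individual objects in the global environment
list2env(apchem, .GlobalEnv)
```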
A lot of people find that this makes things a little more convenient. So, what we're going to do now is merge the two tables. I'm naming this new object apct, and I'm going to use the merge() function. You can also use the table-joining functions from dplyr if you're more familiar with those; they do basically the same thing. I'm going to merge the biomass table and the plant external lab data per sample table, and I'm going to do the join on sample ID, because that is the common identifier between those two tables. I'm also going to include the named location, domain ID, and site ID. Those aren't strictly necessary, but including them in what you join by keeps the merge from duplicating those columns, so you don't end up with one copy from each of the original tables.

You might be wondering how I knew to do that. Going back to the data product catalog and the product details page, if I scroll down to the quick start guide and look on its second page, there's a little table of instructions for table joining. It says: you've got the aquatic plant biomass table and the aquatic plant external lab table, and what do you join by? Interesting. It's telling me that the name of the field in the biomass table is chemSubsampleID and the name of the field in the external lab data table is sampleID, which is not quite what I just did; I indicated it would be named sampleID in both tables. I think what's going on is that chemSubsampleID and sampleID actually have the same values, meaning the actual identifier strings are the same, so it works either way, and I know this tutorial works. But if you wanted to be precise, the "by" argument can be specific to the field name in each table; the names don't have to be identical.

Okay, so I have a merged table, which I can take a look at. The fields you join on end up on the far left, so here's the sample ID, and yes, confirming that the chemSubsampleID values are identical, which is why that difference wasn't actually significant. You can see we've got the scientific name and all of the other columns from the biomass table, and then we scroll over and we also have all of the columns from the chemical analyte table. That enables us to make a plot of 13C as a function of species, rather than by site. So we plot analyte concentration by scientific name, with the data coming from the merged table we just made, again subset to just the d13C data. There we go. If I expand this out; okay, I'm going to make one other change here, which is las = 2, which just turns the axis labels vertical to make them a little more visible. I could fuss with the sizing a bit more to make this more legible, but you can now see the isotope values per species, and you can see that there's just a small number of species driving the cases where the 13C is more enriched.

All right. So, like the PAR data, that's a very quick example of what it looks like to work with the observational data. Now I'm just going to quickly demo downloading and taking a look at the remote sensing data, which are a little bit different.
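Before the remote sensing demo, here's a sketch of the merge and the species-level plot we just walked through, under the same naming assumptions as before (apl_biomass, apl_plantExternalLabDataPerSample, sampleID, namedLocation, domainID, siteID, scientificName). If you want to follow the quick start guide exactly, join with by.x = "chemSubsampleID" and by.y = "sampleID" instead of a shared sampleID column.

```r
# Join the biomass (species) table to the external lab (chemistry) table.
# Including namedLocation, domainID, and siteID in the join keeps merge()
# from duplicating those columns.
apct <- merge(apl_biomass,
              apl_plantExternalLabDataPerSample,
              by = c("sampleID", "namedLocation", "domainID", "siteID"))

# d13C by species; las = 2 turns the axis labels vertical so the names fit
boxplot(analyteConcentration ~ scientificName,
        data = apct,
        subset = analyte == "d13C",
        las = 2, ylab = "d13C")
```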
Both the sensor data and the observational data came to us as tabular data files, tabular meaning rows and columns that contain all the information. The remote sensing data, depending on which data product you're accessing, may come in various formats. Some of them are HDF5 files, which are nested, self-describing tables, and a lot of them are rasters, where you have different layers of information contained in one object. So the functions you use in neonUtilities to access those data are different. There's a function called byFileAOP(), which downloads everything for a particular site and year. Here we're going to look at byTileAOP(), which downloads a single tile from the mosaicked data products; only a subset of the AOP data products are available as tiles. We're doing this in the tutorial because of the download size: the AOP data are really, really large, and downloading everything for a site and year is unmanageable in the timeframe we have. So we're just going to download a single tile of data, and a tile in this case is one kilometer by one kilometer.

The first input is the same as we had in loadByProduct(): the data product identifier. What we're going to download is the canopy height model data product, also known as ecosystem structure, which has the identifier DP3.30015.001. We're going to download from Wind River again, so WREF. Instead of a start date and an end date, we just input a year: we're going to download 2017 data. And in this case, because again this is a tutorial, I'm just pre-populating the coordinates of the tile we're going to download. This function takes easting and northing as inputs; it can also take a set of pairs, that is, a vector of eastings and a matching vector of northings. Basically, you would get these from information on the ground about the locations where you want to download the AOP data, and I'm just pre-populating them here. If you're interested in the process of how you would get that information, I can point you to another tutorial that goes through it in much more detail.

The final input we need, which we didn't have in loadByProduct(), is savepath: a file path on your machine where you want to save the data. This function doesn't load data directly into R, and you can see that because we haven't assigned its output to anything; it downloads the files locally. Again, that's really just because of file management and file sizes. For me, I'm just going to download directly to my desktop again, but on your machines you can download to whatever location works for you.

Okay, so I run that, and I get the little message that says: hey, this is the size of what you're trying to download. In this case it's only four megabytes, so yes, we want to proceed. That's small because this is a one-kilometer tile; if you download everything for a site, it's not unusual for AOP data to run to gigabytes. This tells me where it was downloaded, and now that it's available locally I can read it into R. I'm just going to name this object chm, for canopy height model.
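Here's a sketch of the tile download plus the read-and-plot step that comes next. The easting and northing below are placeholders, not the coordinates used in the webinar, and the folder the files land in is an assumption; substitute your own tile coordinates and check the path byTileAOP() reports.

```r
# Placeholder coordinates: replace with the easting/northing (in meters, in the
# site's UTM zone) of the 1 km tile you actually want
easting_m  <- 580000
northing_m <- 5075000

# Download a single 1 km x 1 km tile of the canopy height model (ecosystem structure)
byTileAOP(dpID = "DP3.30015.001",
          site = "WREF",
          year = "2017",
          easting = easting_m,
          northing = northing_m,
          savepath = "~/Desktop")   # files are saved locally, not loaded into R

# Find the GeoTIFF that byTileAOP() nested several folders deep, then read and plot it
chm_file <- list.files("~/Desktop/DP3.30015.001", pattern = "CHM.tif$",
                       recursive = TRUE, full.names = TRUE)
chm <- raster(chm_file[1])
plot(chm)   # canopy height in meters
```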
I'm going to use the raster() function in the raster package to read that in. If you use R a lot, you'll be familiar with the tab-complete feature here: I start writing out a file path, hit tab, and it starts showing me the contents of that folder. As we go through, you'll see there are a whole lot of nested folders, and you can just use tab-complete to keep digging down through them, all the way to the file at the bottom of the nest. The way you know you've reached the end is that instead of hitting tab and getting another folder name that ends in a slash, there's no slash; we have a .tif, we have reached an actual file. So I run that line, and now I have an object called chm, and I can just plot that chm object.

It gives me a little picture. This is the canopy height in the tile we downloaded, where the pink values are short stature and the greens are tall; the units are meters. You can see that Wind River, in the Pacific Northwest, has some very tall trees, but there's some sort of clear-cut situation here in the corner where the trees are much, much shorter. And this little error message here is just RStudio sometimes getting unhappy if you have a very small plot window; that's all it was complaining about.

So, before we break for questions, I just want to point you to where you can go to learn more. This was a very basic introduction to how to access and navigate each of the three NEON data types: sensor, observational, and remote sensing data. But obviously there's much, much more to learn. We have more than 100 tutorials on the NEON website. The series linked here is a set of tutorials we recommend as you're getting started; you'll see there's a block of introductory tutorials that go a little farther than this one, and then a pick-and-choose set of tutorials that are more specific to particular kinds of data but are still fairly introductory and designed for people who are just getting familiar with NEON data.