All right. So everybody should be seeing the neonscience.org homepage, and there's one thing I want to point out on this homepage that I think will be useful if you plan on using the Data Portal. There's this strip across here labeled "The Latest from NEON." The top item links to updates and changes to our Data Portal: it covers data releases as well as updates to major packages like neonUtilities, which we will be going over and using during this workshop. The bottom item links to our observatory status page. This is a website with up-to-date information about our actions in response to COVID-19 and natural disasters, including field site closures and reopenings. We do have some sites that are subject to tropical storms, particularly our sites in Puerto Rico, so we have had closures associated with that. If you go to that page, it will give you all the information you need on our response to COVID-19 and on which sites and domains are open or closed; you can see status as of June 14th. Also on this page is a pretty important link, the one that says "COVID-19 impacts on 2020 NEON data collection and data products." If you plan on using our data from 2020, it was likely impacted by COVID-19. So if you notice a gap in the data, I would definitely check out this link to see whether COVID-19 did in fact cause that gap. Okay, let's go back to the homepage. Now I'm going to take you through a few of these headers before I get to the Data Portal, because I think a lot of them are super useful. First we'll start in About Us. The Overview tab is the place to go if you want to learn more about NEON's history, design, and management. But I think the Advisory Groups tab is going to be of more interest to you.
There are two different types of advisory groups. The first is the Science, Technology & Education Advisory Committee. This is made up of experts in ecology and related fields who are external to NEON, and they advise Battelle, which manages NEON, and NEON program staff on the planning and operation of NEON in general, giving strategic guidance on how we can succeed and how we can prioritize our activities. So if you are a NEON super-user and you're deeply invested in NEON, I recommend checking this out and learning more about this advisory group. The other advisory groups we have are our technical working groups. These are made up primarily of external experts, but also some NEON staff, and they advise us on a variety of things, in particular data collection and processing methods and NEON infrastructure. We have technical working groups on specific data products, like fish, and we also have a technical working group on community engagement. I help lead the community engagement one, and we talk about how we can better engage with the community; it's really useful to have advice from folks outside of NEON. So if you're interested in advising NEON and you're an expert in a data product, I would check this out. The last thing I want to point out in About Us is the Contact Us page. If you scroll down, we have this contact form, and this is not just any ordinary form: when you fill it out, it actually generates a service ticket that is sent to a NEON team and then to a specific NEON scientist to answer your question. So if you have a question about NEON, you can always reach out to Claire and me, but I think this form will get you to the correct person right away in a more efficient manner. You can put anything in this form, from "I'd like to do a site visit or an engagement activity with you" to
"I have this very specific question about a line of code in this tutorial," and it will get to the correct person. So I highly recommend using this form if you have questions. I'm going to skip past the Data & Samples tab because we'll come back to it when we go over the Data Portal, but I want to point out the Field Sites tab. We have this really great Explore Field Sites page, with a pretty cool map view where, if you zoom out, you can see all of our different field sites across the United States. And you can click on an icon; let's click on PUUM, which is our site in Hawaii. You can learn a little more about the site, and if you hit Site Details it will take you to that specific site's page. You can also search for a site in the table view; I'm going to search for that site code and it comes up right here. The other thing I want to point out on this page is the "Download field site table (CSV)" button. The file will open in your spreadsheet environment, likely Microsoft Excel for most of you, and it covers the metadata for all of our field sites in one place. It's a really great snapshot, and it also includes information on permitting and how to access our field sites. So if you want to learn about all of our field sites in one snapshot, this is the spreadsheet to download. Okay, I am not going to cover the Impact or Get Involved tabs, because I'll be talking about those on Thursday during my educational resources talk on how to bring NEON into your classroom. But I do want to point out that under the Resources tab we have a Learning Hub, and within it, tutorials. Over the course of this workshop we're going to be working with a lot of different tutorials, particularly for the breakout sessions. So I just want to let you know that they are within the Resources tab, under Learning Hub.
And they are a wonderful resource that I'm excited to work through with you. All right, now let's get a little more into the data. If we go to the Data & Samples tab, you can see the Data Portal up here, and I'll get to that in a minute, I promise. But first I want to cover some other really great web pages associated with our Data Portal, and I want to point out that a lot of these links are available within the Data Portal as well; this is just another way to navigate to them. First I want to take you to the Samples & Specimens page. If you were able to come to our introductory webinar on NEON, you would know that not only do we have a Data Portal with over 180 data products, we also have a biorepository. We archive biological, genomic, and geological samples and specimens from our terrestrial and aquatic sites, and many of them are stored at the NEON Biorepository, which is managed by Arizona State University. This webpage will tell you how to find all the different samples we have and how to request access to samples. But if you're going to remember anything from this webpage, it's this Biorepository link. If you click on it, it takes you to the Biorepository data portal, which is similar to the NEON Data Portal, only focused solely on samples and specimens, and you can use it to search for them. I want to point out that we have over 103,000 samples from over 700 taxa, so there's a lot for you to look at and explore here. And if you're curious about it, we have the contact information you need; you can always contact Claire and myself as well. Claire: "Let me jump in for a second and say that those of you who are signed up for the biorepository breakout session will also hear much more about that at that time." Yes, thank you so much, Claire.
I forgot to mention we do have a breakout session coming up. Okay, so back to the Data & Samples page one more time. I'm not going to go deep into the collection methods pages, because we have tons of them, but we have pages on the sampling design and sampling frequency for each of our science systems, airborne remote sensing, automated instruments, and observational sampling, as well as very specific pages on the sampling design and collection frequency of broad groups of data products. I want to point out that even more specific information lives on the data product details page of every data product. For general information on our sampling design and collection frequency, these are the pages to go to, but if you're looking for the exact methods of how we collected data for a specific data product, you're going to go to the data product details page, which I will get into, because that is within the Data Portal. What else under Data & Samples is really important for you to know about? The Data Policies & Citation Guidelines page. First and foremost, it points out that we follow FAIR data principles, meaning our data are findable, accessible, interoperable, and reusable. Secondly, it covers our data usage policy as well as, and I know I'm scrolling quickly, how to acknowledge and cite NEON. It has all the information you need on how to cite downloaded data, samples and specimens, our documents, our code packages, and our educational resources; it's got everything to make citing our information as easy as possible. One thing we ask is that when you use our data and publish with it, you cite us, and more information about that is on this page. Next, I'm going to go into Data Management, which will be pretty useful when you're using our data.
First is the Data Availability page. This talks about how quickly new data are published after they're collected, and it goes over the data latency period for the observational system, the instrument system, and the airborne remote sensing system. Typically, data from our instrument systems are available quicker than our observational or airborne remote sensing data. It gives you a general overview of the backlog of data and how long it will likely take to be processed before it's available on our Data Portal. Back to Data Management: data formats and conventions. We have a lot of data, and that means we had to come up with a system to format and name all of it, and it got pretty complicated. This page gives you a general overview of our formats and conventions: it covers our different file names and how we set up our packages, and it covers our abbreviations, because we have a lot of abbreviations associated with NEON in general and with our data products. For example, we had to have timestamp abbreviations. So this is a good page to go to if you're curious about all of those conventions and formats in a general way; you can learn about more specific naming conventions on the data product details page. Continuing in Data Management, I am not going to talk about data processing, because we will have a talk on that from another NEON scientist, but I just want to point out that it is within this Data Management tab, also within the Data Portal. Before we get to the Data Portal, the last thing I'm going to talk about is data quality. This web page goes over NEON's general data quality program and how we do quality assurance and quality control at NEON. This is a general overview. If you are interested in specific data quality issues for a specific data product, let's say you notice an outlier in your data.
When you're analyzing it, you're going to want to look at the data product details page and the quality flag files within it; this general page would not be the place to go. This is just your overview of how we do data quality at NEON. Now that we've run through pretty much everything in Data & Samples that's not within the Data Portal, I'm going to head into the Data Portal. There are multiple web pages within here. When we actually want to search for and download data products, we'll go to the Explore Data Products page, but first I'm going to point out a few other pages that are likely to be really useful to you. First is the Spatial Data & Maps page. We have spatial data layers and maps that are openly available for you to download, including shapefiles, KMZ files, and printable maps. Shapefiles can be opened in GIS programs, and KMZ files can be opened directly in Google Earth. All of this data lives in this separate area rather than on our Explore Data Products page because it just didn't fit there. So if you're interested in NEON geographic data, this is the page to go to, and I recommend heading here. All right, back to the Data Portal: the Document Library. This is the place to go if you want to find any documentation about NEON. We have general documentation on our science designs, how NEON was set up, and the history of NEON. We have documentation on our algorithms. We have site characterization reports, meaning how each site was set up right when we got there and why we chose that site. We have documentation on that spatial data here as well. But there are two types of documentation that I think are probably the most useful to you. First, the data product user guides, which are all on that data product details page that I keep bringing up and that I promise we will get to. And second, our protocols.
Our protocols are essentially methods documents for observational system data collection, for both aquatic and terrestrial data collection. These are really great if you have worked with the data, you're trying to publish, and you're writing up your methods section wondering, wait, how did they actually collect the data? You can use these documents to help you write that section. However, something I want to point out is that we revise our protocols, and we include all of our past versions on our website. So if you search for a particular protocol, it will bring up past versions as well as newer versions. For example, if I search "mammal" for the small mammal trapping protocol, we get multiple small mammal trapping protocol documents. We know it's the same protocol because the number is 00481; every single document here says 00481, so they're all the protocol for small mammal trapping, but we have different versions indicated by a letter. Most of these are old versions, so when you are citing NEON and writing your methods section, you're going to want to use the newest version: look for the latest letter in the alphabet, which for this one I think is L. Okay, hopefully everyone is still with me. We're going to head back to neonscience.org and back within the Data Portal tab. The last thing I'll talk about before we really get into exploring data products is the API page. API means application programming interface. Our API can be used to quickly access data, as well as information about our data products, samples, and sampling locations. It's a great, efficient way to get our data, and all the information you need to know about our API and the packages associated with it is in here. I would say if you are a power user and you're interested in doing things beyond what the point-and-click Data Portal is capable of, this is the place to go.
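As a quick illustration of the kind of request the API page describes, here is a minimal Python sketch that builds request URLs for a data product and for a site/month data listing. The base URL and endpoint layout follow NEON's public API documentation, and the PAR product ID is the one used later in this walkthrough, but treat the exact paths as assumptions to verify on the API page rather than guarantees.

```python
# Sketch: building NEON API request URLs (no network call made here).
# Endpoint layout is assumed from NEON's public API docs; verify there.

BASE = "https://data.neonscience.org/api/v0"

def product_url(dp_id: str) -> str:
    """URL for metadata about one data product, e.g. PAR (DP1.00024.001)."""
    return f"{BASE}/products/{dp_id}"

def data_url(dp_id: str, site: str, month: str) -> str:
    """URL listing downloadable files for one product at one site and month."""
    return f"{BASE}/data/{dp_id}/{site}/{month}"

print(product_url("DP1.00024.001"))
# To actually fetch the JSON response, you could use the standard library:
#   import json, urllib.request
#   with urllib.request.urlopen(product_url("DP1.00024.001")) as r:
#       meta = json.load(r)
```

Packages like neonUtilities wrap these endpoints for you, so most users never need to build URLs by hand; this is just to show what is happening underneath.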
If you're not a power user, then it's a lot of information you wouldn't necessarily need to look through; but if you are curious, we have a whole section on it ready for you to check out. All right, now that we've been through all that great background information, we're actually going to get into looking for data products. I'm going to go back to our Data Portal and to the Explore Data Products page. This is where the magic happens: this is where you can search for and download our data products. As you can see, we have these tiles right here, and each tile is a specific data product. On the left, we have filters. You can filter by science team: we have the airborne observation platform, which is our remote sensing data, the aquatic instrument system, the aquatic observation system, the terrestrial instrument system, and the terrestrial observation system. We have our different sites, states, domains, and even themes; if you're curious about atmospheric data products, you can click that. As you filter, you'll get a variety of tiles, each tile being a specific data product. Something I want to point out before we actually look for a data product is this data product catalog right here. Let's download it and check it out so you can see what it is. While it's loading: this is pretty much a snapshot of the metadata for every data product we have, and I'm going to go through it column by column. On the left, we have our data product ID. Every single data product has a unique ID. The first part, the two letters and a number, is the level number. So here we can see a one, and that's a one here too. That's an indication of how refined the data product is; some of our data products take a lot of refinement and work to get into a format that you can actually use.
A lot of our IS data, the instrument system data like wind speed, this top one, doesn't need that. But most of our remote sensing data we need to work with a lot to get it into a format where you can actually download it, so it will fit on your computer and you can use it; we have to refine that data quite a bit. As you can see, this remote sensing spectrometer product from the airborne system is a level three, meaning we've refined it quite a bit more. Then we have a five-digit code, which is just a unique code for the data product. And lastly, we have the three-digit code at the end, the 001. That indicates whether or not the data product has been majorly revised. I don't know of a data product that has been revised yet; I'm thinking the pathogen data product for small mammals might be soon. Claire, can you speak on that at all? Claire: "Yeah, you have it exactly right. We have a data product revision that is imminent for the rodent pathogen data product. Basically, we're shifting from hantaviruses to tick-borne diseases in terms of what we're looking for in those rodent blood samples. That's the scale of change that would be a data product revision: the data from before and after the change are not really comparable, because we're doing something different in terms of exactly what we're targeting with those rodent-borne pathogens. The plan is that making that kind of revision should be pretty infrequent. So right now you'll see all of those end in .001; soon there will be one that ends in .002, but you won't see a ton of those." Thanks for clarifying that, Claire. So that's what the data product ID means. Next in the catalog we see the level, which, like I said, refers to that level number; the name of the product; whether or not it's available; and the URL to that data product details page.
Then the first month it was available (some of our products have been available since 2013), the latest month available, the science system it's under, and a quick description. So this is, like I said, a very quick snapshot of the metadata associated with our data products. Okay, let's look for a data product. The filters are useful; however, personally, I think the search bar will get you where you need to be quicker if you're looking for a specific data product. Today we're going to be working with photosynthetically active radiation, which is essentially the amount of light available to plants for photosynthesis. When you think about it, it will be highest in the summer and in the middle of the day, when the sunlight is most intense, and lowest, practically nothing, at night, and lower in the winter. Photosynthetically active radiation is also known as PAR, so I'm just going to search "par" and see what comes up. It says 118 products from 81 sites, and that's because the search matches anything that has those letters in it, so lots of different data products come up. However, the one we want is the first one: photosynthetically active radiation. We're going to go to that data product details page I keep talking about; it's right here, with the information icon next to it. I'll click on it, and it loads that page. If you are doing anything with one of our data products, I highly, highly recommend looking through this page. It will give you all the information you need to know about that data product, and it will make things so much simpler if you spend some time here. It has pretty much everything: the unique data product ID, a description and abstract, and the science system it's part of. Because this is a sensor, it's part of our terrestrial instrument system.
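To make the data product ID structure from the catalog concrete, here is a small Python sketch that splits an ID like the PAR one into the pieces just described. The field meanings come from the walkthrough above; the helper itself is illustrative, not an official NEON utility.

```python
# Sketch: unpacking a NEON data product ID like "DP1.00024.001" into
# the pieces described above: processing level, product code, revision.

def parse_dp_id(dp_id: str) -> dict:
    level_part, code, revision = dp_id.split(".")
    return {
        "level": int(level_part[2:]),   # 1 = lightly processed, 3 = heavily refined
        "product_code": code,           # five-digit code unique to the product
        "revision": int(revision),      # bumps (001 -> 002) only on a major revision
    }

print(parse_dp_id("DP1.00024.001"))
# -> {'level': 1, 'product_code': '00024', 'revision': 1}
```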
It tells you when it's been available: we first started collecting data on this in 2013 and we're still collecting it. You can download and copy this citation and save it for later, for when you publish, so you don't even have to worry about making it yourself. It goes over our collection and processing of this data product, so you can get a better idea of how we do that. Then we have this really great documentation section. I urge you to please spend some time with these documents for any data product you're using, because they go over really important quality and methods information that will be useful when you're analyzing the data. Here, because it's an instrument system data product, we talk about our algorithms and we really get into QA/QC and the different quality flags. For a lot of our observational system data, like that small mammal trapping product I showed you earlier, we also include the protocol documents as our methods. Anyway, I really highly recommend reading these; it will be really useful when you start actually analyzing the data. And speaking of issues and quality flags, we have this issue log table. This tells you about any issue we might have had at any site with a sensor associated with photosynthetically active radiation. If we click on this one, we can see that at this site a fallen tree destroyed the sensor mount, and it affected the data for 80 days. So if you notice a gap in the data or some irregular data, you can trace it back to this issue log and see the exact site and the exact dates. This is really useful if you're trying to get a better picture of why your data might have quality flags on it. We're going to skip this next availability and download section, but I just want to point out that you can download the data from this page as well, right here. So we'll skip this section.
We're going to get into this visualization section. I think these are so cool. We have time series visualizations for some of our instrument system data products, as well as for our airborne remote sensing data products. Why they're cool is that they give you a look at what the data looks like without having to download it and work with it, so you can get a taste of what you're actually looking at. For photosynthetically active radiation, on the y-axis we have micromoles per square meter per second, because that's what we measure it in, and on the x-axis we have the date. Looking at it, we see these peaks and valleys, which makes total sense for photosynthetically active radiation: we have none at night, because there's no sun, and peaks during the day, when the sun is at its highest. So it's pretty cool to be able to actually view this data. The other really cool thing about these visualizations is that you can do a lot with them and filter them. Let's add a site: Talladega National Forest in Alabama. That's shown versus our Abby Road site in Washington, the blue one, and we can see that the Talladega National Forest site gets a lot more photosynthetically active radiation than our Abby Road site, which makes sense: the Abby Road site is in a state with probably more cloud cover, whereas the site in Alabama definitely sees more sunlight. You can also filter the date range, and you can add quality flags; let's see if any of this data had quality flags. I just want to point out that there are so many different options with these visualizations, and I think they're really nice for seeing what the data looks like without even having to download it. We're going to look at one more visualization before we actually start downloading data, so I'm going to go back to that Explore Data Products page.
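As an aside, the day/night peaks-and-valleys pattern just described is easy to mimic with a toy model. This Python sketch uses a sine curve clipped at zero; the peak value and sunrise/sunset hours are invented for illustration and are not NEON measurements or NEON's algorithm.

```python
# Toy sketch of the diurnal PAR pattern: zero at night, a midday peak.
# Units match the plot's y-axis (umol m^-2 s^-1); the peak is made up.
import math

def toy_par(hour: float, peak: float = 1500.0) -> float:
    """Clipped sine: sunrise ~06:00, solar noon ~12:00, sunset ~18:00."""
    value = peak * math.sin(math.pi * (hour - 6.0) / 12.0)
    return max(0.0, value)

print(round(toy_par(12)))   # 1500 (midday peak)
print(round(toy_par(0)))    # 0 (midnight, clipped to zero)
```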
Back on the Explore Data Products page, I'm going to click this AOP Data Viewer. For those of you who are interested in AOP data, I think you're going to find these really cool and interesting. We have a data viewer for our airborne remote sensing products similar to the one we just saw for the time series instrument data. If you click AOP Data Viewer for this data product, the high-resolution orthorectified camera imagery, it takes you to this page. When you're zoomed out it's kind of hard to see what you're looking at, but if you zoom in, you can see you're looking at a landscape. This is from that remote sensing data: images of the landscape, and you can zoom in really far, and it's super cool. Why this is neat is that AOP data takes a lot of space on your computer, it takes a lot to download, and it's kind of complicated, so being able to see the data right now, in real time, is pretty great. And you can change your site: here we're looking at that Abby Road site in the Pacific Northwest, but we can look at other sites. Let's take a look at MOAB. That's very different: it's a canyon site with a lot less vegetation, and you can see the rivers running through it. Very neat. All right, let's check out the other AOP data viewer, for vegetation indices from the spectrometer, because I think it is just super cool, and I saw that quite a few people were interested in our remote sensing data products. Vegetation indices, a lot of them, are essentially asking: how green is the site? So let's go back to MOAB, because you'd think it's probably not super green. Let's choose NDVI down here, because that is a measure of greenness. And wow, this looks really cool: if we zoom in, we can see these big dark spots, basically saying there's not a lot of vegetation in those spots, they're not very green, and we can see light spots all along the waterways, which makes sense.
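For reference, NDVI, the greenness index chosen above, is computed from near-infrared (NIR) and red reflectance; higher values mean greener. The reflectance numbers in this Python sketch are invented for illustration, not taken from the MOAB flight data.

```python
# Sketch: the standard NDVI formula, (NIR - red) / (NIR + red).
# Values near 1 indicate dense green vegetation; near 0, bare ground.

def ndvi(nir: float, red: float) -> float:
    return (nir - red) / (nir + red)

print(round(ndvi(0.50, 0.08), 2))  # dense vegetation: close to 1
print(round(ndvi(0.30, 0.25), 2))  # bare ground: near 0
```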
Those light spots along the waterways are probably where plant life can survive in the near-desert ecosystem at MOAB. Anyway, I think these viewers are super cool; they're really nice for seeing the data without having to download it, especially for this remote sensing data, because it's so big. So let's go back to the Explore Data Products page. Okay, remember earlier when I said the last ten minutes of this would be all of us working together to download this data product? We are at that point. I would like you all to get to the data.neonscience.org Explore Data Products page. Claire, would you mind throwing that link in the chat so everyone can get there? I'm going to give you a minute to get there and stop talking, and then we'll get started with downloading the data. I'll give everyone a few more seconds, because this is a really important part: you need to do this step in order to participate in the next one. Okay, hopefully everyone has made it here; if not, Claire has likely put the link in the chat so you can get there. The first thing we're going to do is make our way over to the search bar, and I would like everyone to search "par". Hopefully everyone has done that. We're going to go to this first tile; don't worry about any of the other tiles, just this first one under the sort options. And we are going to click Download Data. Everyone see this blue button right here? Click Download Data. When you click it, you should see the page I'm sharing on my screen that says "Configure data for download." From this page, you're going to select the sites and dates that you want to download data from. If you scroll down, you'll see a big block of little gray and blue squares. On the left, we have all of our field sites.
Across the top, we have the years. A blue square means that data is available for that month, at that field site, in that year. A gray square means the data is not available, likely because of either a quality issue or the site simply not being constructed yet. And what's kind of cool, speaking of construction, is that you can see the build-out of the observatory based on when data became available: some of our sites started much earlier than others. Here, CPER, the Central Plains Experimental Range in Colorado, was one of our first sites, and you can see its data was available in 2014. Some other sites weren't constructed as quickly: BLDE, which is in Yellowstone, and our site in Hawaii weren't constructed, ready, and producing available data until 2019. So, let's pick some sites. I'm going to have you scroll all the way down, and I'm going to pick WOOD and WREF, the second-to-last two sites. Just click on them, and your screen should look like this: both of those sites highlighted in blue. So now we have the sites we want. WOOD is in North Dakota; WREF is in the Pacific Northwest. Two very different sites. Now let's go up and select the dates we want. Right now you can see the estimated size is 2.43 gigabytes. This is a lot, and I don't think we need this many months of data. So we're all going to scroll up here, look at this date range, and choose three months: September 2019 through November 2019. Everyone click on the start box, go to the year shown above the months, select 2019, then select September, and it should say September 2019. Then we're going to click the end box, select the year, 2019 again, then go to the months and select November.
Okay, I'm going to give everyone a few seconds to make sure they have the correct dates and sites selected. You should have September 2019 in the start box and November 2019 in the end box, and you should have WOOD and WREF selected. Something else I want to point out, while others are checking their selections, is that you can see the estimated size of what you're downloading right here, so you know exactly how much you're going to be putting on your computer. Okay, hopefully everyone has those selected; we're going to hit Next. Now we come to this question: do you want to include documentation? These are all the relevant documents, most of which I pointed out on the data product details page, that will be useful while you're analyzing the data. So I'm going to have you click Include; make sure the blue circle is under Include, and then we'll hit Next. Then we have the question of which package type you want. Basic includes the data product, summary statistics, expanded uncertainty, and final quality flags. Expanded includes all of that basic package information, as well as quality metrics for all of the quality control analyses, so Expanded gives you a lot of information about all the different quality flags. We're going to click Expanded, so make sure you do that. Then you'll see a page that says "Agree to policies." In order to download our data, you need to agree to our data usage and citation policies, which I talked about earlier; basically, what you're agreeing to here is that you will cite our data when you use it. So we're going to click "Yes, I agree to the NEON data usage and citation policies," and that takes you to a summary page, where we can see what we're downloading: the data product, the data product ID number, the sites and date range, the fact that we are including documentation, and the package type.
Make sure these are all selected on your screen. We also have some handy links over here to our file naming conventions to help you understand the files once you download the data. Once again, we include the information on how to cite this data product right here, and you can copy it when you need to. But for now, we should just make sure that we have the two sites, September 2019 to November 2019, that we're including documentation, and that we're choosing the expanded package type. All right, if everybody's ready, we're just going to click Download Data, and that ends my portion of the talk today. We'll give you another minute to make sure you're caught up and have downloaded the correct data, and then we have time for a few questions before we go on break. So what we're going to do now is actually start working with the data. Again, if you have questions while I'm talking, feel free to put them in the chat, and Marie will take a look at those. I'm going to share my screen, and you'll just be seeing my desktop. Before we move into R, we're going to take a look at the file of PAR data we just downloaded. You all should have this NEON_par.zip file. You don't need to follow along for this part; this is just me showing you what's in the zip file before we move into R to work with it. So if I unzip it and take a look at what's in that folder, what you'll see is a bunch of subfolders and then a handful of PDF documents. If you remember, when we were going through the download steps on the data portal, we selected the box to include documentation; that's what these PDF files are. These are all of the available documents that are relevant to the data product we downloaded. The one I just opened explains how quality flags are applied to the instrumented data products.
This is the same as the list of files that was shown on the data product details page. Okay, then we've got these six subfolders, and if you look at their names, you'll see what you'd see if you looked at the file naming conventions page, but some of it you can figure out just by looking. Three of these subfolders say WOOD, for the Woodworth site, and three say WREF, for the Wind River site. Then they have the data product ID, and you can see there's actually a subfolder for each of the three months we downloaded: September 2019, October 2019, and November 2019. So basically we have a subfolder for each site and each month. And if we look inside one of those subfolders, there's a bunch of files. I just opened the September 2019 folder for Woodworth, and you can see a set of files with a one-minute designation and a set with a 30-minute designation, so you're getting separate files for the one-minute averages and the 30-minute averages. I'll get into this a little more in a minute, but the reason there are multiple sets of 30-minute and one-minute files is that they come from different sensors at different heights on the micrometeorological tower. And then these files here are metadata files: they provide information about what's in the data files and, in the case of the sensor positions file, where each sensor is on the tower and on the globe. But as you can see, this is not ideal. We have six subfolders, and each contains files for one minute, 30 minutes, and multiple tower heights. That's a lot of files for just the two sites and three months of data that we downloaded.
That is exactly the problem we're going to solve using the neonUtilities package in R: it takes this large number of files and merges them into something more convenient to work with. So let's leave that and move to R. Okay. At this point, if everyone can follow along in RStudio, that would be great, or in plain R if you don't use RStudio, although I imagine most of you will be using RStudio. If you're not familiar with this environment, just briefly: we type commands up here and either hit Run or hit Command+Enter (Ctrl+Enter if you're on Windows), and the commands are actually run down here. Variables appear over here, and information like plots and help files appears over here. Okay, the first thing we're going to do is load the necessary packages. Oh, sorry, I meant to say what we're going to be doing. From this point on, we're going to be following a specific tutorial, linked here from the workshop schedule: the Download and Explore NEON Data tutorial. I'll be doing it in a slightly different order than it appears in the tutorial. The tutorial is laid out to download a bunch of data products and then work through each of them; instead, I'm going to download one type of data, show you how to work with it, and then move on to the next. The goal of this tutorial is just to give you enough familiarity to explore a specific data set within the sensor data, the observational data, and the remote sensing data, and to make a simple figure. We're not going to be doing anything scientifically sophisticated here; the idea is just to get familiar and to understand the tools available to help you work with these data. Okay, coming back to loading packages.
First, we need neonUtilities, and the other package we'll need for this particular tutorial is the raster package. neonUtilities is a package developed specifically for working with NEON data, with tools built directly for NEON data; I'm actually the maintainer of the neonUtilities package. The raster package, on the other hand, is unaffiliated with NEON; it's a package from the broader R community for working with raster data in general. It's useful for NEON remote sensing data because a lot of those data products are delivered as rasters, but the raster package itself isn't NEON-specific. Okay, so load those two packages. If you have any trouble loading them, that probably indicates something went wrong with the installation, and you can try rerunning the installation with the install.packages() function. What we're going to do first is take the PAR file we all downloaded and do the merge I just described, using a function called stackByTable(), which is in the neonUtilities package. And if you haven't used RStudio much, here's a really handy thing it does for you: as you start typing, it tries to infer what you're headed toward and fill in what comes next. So if I type "stackBy", it says, hey, there's a function called stackByTable in the package neonUtilities, and this little yellow box shows the inputs I might want to pass to that function, which can be really helpful. It also has a nice feature called tab completion: at this point, if I've typed "stackBy" and I hit the Tab key, it finishes it for me. Basically, by hitting Tab, I'm saying yes, stackByTable is what I want, finish typing it for me.
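Loading the two packages looks like this. A minimal sketch: the requireNamespace() guard is only there so the snippet fails gracefully if the packages aren't installed yet.

```r
# Load the two packages used in this tutorial. If either check fails,
# run install.packages("neonUtilities") or install.packages("raster") first.
have_neon   <- requireNamespace("neonUtilities", quietly = TRUE)
have_raster <- requireNamespace("raster", quietly = TRUE)
if (have_neon)   library(neonUtilities)
if (have_raster) library(raster)
```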
The only required input to stackByTable() is the file path to the location where you saved the downloaded file. There are a few other optional inputs that we're not going to go into here; we're just going to put in a file path, in quotes. This part will be different for each of you, because what you type is the path to wherever you saved the file on your computer. In my case, I saved it on the desktop. I can actually use tab completion here as well: if I hit Tab at this point, it gives me the list of files in the location I've specified, so I can just select NEON_par.zip. For those of you working on Windows in particular, file paths tend to look a little different there; speak up in the chat if you're having any trouble, but hopefully you've all been able to construct the path to the place where you saved the zip file. Now I hit Command+Enter, and it starts working, showing a little progress bar while it does its thing. It's stacking the table of one-minute data, so it takes a moment. All right. If you were watching, you saw that the 30-minute data went much faster than the one-minute data; that makes sense, because those data files are 30 times smaller. I got a report that it stacked both tables, plus a little information about what else it did and how long it took, so it seems like everything was successful. Hopefully that all worked on your machines as well. Now let's look at what I have on my desktop. But first, coming back to R for a second: notice my environment is still empty; I haven't actually brought anything into the R environment yet. Instead, I now have this NEON_par folder. It still has all of those documents.
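The stackByTable() call itself looks roughly like this. The path is an example for my machine, so substitute your own, and the file.exists() guard just keeps the snippet harmless if the zip isn't where it expects:

```r
# Path to the portal download -- adjust to wherever you saved it.
# On Windows this might look like "C:/Users/you/Desktop/NEON_par.zip"
zip_path <- "~/Desktop/NEON_par.zip"

# Merge the per-site, per-month files into one table per averaging interval
if (requireNamespace("neonUtilities", quietly = TRUE) &&
    file.exists(zip_path)) {
  neonUtilities::stackByTable(filepath = zip_path)
}
```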
So we still have the documentation about the data product we downloaded, but instead of the subfolders we had before, we now have this one stackedFiles folder. If we open that up, instead of multiple files for different heights on the tower, we just have one file for the one-minute data, one file for the 30-minute data, and then the metadata files. Let's take a quick look at that 30-minute file, just to see what we have. Let's start with these two columns, startDateTime and endDateTime: those are the start and end of the averaging interval, the half hour whose data have been averaged into each record. Then we have the data values, starting here: PAR mean, PAR minimum, PAR maximum, variance, etc. If we scroll out to the right, we start to get to quality flags. Everything here is zeros and 100s because all the data we can see have been passing their quality checks: the zeros indicate the flag hasn't been raised, and the 100s indicate that 100% of the data that went into calculating the record passed their quality tests. Here on the left, we have the information that was in the file names before we did the stackByTable() step: these data come from Woodworth, from vertical position 010 on the tower, which means the first tower position off the ground. If we scroll down, you can see, okay, here's the data from Wind River, from the seventh position off the ground, and so forth. Basically, it took each of the individual files in the initial package we downloaded and stacked them vertically, making one much longer file out of all of them. Now, suppose you're looking at this and thinking, okay, PAR mean and PAR minimum seem fairly obvious, but what is this PAR range-fail quality metric?
There's a file whose name starts with "variables"; let's take a look at that. This file is where you can find the definitions for each of the column headers in the actual data file. Here you see, okay, PARMean: arithmetic mean of photosynthetically active radiation; that should be pretty clear. And there are longer descriptions for the fields that are less obvious, like the quality metrics. You'll also see that a subset of these are listed as having been appended by stackByTable: the site IDs and tower positions that we merged in, so the file explains what those are too. This variables file is a good place to go if you just want basic information about exactly what each of the data fields you've downloaded is. Okay, let's go back to R. I'm seeing in the chat that a few people are having trouble with the zip file or with the file path. My number one suggestion if you're having that problem is to try using the tab completion I showed, because that keeps you from entering typos: the Tab key does it for you. The other possibility: I'm seeing someone saying they've used this function before. If there's any chance you used it with an older version of neonUtilities, you may need to update the version on your computer. We had a pretty large release at the beginning of the year, a release of data and of new functionality, and if you're trying to use this with any neonUtilities version that starts with 1, it's not going to work; you have to be using neonUtilities version 2 or higher. So give that reinstallation a try and let me know if it doesn't work. Okay.
What we're going to do now is take that PAR data, which is now merged but still sitting in local files, and read it into R. You could do that with any function in R that reads tabular data, but there's actually a function in the neonUtilities package that takes advantage of the information in the variables file to do a couple of extra steps for you. Basically, it takes that information and says, oh, startDateTime and endDateTime are dates; great, I'll do the date conversion for you on ingest. That function is called readTableNEON(), and it has two inputs: one is the file path to the specific data table you want to ingest, and the second is the file path to the variables file, because it needs the variables file to do that interpretation. For the first input, I'm going to go into the stackedFiles folder and choose the 30-minute data, since at this point we're just going to read in the 30-minute data. For the second input, I can copy and paste that line and this time select the variables file. And now I need to give this a name: since we're actually reading something into the R environment, we need a variable name. I'm calling it par30, because this is the 30-minute PAR table. Okay. While everyone catches up on reading in that file, let me see if I can figure out what the problem might be for those who can't identify the file. If you're having that problem, can you type into the chat the exact line of code you entered into R? Not just the error message, but what you typed that resulted in it. Yes, one person had trouble with the file path being too long. That is a real problem on Windows: you can't have a file path longer than 260 characters.
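The readTableNEON() call is sketched below; the exact file names inside stackedFiles may differ slightly on your machine, so tab completion is your friend here too:

```r
# Paths into the stackedFiles folder created by stackByTable();
# adjust the base path, and check the exact file names on your machine
data_file <- "~/Desktop/NEON_par/stackedFiles/PARPAR_30min.csv"
var_file  <- "~/Desktop/NEON_par/stackedFiles/variables_00024.csv"

# readTableNEON() uses the variables file to assign column types,
# including converting startDateTime and endDateTime to date-times
if (requireNamespace("neonUtilities", quietly = TRUE) &&
    file.exists(data_file) && file.exists(var_file)) {
  par30 <- neonUtilities::readTableNEON(dataFile = data_file,
                                        varFile = var_file)
}
```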
If that's the problem, then yes, move the file somewhere with a shorter path. Okay, hopefully everyone now has this par30 object read in, and we can take a look at what we've got. If you view par30, we have the same thing we just saw in Excel; we've loaded that same table. If you look at both, you'll see the startDateTime and endDateTime formatting looks slightly different, and that's because the time conversion was applied when the data were read into R. Okay. What we can do now is make a little plot. We'll take the data we just read in and plot PAR mean as a function of startDateTime, and the data we're going to use is the par30 table, which we're going to subset to just the top of the tower: we want verticalPosition equal to "080". Let me make sure that's actually how it's written; yes. And we're going to make it a line plot. My plot space is going to need a little more room, and, hey, look at that. This is basically a different version of the plot we saw in the data visualization tool on the NEON data portal: the sun comes up and the sun goes down every day. And you can actually see here, as you move from September through November, the days get shorter. Well, not so much the days getting shorter, you can't really see that on this plot, but the sun angle shifts, and you get less radiation each day. So, cool: basically, we've demonstrated that the data we downloaded are in fact measuring radiation, and we can make a plot of them. Okay. What we want to do now is move to another data product. We're going to try an observational data product this time, from a different data subsystem.
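The plotting call is the standard base-R formula interface. Here it runs against a tiny stand-in data frame with made-up values, so the pattern is self-contained; with the real download, par30 comes from readTableNEON() instead:

```r
# Stand-in for the par30 table read in with readTableNEON():
# two days of half-hourly records at the top tower position
par30 <- data.frame(
  startDateTime = as.POSIXct("2019-09-01 00:00:00", tz = "UTC") +
    seq(0, by = 1800, length.out = 96),
  PARMean = pmax(0, sin(seq(0, 4 * pi, length.out = 96))) * 1500,
  verticalPosition = "080"
)

# Line plot of mean PAR over time at the top of the tower
plot(PARMean ~ startDateTime,
     data = subset(par30, verticalPosition == "080"),
     type = "l", xlab = "Date",
     ylab = "Mean PAR (umol m-2 s-1)")
```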
And the way we're going to download this next set of data will also get us away from these file path problems. To work with the PAR data, we basically did three steps: we downloaded the data from the data portal, we used stackByTable() to merge the files, and we used readTableNEON() to load the data into the R environment. neonUtilities also has a function that does all three of those steps, not at once, but in order, with a single function call: it uses the API to do the download, and then it runs the stackByTable() and readTableNEON() functionality. So you only have to write one statement to get the data and read it into R, and we're going to use that for this next bit. Before we get to the next data product, I think it's nice to show what we would have done to get the data we already worked with using this method. So I'm going to type this line out for you. The function is called loadByProduct(), and you can see again that it's in the neonUtilities package. The inputs we'd need are the data product ID, which, if you remember from the data portal, is right here: in this case, DP1.00024.001. That's our first input. Then we need the same criteria we used on the data portal to subset to the specific data we want. We need to specify the sites, which in our case were Woodworth (WOOD) and Wind River (WREF). We also need a start date, which in our case was 2019-09, and an end date, 2019-11. That's what we would have needed to get the PAR data we were just working with, so if you're having trouble with the file paths, give that a try.
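Put together, the single-call equivalent of the portal download would look like this; it's guarded to run only in an interactive session, since it triggers a real download:

```r
# Criteria matching what we chose on the portal
dpID      <- "DP1.00024.001"    # PAR
sites     <- c("WOOD", "WREF")  # Woodworth and Wind River
startdate <- "2019-09"
enddate   <- "2019-11"

# Download, stack, and read in one step
if (requireNamespace("neonUtilities", quietly = TRUE) && interactive()) {
  parList <- neonUtilities::loadByProduct(
    dpID = dpID, site = sites,
    startdate = startdate, enddate = enddate,
    package = "expanded"
  )
}
```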
But what I'm actually going to do is move on to an observational data product, so we can start looking at what those look like compared to the sensor data. Our example data product from the observational system is aquatic plant chemistry. First thing we need to do: we're going to call the object apchem, and we're going to use the loadByProduct() function. The first thing we need is the data product ID, so if we go back to the data portal and filter, instead of looking for PAR, we're now looking for aquatic plant chemical properties, which conveniently is the first thing that shows up if I start typing "aquatic". We can see right here that the data product ID is DP1.20063.001, so that's what we're going to use. Then we're going to pick some sites. In this case, we're going to download data from three sites: Prairie Lake (PRLA), Suggs (SUGG), and Toolik (TOOK). I believe Suggs is in Florida, Prairie Lake is somewhere in the upper Midwest, and Toolik Lake is in Alaska. Thanks, Marie: it's North Dakota where Prairie Lake is. In this case, we're actually not going to specify a start date and an end date. If you leave those empty in the loadByProduct() statement, it just downloads the data for all time. We don't do that for sensor data, because sensor data tend to be so large that it would involve downloading an enormous amount. The observational data, for the most part, are much smaller, simply because of physical limitations: humans are never going to be out there measuring things as frequently as a streaming sensor, so they just don't generate the same data volume.
We do specify the data package, which makes me realize I forgot to include that up above when recreating what we did on the portal: we did need to specify the expanded package there too. If you don't include anything for the package, loadByProduct() defaults to the basic package, so if you want the expanded package, you need to specify it. So we should be good to go, having specified the data product, the sites, and the data package. Okay, the first thing it does is say it had to go find the available files, and it gives a progress bar for that; it did it fairly quickly. Then, just like the data portal, it tells us how big the download is going to be, and, again going back to observational data being much smaller than sensor data, it's less than two megabytes. It asks, do you want to go ahead and download this? We just type y for yes, and off we go. We get some more progress bars telling us it's downloading. All right, then it did the same stacking step it did with the portal data; all those progress bars just went by very quickly because there isn't a ton of data, and it didn't take much memory to run through it. Here it's reporting that it did all of these things successfully, and telling us at the end that it did everything very fast. Okay, hopefully that's working on your computers. It's likely to take a little longer on your machines than on mine, just because I'm on the internal NEON network, which makes things a little faster. What we can see is that we now have this apchem object in our environment. And that object is actually a list. If you remember what we got from the PAR data, looking back at it, there were something like five different tables of data and metadata.
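The full call we just ran looks like this; as above, the guard keeps the sketch from kicking off a download outside an interactive session:

```r
# Aquatic plant chemistry for all available dates at three lake sites
dpID  <- "DP1.20063.001"
sites <- c("PRLA", "SUGG", "TOOK")  # Prairie Lake, Suggs, Toolik

# No startdate/enddate: all available months are downloaded.
# package defaults to "basic", so "expanded" must be stated explicitly.
if (requireNamespace("neonUtilities", quietly = TRUE) && interactive()) {
  apchem <- neonUtilities::loadByProduct(
    dpID = dpID, site = sites, package = "expanded"
  )
}
```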
And so, when loadByProduct() downloads, stacks, and loads all of that, it includes all of those data and metadata files, and they come in as a named list: that's the object we've created, and it actually has multiple data tables in it. You can see that if you just look at the names of the apchem object. Let's start at the bottom of that list of names. We have a variables file, just like we had for the PAR data, and a couple of other metadata files. Then this first set of tables, the ones whose names start with apl_, are the actual data tables. Let's take a look at one of those. I've been typing down in the console, but I want to type somewhere where it will be preserved, so: view the apchem object, and let's look at the external lab data per sample table. Okay, before I get into the details, let's just look back at this list of tables. In the sensor data, the PAR data, we had two tables: one at a one-minute averaging interval and one at a 30-minute averaging interval. That's fairly straightforward: it's the same data, just summarized to different degrees. It's different in the observational data products. Typically, for the observational data products, what you get is data tables associated with a particular sampling activity or process. In this case, if you were interested in working with this data product, probably the first thing you'd want to do is read the data product user guide, which walks you through it: here's what they do when they sample for aquatic plant chemistry; they harvest aquatic plants, do some level of identification, dry them, and send them out for analysis. And those are different activities.
And you can kind of see that in these table names: there's a clip harvest table, a biomass table, and then the analysis at an external lab, which is where the chemical analysis happens. So you get different information about the same protocol in different data tables, and in some cases you need to bring those tables together, doing some table joins, to get all of the information you want in one place. That's what we're going to walk through now. If we look at this external lab data table, you see, similar to the sensor data, there's some information about the site where the sampling happened and the date it happened. Now there's a sample ID, which isn't too surprising: something to identify the physical sample that was collected. There's an indication of where the analysis happened, at what lab. Scrolling over a little farther, there's some stuff that apparently wasn't relevant for these records, and then we get to the analyte, carbon and nitrogen here, and then the actual value for that analyte. This is kind of the heart of the matter: this is where you get the actual data for the carbon and nitrogen content of these plants. The rest of this information, well, there are always debates about exactly what is data versus metadata, but it's all information you need to be able to interpret those carbon and nitrogen numbers. And again, as with the sensor data, the variables file is going to be really useful for understanding this. What you'll see in the variables file is information about the fields present in each of these tables: the biomass table has information about samples, a chemSubsampleID, a scientific name.
The clip harvest table has a lot of geographic information. Again, for really thorough information you'd want to look at the user guide, but you can start to get a sense, just from the variables file, of what's in each table and what you're going to learn from it. Now, it can be a little annoying to work with data as a named list and have to keep pulling everything out of this apchem object. I tend to work that way myself and just keep everything in the list, but there's also a handy function called list2env(), basically "list to environment", which takes the apchem object and writes everything in it out to the global environment. If we do that, you can see over here that we now have individual objects in the environment for each of the data tables from the apchem object, which can make things a little easier to work with. What we're going to do now is look at some of the numbers from the chemical analyses of these plants. I've been slacking off on writing comments explaining what we're doing; we're going to make a boxplot of analyte concentration, which, as we saw over here, is where the actual data live. Let's look at analyte concentration as a function of site ID, and the data in this case are the apl_plantExternalLabDataPerSample table. We're going to look at just one specific analyte, so we subset to the records where the analyte is d13C: the stable isotope ratio of carbon-13 in plants across these three sites. I'll put some friendly axis labels on there so we know what we're looking at. And here we see the range of 13C isotope ratios across those three sites. We can see that SUGG and TOOK are fairly similar, with a little more spread at TOOK, and Prairie Lake is a little more enriched in 13C relative to the other two lakes.
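The unpacking and first boxplot can be sketched like this. The data frame here is a made-up stand-in for the real apl_plantExternalLabDataPerSample table, so only the pattern, not the values, matches the real download:

```r
# Stand-in for the downloaded named list (values are invented)
apchem <- list(
  apl_plantExternalLabDataPerSample = data.frame(
    siteID  = rep(c("PRLA", "SUGG", "TOOK"), each = 4),
    analyte = "d13C",
    analyteConcentration = c(-22, -23, -21, -22,
                             -28, -29, -27, -28,
                             -27, -26, -29, -25)
  )
)

# Write each table in the list out as its own object in the environment
list2env(apchem, envir = .GlobalEnv)

# Boxplot of the carbon-13 isotope ratio by site
boxplot(analyteConcentration ~ siteID,
        data = apl_plantExternalLabDataPerSample,
        subset = analyte == "d13C",
        xlab = "Site ID", ylab = "d13C (per mil)")
```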
So that's interesting. And probably the very next thing you're thinking is, okay, odds are good that's related to what species are found there. These kinds of isotope ratio numbers are loosely associated with what you'd expect from C3 and C4 plants; it's a little different in aquatic systems, but it works as a starting hypothesis. So, okay, let's look at what species are there. But if you remember from the variables file, the external lab data per sample table doesn't actually contain a species identification. The species identification, the scientificName field, is in the biomass table. So, in order to see what the distribution of carbon isotope ratios looks like across species, we have to do a join between those two tables to bring that information together. Let me just write a quick comment. We're going to define a new object, created by a merge between apl_biomass and apl_plantExternalLabDataPerSample, and we have to tell the merge function which columns to use to do the merge. If you go back to the variables file, you'll see there's a sampleID in apl_biomass, and there's also a sampleID in the plant external lab data. It's probably cheating a little for me to just tell you that that's definitely the right column to merge on. In general, if you're working with a new, unfamiliar observational data set and you see data in two different tables that you need to bring together, the data product user guide is your best guide to figuring out the appropriate joining variable. There's a section in each data product user guide called Data Relationships that walks through what the joining variables, the key variables, are in each of the tables.
So we're going to join on sampleID. We're also going to include namedLocation, domainID, and siteID. Those aren't actually necessary for the join; they're redundant with sampleID. I just like to include them because it keeps us from ending up with duplicates of those columns. Basically, if you don't include them in this statement, you end up with two copies of each, one from each table, because each table brings in a domainID, a siteID, and a namedLocation. So it can be a nice thing to do when you know there's information that's identical in both tables. All right, we run that merge. Something else I always find really useful when I do a merge is to look at this little environment area over here to make sure nothing looks too crazy. If you look, you can see the plant external lab data per sample table had 458 rows, the biomass table had 218 rows, and now our merged table has 438 rows. That's not definitive proof that everything is fine, but usually when a merge goes terribly wrong, one of two things happens: either you end up with zero rows, because something blew up, or you end up with a totally unrealistic number of rows, like whatever you'd get from 218 times 458. That's the other most common failure: the merge duplicates everything and you end up with a row for every combination of possible rows. That didn't happen here, which is a good sign. Now we should be able to plot 13C as a function of species, because we have them both in the same table. I'm going to make a boxplot again of analyte concentration, so that hasn't changed, but now as a function of scientificName. The data are that new apct object, the merged table we created, and we still need the same subsetting to analyte equals d13C, and to add the y-axis label.
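The join pattern, in miniature. The two tables below are toy stand-ins with invented sample IDs and values (and only siteID as the shared location column), so the row-count sanity check can be seen directly:

```r
# Toy stand-ins for the two tables (invented IDs and values)
apl_biomass <- data.frame(
  sampleID = c("s1", "s2", "s3"),
  siteID   = "PRLA",
  scientificName = c("species A", "species B", "species C")
)
apl_plantExternalLabDataPerSample <- data.frame(
  sampleID = c("s1", "s1", "s2", "s4"),
  siteID   = "PRLA",
  analyte  = c("d13C", "nitrogen", "d13C", "d13C"),
  analyteConcentration = c(-24.1, 2.3, -27.5, -26.0)
)

# Join on sampleID plus the shared siteID column, so siteID
# doesn't come through duplicated as siteID.x and siteID.y
apct <- merge(apl_biomass, apl_plantExternalLabDataPerSample,
              by = c("sampleID", "siteID"))

# Only sampleIDs present in both tables survive the default inner join:
# s1 matches twice (two analytes), s2 once, s3 and s4 not at all
nrow(apct)  # 3
```

A merge that has gone wrong usually shows up right here: zero rows, or something near the product of the two row counts.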
Let's see what that looks like. Okay, so just because of the figure size, R is omitting a number of the species names. So I'll use the modifications that are in the tutorial, which should help. This is just modifying the figure a little bit to change the direction of the names and make them smaller, so we can actually see what each of these are. Sorry, now I've covered up the code. So we can see that this is basically what we expected, and what my off-the-cuff hypothesis was: the plants that are relatively enriched in 13C are just a couple of species, and the rest of the species are down here, more depleted, like we would expect from your average C3 plant. Okay, so that's the basics of how to use the stacking functions in neonUtilities, and the downloading and very basic navigation of sensor data and observational data. The next thing we're going to move on to is using neonUtilities to download remote sensing data. Before I go to that, are there any questions about what we did so far, about how neonUtilities worked? Anybody having problems with the code that we haven't been able to address in the chat? Let's try downloading some remote sensing data. To do this, there are two functions in neonUtilities: byFileAOP and byTileAOP. And I should say, don't run those yet. Nothing bad will happen if you do; they just won't work properly with no inputs. I wanted to talk a little bit about the distinction between the two and why they both exist. byFileAOP will let you download everything from the remote sensing for a given data product, site, and year. That's useful if you want, say, all of the NDVI data or all of the camera imagery for a given site for a given flight. But like we've talked about several times, that can be a ton of data, just a really huge amount. And so you don't always want to do that.
As came up in the chat earlier, there are remote sensing data products that are mosaicked and then broken into tiles. Each tile for all of those products is one kilometer by one kilometer, which means that one tile is not an overwhelming amount of data. The byTileAOP function is there so that if you only need the data associated with one tile or a handful of tiles, you have a way to download just those. And actually, if you go through the download steps for AOP data on the data portal, it has that capability as well, to let you download just some of the data that's available for a given site and a given flight. That's especially useful if, say, you are using NEON data that were collected on the ground and you want to connect those data to the overflight. The NEON tower is just one little spot in a site, so maybe you just want the tile where the tower is, or maybe you just want the flux footprint; at most that's going to be a couple of tiles. Maybe you want everywhere vegetation was sampled; I'm guessing a little bit, but the maximum would probably be something like a dozen tiles. Whereas the full flight box is usually about 10 kilometers by 10 kilometers. So you can really cut down the amount of data you have to download if you just focus on the tiles you want. We might talk about that a little bit tomorrow when we talk about working with geolocation data, and they are definitely going to talk about it in the lidar breakout session, because that breakout session includes tying the lidar data to tree measurements on the ground.
The tutorial for that, for those of you who aren't attending that breakout session, can walk you through how to connect that ground sampling to the tiles from the remote sensing data. Okay, but what we're going to do here is just download a single tile that's predetermined in this tutorial. We're going to use byTileAOP to download a tile that we just came up with when we wrote the tutorial. Okay, so, same as loadByProduct, the first input we need to put into byTileAOP is the data product ID. In this case, what we're going to download is the ecosystem structure data product, which contains the canopy height model. This is an estimate of the height of the top of the canopy that's calculated from the lidar data. Its data product ID is DP3.30015.001, so that's what we're going to need as an input. We also need the site; we're going to use Wind River Experimental Forest again. One thing to know about byTileAOP: this is different from loadByProduct in that you can't put in multiple sites; the remote sensing downloads can only handle one site at a time. Instead of using a start and end date, we just put in a year; we're going to download 2017. And then to specify the tile, you put in an easting and a northing. In this case, the easting is 580,000 and the northing is 5,075,000. And then, in this case, we are downloading, but we are not reading directly into R, so we need an input for where we want this tile to be saved. That input is savepath. I'm just going to save this to my desktop, the same way I did with the PAR data. So apologies that we're back in the land where file path challenges are a possibility; put in a file path that's going to work for you. For those of you on Windows, just a warning about the way these data are saved.
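Putting those inputs together, a sketch of the call; the desktop save path here is just an example, so substitute a path that works on your machine.

```r
library(neonUtilities)

# Download one 1 km x 1 km tile of the canopy height model for WREF, 2017.
# The savepath below is a placeholder; pick somewhere short on your system.
byTileAOP(dpID = "DP3.30015.001",   # ecosystem structure (canopy height model)
          site = "WREF",            # Wind River Experimental Forest
          year = "2017",
          easting = 580000,
          northing = 5075000,
          savepath = "~/Desktop")
```
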
There's a very specific nested structure to the folders that gets pretty long, so if you can save this somewhere fairly close to the root directory, that's definitely going to help you out. Okay. So, just like loadByProduct, this tells me, okay, I'm going to download six files, they're going to be about four megabytes, do I want to go ahead with that? Yes. And here it goes, downloading. In this case, the output you get from this function actually tells you the exact file path to where it saved each of the files it downloaded. And if we go and look at that: here, it's created a folder that's just named by the data product ID. If we open that up, this is where you'll see what I was just talking about in terms of the very nested file structure. There's a folder for the year, then FullSite, the domain, the site, and then the level, L3, the discrete lidar, the canopy height model, and we finally get to the .tif file. So it's pretty deeply nested, and you can also see that from the file path here. We're getting an error that no tiles were found; yeah, my best guess is a typo, especially with all those zeros. I get it wrong pretty often. Let me know if that doesn't work. Okay, so at this point, if you remember from back at the beginning when we loaded packages, we loaded neonUtilities, but we also loaded the raster package. At this point we're actually going to move to using that raster package: a package that isn't specifically designed for NEON data, but for raster data in general. That's what we're going to use to load this remote sensing data into R. We're going to name this object CHM, for canopy height model. And there's a function in the raster package that is also called raster; that's what we're going to use. That is the function that can load the tile into R. So now what we need to type into that raster function is that extremely long file path. So, a couple of options.
You can try just copying and pasting this message from the function output, where the function told you where it saved the file. You can also use tab complete. Okay, go to my desktop, and then to the data product; basically I can just keep hitting tab and enter to get through all of these folders. When I get to here, where I actually have a choice, I want the L3 folder. And then, this is going to spill off the page, I want the canopy height model folder. And finally, one last entry, and I get to the .tif file. That can be an indicator of whether you're getting the file path right: if you don't eventually end up at a .tif, then you haven't found it. Okay, folks are getting lost in the chat. What we're doing here is just getting the file path to that .tif file we downloaded, that single tile. I just did that with tab complete. I can do the same thing another way, and hopefully this option might be a little bit easier: I can come here and just copy and paste, because one of the outputs that byTileAOP returns into the console is the location where it saved everything. So I should be able to just copy and paste that file path into where I need it. I copied that, I paste it, and that gets me to one step short of the .tif file, so at that point, if I put a slash and then hit tab, I should get back to the same file path. Okay, someone is getting "could not find function raster". That means the raster package didn't load correctly. Try running install.packages("raster") again, and then library(raster), and if either of those has an error, we'll have to look at it. Let me just run that. That is actually just complaining about my little graphics window here; those warning messages you're seeing on my screen are not actually about the raster function.
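For reference, a sketch of loading and plotting the tile. The path here is only an example of the nested structure: the domain and site folders and the file name on your machine will differ, so copy the real path from the byTileAOP console output.

```r
library(raster)

# Example path only -- the folders after the data product ID depend on your
# domain/site and tile coordinates; copy the real one from the byTileAOP output.
chm.path <- paste0("~/Desktop/DP3.30015.001/2017/FullSite/D16/2017_WREF_1/",
                   "L3/DiscreteLidar/CanopyHeightModelGtif/",
                   "NEON_D16_WREF_DP3_580000_5075000_CHM.tif")

CHM <- raster(chm.path)  # read the single tile into R as a RasterLayer
plot(CHM)                # raster's plot method draws the map and legend
```
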
Someone else is getting an error message that they can't create a RasterLayer object from this file. Probably that means there's something wrong with the file path; that's the most likely scenario, so give the file path another try. Okay, but if you have been able to read in and create the CHM object, we should now be able to make a little plot of canopy height in this tile. And actually, all we need to do is type plot(CHM), and the raster package will take care of the rest for us. So there we go. What we have here is an image of this particular tile where the color is an indication of canopy height: the greens are the tallest trees and these pale pinks are the shortest. We're at Wind River in southwest Washington state, so the tallest trees are over 60 meters, but there's also some clear cut you can see in this image where the trees are really short. I like this one as an example, because at an awful lot of sites, if you look at the canopy height model, it's just kind of, hey, there's a closed canopy and it's about 20 meters tall. That's not that exciting, but there's actually some variety here, with some very tall trees and some very short trees. Okay, so with our last few minutes, I want to show you a few tutorials that we're not going to go over together, but that are useful next steps you might take, or different directions you might go, after this download-and-explore tutorial. So if we go to the tutorials page, the first thing I want to show you is the API token tutorial. Like I mentioned, the download functions in neonUtilities are accessing data from the NEON API.
There is an option in the API to include a token, which is basically an identifier associated with your user account that says, this is the person who is accessing stuff from the API. It's not mandatory; everything we just did, we did without tokens. But it's useful both to us on the NEON end and to you. It's useful on the NEON end because it lets us get a better sense of how many users we have, by having their downloads associated with tokens. It gives us more information like, hey, this specific user downloaded these six data products together; is there some unity among those data products that's important for us to know about? There are all kinds of things we can learn from having that information about who's downloading what. And it's useful on your end because it will make your downloads go faster. Here's the way the API works: there is a rate limit on accessing it. If you're familiar with the tech industry, they occasionally run into things called denial-of-service attacks, where malicious hackers set up something to keep making requests over and over again to totally disable whatever the service is. We were running into situations where people were trying to download, say, all of the remote sensing data, and it was basically as if they were accidentally perpetrating a denial-of-service attack on NEON. The rate limit stops that from happening, because it throttles the number of requests that can be made at once. And if you have a token, you get a higher rate limit, so that will make your downloads faster. So, check this out.
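In practice the token just becomes one more argument to the download functions. A hedged sketch: the environment-variable name NEON_TOKEN is my own choice for this example (the tutorial describes several ways to store the token), and the data product and dates here are placeholders.

```r
library(neonUtilities)

# "NEON_TOKEN" is a made-up environment variable name; setting it once
# (e.g. in your .Renviron) keeps the very long token string out of scripts.
# The dpID and dates below are placeholders for whatever you're downloading.
par30 <- loadByProduct(dpID = "DP1.00024.001",   # PAR, as an example product
                       site = "WREF",
                       startdate = "2019-09",
                       enddate = "2019-11",
                       token = Sys.getenv("NEON_TOKEN"))
```
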
Check the tutorial out if you're up for doing that. Basically, all that really changes if you set up a token is that when you're running, say, loadByProduct, you add another input, token equals your token string. Because the token is extremely long, the tutorial explains some methods for saving it someplace accessible, so that you can pull it up every time you run the function without having to type in this super long thing. Okay, so that's one useful tutorial I wanted to point out. Another one: we do have a tutorial for using the neonUtilities package from Python. If you're a dedicated Python user who is just putting up with R for this workshop, this is a good option. There's a package in Python called rpy2 that lets you basically create an R environment within Python. This tutorial walks you through using that environment within Python, just as much as you need to access the data, and then reading the data into Python directly, so that from there you can proceed to do whatever you want in Python and only use neonUtilities for the specific features that only neonUtilities has. And then, finally, I know people have already signed up for the breakout sessions. For those of you who are signed up for the API breakout, we're going to get into the back-end details of, okay, neonUtilities uses the API to do these things: what is it actually doing, and what does it look like to access the API directly, instead of using something like neonUtilities as an intermediary? Even if you're not signed up for that breakout session, this tutorial is available.
If you're interested in that more nitty-gritty level, getting at the guts of what's happening in the API: that tutorial is based in R, but a lot of what matters in terms of how you use the NEON API is just constructing the API calls, which is applicable to any language. So there's a lot in this tutorial that I think is relevant even if you're not working in R; R just happens to be the language where it's applied in this tutorial.