Hello everyone. I hope you had a good lunch, and we are going to start our lab session. So let's start. This is the Creative Commons license slide, and now we move to our lab session. I'm going to briefly go over the overall arrangement. We are going to have four different tasks. One is a simple, one-factor statistical analysis. Another one is raw spectral processing. Then functional analysis, covering both targeted and untargeted data. Basically, our two hours are dedicated to these; I'm going to show the timeline. The last one, if someone is really advanced and wants to explore, is the complex metadata analysis. We don't have a demo for that last one because it's advanced, but we do have tutorials for it. So here's a breakdown of the four tasks. The first one will be the one-factor analysis, about 45 minutes: basically a 15-minute introduction, followed by a demo and half an hour of practice. Then we are going to focus on targeted data analysis, and then spectral processing of untargeted data. Almost every section starts with a demo followed by hands-on data analysis, so we are all going to be synchronized and do the same thing. The demo will be shared with everyone, and you're going to do hands-on practice with some questions. Okay. Before we start, let me briefly introduce the main framework of MetaboAnalyst, so you have some feeling for why things are done the way they are. MetaboAnalyst is designed to be scalable, and we separate raw data processing from the web-based interactive statistical analysis. Basically, raw data processing runs on a local cluster, the web application runs on the cloud server side, and visualization happens in the browser. And we deliberately offload some of the thinking to your brain.
So basically we want you to think, and we found that this intermediate level, not overwhelming, actually gets people learning and feels engaging. Overall, we use everything we can mobilize to reduce the computing burden and increase the learning. With MetaboAnalyst, we try to balance targeted and untargeted analysis. We talked about statistics this morning: version one was statistical analysis, and we have continued to enhance it through all the versions up to version five. The statistical analysis is mainly neutral; it just needs a data table. Versions two and three were basically targeted, focused on understanding functions and biomarkers. Versions four and five are mainly untargeted. We really want targeted and untargeted analysis to both be done in a coherent framework, and currently we are working on MS2. Overall, we also want annotation, whether targeted, untargeted, hybrid, or semi-annotated, to reach more or less the same conclusions using the same workflow. MetaboAnalyst is working very well; you can see, this is from last year, almost every day we have close to three to five thousand users. Many of them are repeat users, with a lot of jobs analyzed, so it is well maintained. Today we're going to use four modules: one is the raw data section, one is the single-factor statistical analysis, and then we'll move on to functional analysis. All of this will use data we generated yesterday. Sections three and four will be on untargeted data, so we'll use a new dataset. For each of these, you're going to have the link, either through GitHub or Slack; you should also see it in the slides. Okay, we're going to demo that. And this is what I mentioned about protocols: we have very detailed protocols covering about twelve modules, and here's the most recent Nature Protocols paper on the untargeted side.
And one with complex metadata. We are not going to demo it, but if you're interested, just follow the protocol, and if you have questions, we are more than happy to answer. Before you start, I would like to mention: do not open multiple tabs. If you do, your results will overwrite each other. The reason is that each browser has a session tied to your current state, and if you open a new tab, the work from one tab will override the other. Another reason not to multitask: focus on your current work. Most jobs will return within about a minute, so just wait a bit and you'll get a result. Be interactive, focus on your task, and don't open multiple tabs, or you'll lose track; you'll see much better results that way. Another resource I haven't mentioned is the OmicsForum. You can use Slack, which is more or less for this workshop, but if you post questions on the forum, they will be seen by everybody, and when we post the answer it benefits everybody. This is the OmicsForum; you can register with any of your emails, using your real name or a pseudonym, that's all fine. Okay, I'm done. Next, Jessica is going to give you a demo of the statistical analysis tasks. Okay, so we're going to do the one-factor statistical analysis. I think we're supposed to be done with this section by 3:45, and we want to try to analyze all three datasets from yesterday, so I'll try to keep the demo brief and then we'll see how far we get. Just so everybody's clear: when you go to MetaboAnalyst, this is the module we'll be using, Statistical Analysis [one factor]. At this point, I'm going to go directly into a demo.
I'm going to show the general single-factor workflow in MetaboAnalyst, and I'm just going to use the built-in example data right here. But when you do the lab, you're going to upload the data that we processed yesterday, so try to remember generally what I'm doing. Those protocols that Jeff linked are part of the pre-reading; if you're not sure about things like which buttons to push, those protocols have a lot of information on exactly what each method is and how it works. You can also ask us, the TAs, if you're not sure. In general, the defaults should be okay, except for the normalization, and I'll go through that. So I'm just going to open up the example data to show you what it looks like. Can you see the example data in Edmonton? I'll keep going unless someone tells me they can't see it. We always have our sample name in the first column, then our single one-factor categorical metadata in the second, and then all of the concentrations of each of the metabolites. Each column is a metabolite, and the values are in the cells. It can also be arranged the other way, with samples in columns and metabolites in rows; that's all fine, you just have to tell MetaboAnalyst what you're uploading, and you can see that option here. So I'm going to leave the first option selected and click Submit. Zhicheng just checked MetaboAnalyst, and I think we already have 100 people using it outside of our workshop right now, so if things are a little slower, that's probably why. If there were missing values in our data, this button right here would be clickable, and there would be a whole set of different methods for imputing the missing values.
This dataset doesn't have any, so it's not available, and we'll just click Proceed. Okay. Again, on this left panel, this is where you could go back and look at the missing values. Since we only have a few metabolites (it's targeted), by default we don't filter any out. But if you wanted to apply a filter, you would click this; it would take you to the data filter section if we had a thousand features or more, and you can also go back and filter your targeted data from there. If you want to exclude specific features or samples, you can click this Data Editor button. But those are advanced options that most people don't use, so we go directly to the normalization page. There are three different sections on the normalization page. First, we can normalize our samples; this is where we try to make the overall sample distributions comparable across the whole dataset. Then there's the data transformation; the most common one is a log transformation, which is usually necessary so that your features are roughly normally distributed. I almost always select normalization by median and log transformation. We can normalize and then look at the result. You can see here, before normalization, looking at a set of metabolites, all of these distributions are very right-skewed; after the log transformation, they're all approximately normally distributed. This is what you want to look for. It's also always a good idea to click View Result in case your data was already normalized before you uploaded it; then it would be very clear, because the "before normalization" plot would already show these nice box plots. We don't want to double-log-transform our data, because that's a little weird and harder to interpret. In the sample view, we like to see that all the distributions are roughly comparable to each other across the whole dataset. Okay.
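The median normalization and log transformation described above can be sketched in a few lines. This is a toy illustration with made-up numbers, not MetaboAnalyst's actual implementation:

```python
import numpy as np

# Toy "concentration table": rows = samples, columns = metabolites.
# Log-normal values mimic the right-skewed distributions typical of
# metabolite concentrations before transformation.
rng = np.random.default_rng(0)
data = rng.lognormal(mean=2.0, sigma=1.0, size=(6, 5))

# Sample normalization by median: divide each sample (row) by its median
# so the overall sample distributions become comparable.
row_medians = np.median(data, axis=1, keepdims=True)
normalized = data / row_medians

# Log transformation so the features become roughly normally distributed.
log_data = np.log10(normalized)
```

After median normalization, each sample's median is 1, so its log is 0, which is why the per-sample box plots line up after this step.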
In this case, I haven't auto-scaled, but you can. If you want to do PCA, it's usually a good idea to auto-scale. But if you want to preserve the actual concentration values that you worked so hard to quantify with your targeted assay, then auto-scaling loses that information in your results. So it's really a preference; neither is wrong, it just depends what you're doing and what you want to use the results for. So I'll normalize here and then proceed. Now we're at all of the statistical analysis tools. There are a lot here, so I don't have time to go through all of them, but the most common ones are the univariate statistical analyses like the t-test. It's also a good idea to perform some dimension reduction like PCA, so that you can see the overall distribution of samples, whether there are any outliers, and what the main patterns are. Then there are lots of other options for generating different heatmaps and network diagrams, all of the ones Jeff covered in the statistics module before. I usually start with PCA to get an overview of the data, then do the univariate analysis, and then follow that with some of the more complicated heatmaps and network diagrams. I'm not going to go into these right now because we have so many datasets to analyze; I'm going to hope that you can try this yourself. So, to go back to the slides quickly: when you upload your files, since you won't be using the example dataset, this is really the critical starting point. It'll be this first plain-text file option. You can leave all the defaults: Concentrations selected, samples in rows, and then choose the files that you downloaded. The MetaboAnalyst URL is here. All of the data from yesterday are on the course GitHub page; you can find that in the Slack channel in the pinned section, or if you already have it pulled up.
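What auto-scaling does, and why it matters for PCA, can be sketched like this. It's a toy example using the standard unit-variance scaling formula and PCA via SVD, not necessarily MetaboAnalyst's exact code path:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: 8 samples x 4 metabolites, with one metabolite on a much
# larger scale -- it would dominate an unscaled PCA.
data = rng.normal(size=(8, 4))
data[:, 0] *= 100.0

# Auto-scaling (unit-variance scaling): mean-center each column and divide
# by its standard deviation, so every metabolite contributes equally.
centered = data - data.mean(axis=0)
autoscaled = centered / centered.std(axis=0, ddof=1)

# PCA via singular value decomposition of the autoscaled matrix.
U, S, Vt = np.linalg.svd(autoscaled, full_matrices=False)
scores = U * S                     # sample coordinates for the scores plot
explained = S**2 / np.sum(S**2)    # fraction of variance per component
```

After auto-scaling, every metabolite has mean 0 and variance 1, so the PCA reflects correlation structure rather than raw concentration magnitude; that is exactly the information trade-off described above.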
There are three different datasets. I'd suggest you just choose a random one, because I'm doubtful that everyone will get through all three; if everyone chooses a random one, we should have every dataset covered. For each one, try to upload it, filter it, and process the data. You can try out different normalization methods and see how they affect the distributions like I showed, and you can run different methods and view the results afterwards. I suggest you perform some dimension reduction like PCA and at least one statistical analysis like the t-test, and make sure you download the results, because you can optionally use them in the next section when we do functional analysis. If you have experience with MetaboAnalyst, this is a pretty simple analysis, so you might get through all three; in that case, try to find which dataset has the most differentially abundant metabolites. And when I analyzed all the datasets yesterday, I found that one of them appears to have an outlier sample, so trying to find that is a good goal for the lab. I think everyone can start now; we will go around and help, and reconvene in half an hour. From the Montreal side, we'll just share some tips. We noticed that the GC-MS data have a lot of missing values. So when you upload, there's a missing value imputation (estimation) step; you can click the missing value option and use the default, which replaces missing values with the lowest detection limit, basically one fifth of the minimum. After that step, you will be able to visualize your data after normalization; I think when you have a lot of missing values, it probably causes some issues for visualization in the normalization step. That shouldn't change the PCA result. Okay, Edmonton and Montreal, I think it's time to move on to the next section, so I'll give everyone a second to stop what they're doing.
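The default imputation mentioned here, replacing each missing value with one fifth of the feature's minimum observed value as a detection-limit proxy, can be illustrated with a toy table (values are made up):

```python
import numpy as np

# Toy feature table with missing values (NaN), e.g. GC-MS peaks that fell
# below the detection limit. Rows = samples, columns = features.
data = np.array([
    [10.0, np.nan, 3.0],
    [12.0,  5.0,  np.nan],
    [ 8.0,  4.0,  2.0],
])

# Smallest observed value per feature serves as a detection-limit estimate;
# missing values are replaced with one fifth of it.
col_min = np.nanmin(data, axis=0)
fill = col_min / 5.0
imputed = np.where(np.isnan(data), fill, data)
```

Because different imputation rules make different assumptions about why values are missing, the choice can visibly shift samples in a later PCA, which is the "trick question" outlier discussed below.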
I was going to ask: I'm not sure how many people managed to analyze all of the datasets. Does anyone know which dataset had the most differentially abundant metabolites versus the least? Did you take notes on that? Okay, that one had the most. Yeah. So people here are saying that for them, the sheep dataset had the most, and I think the sleep apnea one had the least, and the LC-MS lung cancer one was in between. At least that's how it was when I analyzed them last night. That depends slightly on the normalization methods you use, and things can change a little, but I think the differences were pretty big, so it should be consistent. Yes, I am. Can they hear me? Is there a note on Slack that they can't hear me? Okay, that's good. Yeah. The question of which dataset had an outlier sample ended up being kind of a trick question. It had to do with the GC-MS data, the one with a lot of missing values. If you selected different missing value imputation methods, and you did or didn't auto-scale your data when you normalized it, you could get some outliers in the PCA. That's sort of normal when you have missing values, because different approaches make different assumptions about why the values are missing, and auto-scaling can really affect whether a few metabolites dominate the distribution of samples on the PCA scores plot. If we have more time at the end, we can go through it and I can show you exactly how to find that one outlier. It's not really a real outlier; it was created by the data analysis. That's why it's important to look at the PCA plot: it helps you see whether everything went properly. If you saw that outlier, you could go into more detail, look at the values, and make sure it's real and not just created by your processing method. The low one, okay. So now we're moving on to the functional analysis of targeted data.
Those are these two modules right here, enrichment analysis and pathway analysis, which Jeff described earlier in the functional lecture. I'm going to exit, go back, and quickly click through some of the modules. We're only spending 15 minutes here, so this is really more of an opportunity for you to play around with the different tools and ask the TAs questions about anything you're unclear on; we'll be monitoring Slack too. If you downloaded your statistical analysis results from last time, you can copy-paste the list of significant compounds and upload it to these modules, or you can upload the whole table through the concentration table upload. I'm just going to show you using the example data in MetaboAnalyst right now. Again, these are the two modules: enrichment analysis and pathway analysis. For both, you can either upload a list of features, which in our example data looks like this, or you can upload the whole table here; that would be the same file you were just working with. Again, you're going to leave the categorical classification, compound names (we're looking at metabolites, not lipids), and samples in rows. Then you should be able to choose the same tables and upload them. I'll show you what it looks like with the example data. You will have to do the processing again. There's an additional step here that shows how well the compounds map to the databases in MetaboAnalyst. This is important: if a compound name is different and doesn't have a match, it'll be excluded from the analysis, because we have no way of mapping it to the libraries inside the tool. Here you can see it's the same thing, and then Proceed. And here's where you can choose the different databases.
Right now, the simplest option is to use all the compounds in the selected library. If you're uploading a list of compounds, it's a good idea to upload a reference metabolome based on your analytical platform. Let me show you what that looks like: it means uploading a file that's just a list of all the metabolites you measured, the ones in the kit you used. When you're uploading the full concentration table, it doesn't matter so much, because the tool already knows everything you measured. But if you're uploading just a list of compounds, including that background can give you more accurate functional results. I'll show you that here. You can try to get that working with the example data or when you're uploading the lab data. But if you just want an idea of the libraries and a quick look at what the plots look like and how to interpret them, you can leave "use all compounds in the selected library" selected to get there faster. For your own research, though, I'd recommend figuring out the reference metabolome. All right. And these are some of the plots you can get. There's a lot here; there should be some help tips, and there's lots of information in the protocols about all of these. One thing I wanted to highlight is this Details view. It shows you the list of all the metabolites in a given pathway, and the ones in red are the ones that were in our dataset. So this really shows how much coverage your targeted panel had. If there are several thousand metabolites in KEGG, for example, and only 100 metabolites in your targeted kit, this is the kind of coverage we usually expect: we're not going to have all the pathway metabolites if we're only measuring around 100. This can give you an idea of how much to trust these pathway results.
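Why the reference metabolome (background) matters for list-based enrichment can be shown with a toy over-representation calculation. This sketch uses the standard hypergeometric test with invented numbers; the library sizes below are hypothetical, not actual KEGG counts:

```python
from math import comb

def hypergeom_pval(hits, pathway_size, sig_count, background_size):
    """P(X >= hits) for X ~ Hypergeometric(background, pathway, significant)."""
    total = comb(background_size, sig_count)
    p = 0.0
    for k in range(hits, min(pathway_size, sig_count) + 1):
        p += (comb(pathway_size, k)
              * comb(background_size - pathway_size, sig_count - k)) / total
    return p

# Hypothetical scenario: a kit measuring 100 metabolites (the reference
# metabolome), 20 of them significant, and a pathway with 10 measurable
# members of which 5 were significant.
p_with_kit_background = hypergeom_pval(5, 10, 20, 100)

# Using a library-wide background of, say, 3000 compounds instead makes the
# same overlap look far more extreme -- an inflated significance. This is
# why supplying the reference metabolome gives more accurate results.
p_with_full_background = hypergeom_pval(5, 10, 20, 3000)
```

The same 5-of-10 overlap yields a much smaller p-value against the large background, so skipping the reference metabolome tends to over-state enrichment for a small targeted panel.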
If a pathway has 200 metabolites but you only have three or four in your dataset, you should really take that into consideration when interpreting your significant results. It's less of a concern if you have a much larger dataset with more metabolites. And if you're used to gene expression analysis like transcriptomics, where you have a lot of very well-annotated features, it's a different story. Okay. For the lab, you can again take your concentration tables, upload them here, play around, and ask the TAs questions to get it working. Let me share some of the common questions from Montreal, also with Edmonton. For functional analysis, we have the enrichment analysis and the pathway analysis modules. Hello everyone, I'd just like to mention one thing, a common question from the Montreal site about the enrichment analysis library: the metabolite set libraries are mainly collected from human studies. So if your data are from the sheep, you can still use some of the pathway libraries, because mammalian metabolism is similar. But as David mentioned, an organism-specific library is more accurate. If you want to try, you can use the enrichment sets, especially the pathway sets at the top. If you're using the pathway analysis module, we actually have more specific organisms based on KEGG and SMPDB, so you'll have a longer list. If your organism is not there, choose the closest one. And over the long term, if you do have a pathway library for your organism, just email us and we can add it. Okay. The next module is mainly designed for untargeted global metabolomics. In this section, we are still going to use MetaboAnalyst.
You can use the same URL. For the data, use the built-in example; you don't need to download it. You can try the link just beside the option; I'll show you a little later. It's better to use the built-in example directly. For raw untargeted metabolomics data, we are going to demo raw spectral data uploading, then the integrity check, then data processing with the auto-optimized workflow, and some demo of the result visualization. Now let's go back to the website. This is MetaboAnalyst. For LC-MS spectral processing, we are going to use the top module, the first one, LC-MS Spectra Processing; simply click it to enter the module. Basically, we accept all the open-source raw spectral data formats, including mzML, mzXML, CDF, and mzData. Users need to zip the files first, create a metadata file for the raw spectral files, and upload them together with the zip file; they can click Select and choose everything together. In this demo, I'm going to use the built-in example. This is a very simple, quick demo of the data we are going to use. We have around 10 samples: four of them are CD, Crohn's disease (this is an IBD dataset), plus healthy controls and two quality control (QC) samples, and this is the metadata. This untargeted metabolomics dataset is very small; we are going to use it for a quick demo, for learning purposes. Here we can directly click Submit because we have already selected the first example. You can also download the data and use the Select button above to upload your own data, but that's not recommended: you will be queued, because we have a lot of users on our server and we have to queue all jobs. If you use the built-in example directly, you get the highest priority to run, because the server can tolerate hundreds of built-in example jobs at once.
So it's better to try the built-in example. If you're truly interested, or you have data of your own, you can try it after this workshop. This is the data integrity check page; it shows you whether the data are centroided or not, the file sizes, and the group information based on the metadata. If the data are not centroided, you can do online conversion from the button here. We can click Next directly. Here is the parameter panel. You can use the defaults; if you have real feeling and knowledge for the parameters, you can start from the defaults and change them manually. If you're not very experienced, you can use the auto-optimized option, and we will optimize the parameters for you. It takes a little longer than the defaults, but it's still fast. Here I'm going to demo with the defaults because that's faster; you can try default versus auto-optimized later and compare the difference, or just use auto-optimized directly for learning purposes. So we click Submit Job. It will ask you for confirmation; this is the last chance to change the parameters. Once you click Confirm, you cannot change them anymore. Now we have submitted the job; you can see it is pending, and it should show as running very quickly. I guess we have a lot of jobs right now, over 20 or 30 from our workshop, so it will take a few seconds. We can just wait and see that everything is working. Here we can see the logs from our job; we'll wait a few seconds, because the built-in example executes very fast. When you are running your own data, you have the opportunity to create a job URL link here: click this, a dialog pops up, and you can copy and save the URL. Raw data processing usually takes several hours, so you can come back later using this URL.
Here everything has finished successfully, so we can click Proceed. Now we see the result page. The first result is a PCA visualization of all the features we have detected; this is a 3D PCA. We can see the different samples in the score plot, and we can easily switch by clicking this button to see the different features detected from the example. We also have other figures: intensity statistics, the retention time correction figure, the total ion chromatogram (TIC), base peak ion chromatograms (BPI), and the aligned BPI, so we can compare the differences between them. Let's go back to the most important part, the PCA visualization. Here, these are the different features. We can double-click a node; when we click a node, that feature's values across the different samples are displayed as a box plot. This box plot is interactive: you can double-click a node in the box plot, and the corresponding extracted ion chromatogram (EIC) peaks will be plotted. You can do this interactively; the different peaks will be plotted in this figure. You can see we have two samples here, and you can add more, or if you want to regenerate, simply click this: it will be cleared, and you can regenerate whichever samples you want to show. Here is the background setting; you can change it to black to visualize everything more clearly. Then you can change back to the score plot to visualize the samples. This is the total ion chromatogram of a specific sample, showing the highest peaks in the different regions. Let's go to the bottom to see the result summary. In this panel, we show a very brief summary of the results: how many peaks, how many features, and some basic parameters. And here we also provide detailed information on the samples you uploaded and that were processed.
You can also click the button here to view the total ion chromatogram, the same as from the score plot. For the feature table, you can see the different features we detected, with their annotations and m/z values. The features are sorted according to the p-values based on the different groups. We also provide a putative ID based on the MS1 information: for example, the potential formula and the potential compound identification results. You can also click the View button to visualize a result similar to the loading plot above. You can click around to explore the results. That's the basic functionality; then you can go to the Download page to download the results, or start a new journey in other modules. So I think this is the time for you to explore. Hello everyone, we need to move to the next session. You don't need to download the results from this raw spectral processing; this was just a demo, and there are dedicated files to use in the next session on functional analysis of untargeted metabolomics. For your own work, when you upload data to raw spectral processing, beyond the format and sample-number requirements, you can also upload blanks and QCs to help improve the processing. And we do intend, in the future, to support MS2 data generated from the samples to help with annotation. But overall, this untargeted workflow is designed for high-throughput data analysis, statistical analysis, and putative annotation. If you really want confident compound identification, you always need internal standards; for high-quality, confident compound identification, the best option is a targeted approach. Okay. Do you want to start? Yes. Okay, now let me quickly answer these questions. What's the data format for uploading? It's a zipped file; it cannot be a bare mzML file.
mzML is a valid format, but it has to be zipped first. What's the maximum number of spectral files allowed? 200 samples. What's the maximum allowed single file size? Can someone give me an answer? No, I cannot hear anything, so I'll continue: the maximum size after zipping is 200 megabytes. Okay. In my slides there are a lot of workflows you can follow even after the workshop, so I'm going to skip them and go to the next session, the functional analysis of untargeted metabolomics demo. Here we're going to use this module, Functional Analysis, which is used for untargeted metabolomics. In this section, we're going to use example two or example three; both of them are real biological datasets. So you can still use the MetaboAnalyst website and use built-in example data two or three. In this section, we will first understand the format and options for data upload. Then we'll do functional analysis based on the global peaks, and then we can practice functional analysis based on patterns in a heatmap; these are the two tasks I'm going to pose. Let me go back to the website for a quick demo. Okay, this is our website; click here to start. We go to the Functional Analysis module. We have two options. The first option is the peak list profile: a user can simply upload a peak list. You can follow the format of the example datasets here; we can click one and see that the data format is very simple. We have four columns (or three): m/z; the second column can optionally be retention time; then a p-value column; and the last column, the t-score, is also optional. So users can easily follow this format for the peak list. Yes.
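The peak list format described above (m/z, optional retention time, p-value, optional t-score) can be illustrated with a small hypothetical file. The exact column header names used here are an assumption for illustration, not necessarily what MetaboAnalyst requires:

```python
import csv
import io

# Hypothetical four-column peak list in the format described above.
# Column names ("m.z", "r.t", "p.value", "t.score") are illustrative.
peak_list = """m.z,r.t,p.value,t.score
101.0712,54.2,0.0003,4.81
132.0443,120.8,0.4100,0.83
180.0634,98.5,0.0120,-2.65
"""

rows = list(csv.DictReader(io.StringIO(peak_list)))

# All features are uploaded, significant or not; the p-value cutoff is
# applied later inside the tool.
pvals = [float(r["p.value"]) for r in rows]
```

Note that the list deliberately contains non-significant peaks (p = 0.41); as discussed below, the complete feature list must be uploaded, not just the significant subset.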
So the question from Montreal is asking where the t-score and the p-value come from. They come from the t-test. For functional analysis, we are mainly focused on functional differences between two groups. We run a t-test on all features across the two groups and generate the p-values and t-scores listed in the table; that's where they come from. Another question is that many of the p-values are not significant, like 0.3, 0.9, or even 0.99: why are the p-values like that? This is an interesting question. For functional analysis, we upload the complete feature table or feature list; all features need to be included in the file. That means all features, whether significant or not, will be included in the uploaded data for processing. A question from Montreal asks about the p-values in the example: you can download the data, open the CSV file, and sort it to see which one is the minimum. It really depends on your data; you can have a lot of very small p-values or a lot of very large ones, and you can also set a different threshold later. Here I'm just trying the first IBD example dataset by clicking Submit directly; you can also download the data, or use the built-in example directly for learning purposes. Here we show the data integrity check results: how many peaks were included, what the ion mode is, and how many columns are in your data file; it should be recognized automatically. Click Proceed, and it jumps to the parameter page. Here you can set the different parameters for your processing. The first one is the p-value cutoff. As mentioned, your data can have a range of p-values, and you can set a hard cutoff for your dataset.
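Where the per-feature t-scores come from can be sketched with a toy two-group table. The intensities are made up, and this uses Welch's t statistic, one common variant of the t-test:

```python
import numpy as np

# Toy peak intensity table: 3 samples per group x 3 features (m/z values).
mz = np.array([101.0712, 132.0443, 180.0634])
group_a = np.array([[10.1, 5.0, 7.2],
                    [ 9.8, 5.2, 7.0],
                    [10.0, 4.9, 7.1]])
group_b = np.array([[15.2, 5.1, 7.3],   # feature 0 is clearly shifted
                    [14.9, 5.0, 7.1],
                    [15.0, 5.2, 7.0]])

def welch_t(a, b):
    """Welch's t statistic per column (feature)."""
    na, nb = len(a), len(b)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(va / na + vb / nb)

t_scores = welch_t(group_a, group_b)
# p-values would then be obtained from the t distribution; every feature
# gets one (t, p) pair, significant or not, which is exactly what fills
# the uploaded peak list.
```

The shifted feature (m/z 101.0712) gets by far the largest |t|, while the unchanged features get small t-scores and correspondingly large p-values, which is why most entries in a complete feature list are non-significant.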
For example, if there are too many small p-values, you may want to set the cutoff to a very small value; if all the p-values are large, over 0.05 or so, you can set a relatively large cutoff. It really depends on your data. By default, MetaboAnalyst analyzes your data and sets the cutoff based on the top 10 peaks, so only the top 10 peaks pass the p-value threshold. Here you can use GSEA together with mummichog, or mummichog only. Both algorithms have their advantages and disadvantages; briefly, GSEA is more sensitive to subtle but consistent changes across peaks. Here the visualization is the scatter plot only, because you have uploaded a peak list. You can also select a different knowledge base depending on your biological context. When everything is set (I'm just keeping the defaults) you can click Proceed, and the result is shown as a scatter plot. You can click a node to see the potential compound hits in that pathway. All results are listed in the table; you can download the tables and compound hits, and also explore the results in the network view. This is the task you are going to do in the coming 15 minutes. Now I'm going back to the home page to show you another function in this module: the peak intensity table. Here we allow users to upload a complete table, in the same format used for uploads in the Statistical Analysis module. We can use the second or third example. I want to show you a little bit about the format. We can download the table and open it with Excel or something similar on your computer. You can see the different samples listed in the columns and the different features listed in the rows, in the first column of each row. Here I have to emphasize: the feature IDs are formatted as m/z, then a double underscore, then the retention time.
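The "top 10 peaks" default can be sketched as follows: rank all the p-values and use the value at the 10th most significant peak as the cutoff, so roughly the top 10 features pass. This is an illustrative interpretation of the rule described above, not necessarily the exact formula MetaboAnalyst applies.

```python
def default_cutoff(p_values, top_n=10):
    """Pick the p-value of the top_n-th most significant peak as
    the default cutoff (illustrative; the exact rule may differ)."""
    ranked = sorted(p_values)
    return ranked[min(top_n, len(ranked)) - 1]

# Made-up p-values; with top_n=5 the cutoff is the 5th smallest.
pvals = [0.5, 0.0001, 0.02, 0.3, 0.004, 0.9,
         0.06, 0.01, 0.2, 0.7, 0.0005, 0.15]
cutoff = default_cutoff(pvals, top_n=5)
print(cutoff)
```

If your data has hundreds of tiny p-values, this default keeps the cutoff stringent; if it has none, the default relaxes accordingly, which is why it adapts to each data set.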
We have to format the table's features in this way and upload the complete table directly into this module. Here I'm going to use the third example, malaria. It is relatively simple and is also used in our Nature Protocols paper, so you can try this example and compare your results. We click Submit and the data is submitted directly. This is the data integrity check again, showing the general information and how many missing values and missing features have been detected. We click Proceed and it shows the data filtering step; you can filter based on your needs. We click Proceed directly. Here I would like to do the log transformation only. We can view the result; the distribution is already nicely shaped, so let's click Proceed directly. Here we have two options. The first is the scatter plot, the same as what I showed before. The second is the heatmap, because here we uploaded a complete table: all features in all samples are included, so we have more information than with the simple feature list. We can use the heatmap to show something different from the previous option. I click Proceed directly, and the features are displayed in the heatmap. You can use this panel to do a lot of processing. Here I want to show you how to do the functional analysis based on patterns. First I do the feature clustering, and we can clearly see the features have been clustered based on their intensities. We can select a certain part of the heatmap; here we just grab this region. We can also combine different patterns. For example, I can select a part from this region, and the part I have selected goes to the lower panel. Then I can select another pattern: if there are some features whose pattern I want to keep, I just use the mouse to click and select that part, and it is added below.
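Since the row-name format matters for this module, here is a tiny sketch of parsing a feature ID of the form "m/z, double underscore, retention time" back into its two numbers. The example ID is made up for illustration.

```python
def parse_feature_id(fid):
    """Split an 'mz__rt' feature ID (double underscore separator)
    into (m/z, retention time) as floats."""
    mz, rt = fid.split("__")
    return float(mz), float(rt)

# A hypothetical feature ID in the required format:
mz, rt = parse_feature_id("132.1019__60.0")
print(mz, rt)
```

If your upstream software exports feature names in a different shape (e.g. "mz@rt" or "mz/rt"), a one-line rename like this is usually enough to bring the table into the expected format before uploading.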
And we can use this control to bring the selection into the focus view. So this is the pattern I have selected, and I will run the functional analysis only for this pattern. Click here, then click Submit, and the functional analysis based on this pattern will be executed. The functions found for this pattern are reported in this list, and you can click around to see the results. So that is a very quick demo, and you have around 15 or 14 minutes to practice. I'm going to go back to the slides. Here are the learning questions. You can always ask on Slack or raise your hand, and we can come around to answer your questions. For this session, you're going to use a built-in example to explore the functions. Okay. You can also download the example to your local machine and upload it again; it's the same thing. A lot of these examples can be generated directly from the statistical analysis of the LC-MS spectral processing if we do the t-test. For interpretation, the best case is two groups, where the t-test captures the change. If you have multiple groups, the features are simply ranked by p-values before the enrichment analysis. The thing is that interpretation will be harder, because with multiple groups you don't know which group changed the most or how to interpret it. So if you upload data like that, interpretation will be harder, but it's up to you. So hello, everyone, we are about to finish the lab. If you have not finished, we still have a half-hour break, so you're welcome to continue. If you have questions, just ask on Slack or post on the forum. We also have the tutorials and Nature Protocols papers, so there's a lot to do, and if you go home and want to continue, please feel free to do so. We're going to take a half-hour break and then start the last module on multi-omics.