Well, morning everyone. We made it to the final day of the conference. Good to see you all. I hope you have enjoyed your time here in Brisbane at QUT. So bear with me, I had a double espresso just 10 minutes ago, so hopefully that's working for me. I'm really keen to share with you a collaboration that we had with the Department of Agriculture here in Australia, and really, thanks to Galaxy, because it has been one of the platforms that has enabled this work to make an impact for horticulture industries. Horticultural crops in Australia certainly play a role in our economy. It has been estimated that the value of the sector was around $14 billion in the last calendar year, and it's anticipated that that value will increase by another $4 billion by next year. As you can see, Australia exports to different regions around the world, and these are high-value markets. From the point of view of our economy, we're really keen to protect access to those markets. So how can we do this? Certainly some of you have experience of it, and thank you for coming all the way down to Australia. It can be a bit of a journey to come down here, but that journey is also applicable to exotic pests. We do not have many of these pests in Australia, and we're really keen to keep it that way. With more trade happening, and more movement of people and resources, we're really keen to continue to make sure that we safeguard the industries. Some of you may have experienced, coming in, that you need to be across quarantine regulations, which for some of you may have been a bit of a culture shock. But yes, we are really keen to have those processes in place. Now imagine if someone told you, "Sorry mate, we misplaced your luggage. Actually, it turns out your luggage needs to be screened for three years, and we'll get back to you in three years." You probably would not be very pleased with that.
And that is actually what is happening with our industries currently. Any new germplasm that they bring to Australia spends a really long time in quarantine testing; eventually it gets tested and released once it's safe, and then they can start their businesses. As you can see, as a business owner that's not really appealing, because you cannot keep your market share, either domestically or internationally. So we really need to make our industries more flexible, more agile, to respond to market opportunities. Why is this happening? When we started this project, diagnostics relied mainly on molecular assays, serological assays, and biological indexing, which can take a really long time to perform. In that sense, we need an alternative that can be more scalable and can also keep pace with new regulations. We have 217 viruses which are of biosecurity concern in Australia, and we need to make sure that we look at all of these. There's also the challenge that, if we continue to rely on existing processes and technologies, there is less and less appetite for new students to go and acquire those skills. There's a decline in expertise, for example in biological indexing and shoot-tip grafting, so if we want to be future-proof, we really need to start to rely on alternative solutions. We engaged with the Department of Agriculture, listened to their pain points, and heard how challenging it is for them to cope with updates throughout the year on what is being regulated, and how they need to respond to these regulations. As you can see, it quickly became clear that this wasn't something they could maintain and sustain. So we decided we needed an easier strategy: can we tap into a solution that can be sustainable? We quickly realized that we can leverage the host plant immune system. Plants have done all the work for us.
Essentially, we did not need to do anything ourselves, because throughout evolution plants have evolved to respond to infecting viruses. They rely on a small RNA pathway called RNA interference, and there are specific enzymes that respond to viruses. One of the really nice features of these enzymes is that they can recognize viral molecules and fragment them into quite specific sizes, so when we build our analysis strategies we can rely on these specific products that we expect to come from the response to viruses. So, briefly, I would like to share with you what we've been working on with the Department of Agriculture over the last four years or so. Certainly COVID has been part of this journey, so I'll share that with you as well. We were really keen to look at the various HTS technologies; there are many flavors of HTS technologies being used. We had initially identified, in the proof of concept, that the small RNA pathway might be the way to go for us. This partnership gave us the opportunity to step back, look at the bigger picture, and ask whether this is the best way forward, because different agencies around the world use different approaches. So we were really keen to evaluate the technology, then run large-scale trials to make sure the processes work, and then I would like to share a couple of technical lessons that we learned along the way, and finish up with the policy adoption. Small RNA-seq — I don't really need to dive too deeply into it. I think it's a fantastic proxy when we are keen to know whether a virus is actively replicating within the host; it gives us the possibility to assess and measure that activity. But one of the drawbacks of relying on this strategy is that some viruses have evolved strategies to suppress RNA interference in the host.
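The size-specific products mentioned above are the basis of the analysis: virus-derived small interfering RNAs produced by the plant's Dicer-like enzymes fall in a narrow size range (typically 21-22 nt). A minimal sketch of the idea — this is illustrative only, not the project's actual pipeline code:

```python
# Sketch: retain only reads whose length matches the expected sizes of
# virus-derived small interfering RNAs (21-22 nt Dicer products).
# Illustrative only; not the actual VirReport implementation.

def filter_virus_sized_reads(reads, min_len=21, max_len=22):
    """Keep reads in the size range typical of plant antiviral siRNAs."""
    return [r for r in reads if min_len <= len(r) <= max_len]

reads = [
    "ACGT" * 5 + "A",   # 21 nt -> kept
    "ACGT" * 5 + "AC",  # 22 nt -> kept
    "ACGT" * 4,         # 16 nt -> discarded (likely degradation product)
    "ACGT" * 8,         # 32 nt -> discarded (not a Dicer product)
]
print(filter_virus_sized_reads(reads))
```

Restricting the analysis to this size class enriches for genuine antiviral RNAi products over random RNA degradation fragments.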
Now, we don't see that this suppression is 100% effective in removing that activity, but it is a consideration; also, some viruses can be dormant and not really replicating. So relying on alternative approaches and technologies gives us the possibility to assess whether we should rely on small RNA-seq, or perhaps look at an alternative strategy — in this case, ribosomal-depleted total RNA-seq — which I think is a sensible approach. We started with that initial question: which technology should we take forward? We relied on a positive control set of plants for which we knew the viruses, plus something called viroids. Viroids are really small circular RNAs present in plants; they can range from around 200 nucleotides to 600 bases, and they can also cause disease phenotypes in plants. So we were really keen to look at plants that were infected not only with viruses but also with these small viroid molecules. As we started this journey, we wanted to continue to work with the initial partner — an overseas partner we had worked with to develop the proof-of-concept strategy — but, looking at implementation and adoption in Australia, it was not really sensible for us to think we could move information overseas. We needed a local solution that could be utilized if we were to adopt this technology. So we started that conversation and began to work with a local provider. We started looking at the detection of these viruses using the two approaches — small RNA-seq and ribosomal-depleted RNA-seq — and soon we observed some differences from what is known from post-entry quarantine testing and legacy information on this selection of positive control plants. On the small RNA side, we were missing a virus called citrus tristeza virus (CTV). This is a very interesting virus in Australia.
About nine genotypes of CTV are present in the world, and we do have some of these genotypes in Australia, which are endemic, but some genotypes of CTV are exotic. So we do test for CTV to prevent the entry of new strains, but small RNA-seq was clearly not detecting any signal there. When we looked at the alternative, ribosomal depletion, we did observe a minimal signal, but it wasn't a strong enough signal to suggest that the virus was present. When we compared with the molecular assays from post-entry quarantine, the sample that we sent for sequencing — the first one — was certainly missing CTV; we could not detect a signal, and additional extracts from that particular plant also missed it. So yes, it was certainly being missed in that plant. And then, with ribosomal depletion, we did miss hop stunt viroid, which is another pest of interest, yet when we looked at the molecular assay data we could detect it. So, with the particular strategy that we utilized with the service provider, we were not detecting that pest. The lesson that we quickly came to realize when working with these technologies is that when we utilize public data, we don't necessarily keep the metadata about how the data was generated. Sometimes, when we generate data, we aim for low-cost data generation options, but that can impact what you are detecting in your samples. And something we learned is that there can be a phenomenon called index hopping, or cross-sample contamination: when you have a strong signal in one of your samples, you can detect that same strong signal in other multiplexed samples in that experiment, and you can get a false positive. So we really need to be mindful of that.
And certainly, we did detect these two signals with small RNA-seq in these control plants, but as I suspected, we could not really validate them. When we looked at ribosomal depletion with the initial strategy we used, it was a bit of a daunting picture, because a large fraction of the samples we were testing were reporting these false-positive contamination events. From the point of view of quarantine agencies, when we report what we found, they need to go and inspect, assess, and validate it, so as you can envisage, following up this erroneous reporting can be quite a daunting overhead in resource allocation. We were really keen to improve this, so we started to partner with a local service provider, which gave us the opportunity to test a strategy called unique dual indexing that can minimize this false-positive reporting. This was roughly three years ago, and we started with that option. The good news for us is that we could detect all the pests that we expected to detect with the small RNA-seq strategy. We also detected everything that was expected with the ribosomal depletion RNA-seq strategy, including the hop stunt viroid that we were initially missing. And when we looked at cross-sample contamination, we saw a less daunting picture: there was less cross-sample contamination with ribosomal depletion, and we did not observe it with small RNA-seq. But we then realized that was because of the instrument we were using back then — NextSeq instruments are less prone to this cross-sample contamination, but if you use NovaSeq instruments you will start to see this effect, so you need to be mindful of the instruments that you utilize. At that point, we thought it sensible to go ahead with small RNA-seq as the strategy going forward, to start larger-scale testing.
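The cross-sample contamination pattern described above has a simple signature: a virus with a very strong signal in one sample shows up at trace levels in other samples on the same run. A minimal sketch of how such suspect detections might be flagged — the threshold here is hypothetical, not the project's validated cutoff:

```python
# Sketch (hypothetical threshold): flag likely index-hopping artifacts.
# If one sample on a run has a very strong signal for a virus, trace
# signal for the same virus in co-multiplexed samples may be
# cross-sample contamination rather than a true detection.

def flag_index_hopping(counts, hop_fraction=0.001):
    """counts: {sample_id: reads mapped to one virus on one run}.
    Returns samples whose signal is nonzero but below hop_fraction
    of the strongest sample's signal."""
    peak = max(counts.values())
    return sorted(s for s, c in counts.items()
                  if 0 < c < peak * hop_fraction)

run = {"S1": 500_000, "S2": 120, "S3": 0, "S4": 800}
print(flag_index_hopping(run))  # S2 sits below 0.1% of the S1 peak
```

Flagged samples would then be candidates for follow-up verification rather than being reported directly as positives.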
To enable this testing, we implemented two analytical workflows. Initially we implemented a prototype workflow using Nextflow, and as we learned more lessons throughout the processing of the samples, we optimized the workflow. This is now published and accessible to anyone; it is called VirReport. That was our initial prototyping and optimization, but simultaneously we also utilized Galaxy, and Galaxy Australia has been a really fantastic resource to work with, because most of the plant molecular biologists at post-entry quarantine don't have data analytics expertise. For them, it was really valuable that we provided this Galaxy experience and interface, so that they can see how the data gets processed, and they have visibility of all the intermediate information that they would require to analyze and make sense of what has been detected. A resource that we quite rapidly found important to develop was a curated database for viruses — and this was highlighted a couple of days ago: we need a uniform approach to dealing with public information. It's the same for us. When we work with public resources of viral information, we defined a strategy to harmonize the taxonomic information of the viruses, as a way for us to make sense of it and take a uniform approach to harmonizing descriptors. Additionally, we go and look at the taxonomic information assigned by the International Committee on Taxonomy of Viruses (ICTV), so that whatever we harmonize is compatible with the experts. We also define representative sequences for highly related genomes, because when you look at public data, as you can anticipate, there can be minor differences between genomic sequences.
That can make it a challenge to make sense of what is detected, consistently, so we decided to remove the ambiguity in terms of what we are detecting and reporting. We decided to define a sensible representative sequence for each set of highly related genomic sequences, and we use those as references for diagnostics. The benefit is that it results in a very light database that can be used within Galaxy and is very efficient for running these processes. With those resources in place, at different stages of development, we started the first season of testing, and we secured access to 188 samples across different commodities. We started the first set of processing — and COVID was starting to become part of our lives. Initially we didn't feel the impact, but soon after, it started to affect the supply chain and our ability to continue this work, as resources were being redirected to COVID testing. So we decided we needed to build resilience: if we want to adopt this technology, we need to define strategies that build a more sustainable approach in Australia — I'll get back to that in a moment. We needed to go back to some of our overseas partnerships to continue to progress this activity. And we learned a few lessons along the way. The most valuable lessons were where we were not detecting a pest that was detected through standard post-entry quarantine protocols. We had that first-hand knowledge, so we could optimize our analytical pipeline to reach 100% agreement with the post-entry quarantine protocols, and that was a really valuable experience for us. Another challenge that you quickly come to realize when dealing with these biological samples is that viral sequences can be integrated in host plant genomes.
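Coming back to the representative-sequence idea above: one simple way to collapse highly related genomes into a light reference database is greedy identity clustering. The sketch below uses a naive positional identity for brevity — a real pipeline would use alignment-based identity (e.g. via CD-HIT or MMseqs2), and the 95% threshold is illustrative, not the project's chosen value:

```python
# Sketch: greedy clustering to pick one representative per group of
# highly similar genomes. Identity here is naive (position-by-position);
# real pipelines use alignment-based clustering tools.

def identity(a, b):
    """Fraction of matching positions, penalizing length differences."""
    if not a or not b:
        return 0.0
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def pick_representatives(seqs, threshold=0.95):
    """seqs: [(name, sequence)]. A sequence joins an existing cluster if
    its identity to that cluster's representative reaches the threshold;
    otherwise it founds a new cluster."""
    reps = []
    for name, seq in seqs:
        if all(identity(seq, rep_seq) < threshold for _, rep_seq in reps):
            reps.append((name, seq))
    return [name for name, _ in reps]

genomes = [
    ("isolate_A", "ACGTACGTACGTACGTACGT"),
    ("isolate_B", "ACGTACGTACGTACGTACGA"),  # 95% identical to A -> merged
    ("isolate_C", "TTTTGGGGCCCCAAAATTTT"),  # distinct -> new representative
]
print(pick_representatives(genomes))
```

Only the representatives end up in the diagnostic database, which keeps it small enough to run efficiently inside Galaxy while still covering the diversity of each virus group.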
One of those case studies for us was working with raspberries and a particular virus called rubus yellow net virus. We reached out to Driscoll's, because they provide germplasm for raspberries, and they were trying to get a sense of how these viruses are being integrated into the germplasm they provide to industries across the globe. So we got access to some of their early insights. Of interest for us is the one at the top: an episomal strain, which is of biosecurity concern, and so far we have not seen any positive detections for that episomal strain. We were also able to detect new-to-science viruses, and we identified a new potyvirus. We recognized it by its very long polyprotein — a bit more than 3,000 amino acids — that is processed into a range of mature proteins sustaining the function of this particular virus. It was closely related to sorghum mosaic virus and also sugarcane mosaic virus, and we tentatively named it miscanthus sinensis mosaic-like virus, for review by the ICTV. I mentioned that COVID was impacting our delivery of milestones within this project, so we were really keen to partner with more service providers in Australia. We invited them to undertake at least two different strategies — ones they might already have within their R&D pipelines to deploy as new services to the community — so that we could test and assess which ones might be suitable; ideally, we'd have some redundancy there. Long story short, what we learned is that the QIAGEN miRNA library kit, with unique molecular indices and unique dual indexes, was the strategy that prevented the cross-sample contamination we had observed when we moved to higher-throughput instruments.
So we do recommend utilizing this strategy; having said that, there are still opportunities to improve, and I'll get back to that in a moment. We continued with our second season of testing. As we expected, most of the germplasm that gets imported to Australia is quite clean: we detected regulated viruses of biosecurity concern to Australia in just around 8.7% of the imported plants. It is important that we have those detections, and in addition, depending on the industry, they are interested to know which other endemic viruses — ones we already have in Australia — may be of interest. So at that time we were reporting some of these viruses of interest for specific industries, to provide the information they need to make an informed decision on whether they would like to build a business portfolio around new germplasm that may carry an endemic virus. As part of this we also detected another new-to-science virus, and I think there are opportunities to continue to build these resources and make them available to the community. It looked like a potyvirus, but it turned out to be a potexvirus instead, and this is at the moment under review for publication. Now, a couple of lessons that I would like to share with you in terms of looking under the hood. First, when deploying and implementing end-to-end analytical workflows, it was really informative for us to understand the source of the genetic information we were collecting, because that allowed us to improve the standard operating procedures, from sampling and collection of RNA through to library preparation. We partnered with the QIAGEN R&D team to implement strategies to minimize the collection of non-informative data.
We use a ribosomal depletion strategy for the small RNA-seq data, which is not typical — ribosomal depletion is usually applied to longer RNA-seq experiments. It has been working well for us to deplete the long ribosomal RNA fragments that we see in these libraries, but it is still not very effective at removing smaller fragments of ribosomal RNAs, which, as you can see, are still present in these libraries. So we're really keen to continue to work with them to define strategies to remove this non-informative material — ribosomal RNAs, chloroplast RNAs, mitochondrial RNAs — which is not really informative from the point of view of diagnostics. And we are currently looking at implementing some new R&D strategies to try to specifically pull down small RNAs that are derived from viruses. Another aspect we thought about early on is what controls we can use in these experiments to assess cross-sample contamination. To address that question, we use an alien control — the miscanthus sample that I mentioned earlier. That virus is highly expressed, so it's a really good candidate, because the signal from the virus is quite strong and can be detected in other samples if there is an index hopping event. We were also interested in what strategies we can use when we have a negative virus detection, and we decided to tap into a synthetic microRNA — in this case the C. elegans miR-39, which has no sequence similarity to any sequenced plants. We use this as our control to measure the sensitivity of detection in those plants where the virus detection signal is negative. So, policy. It's been a bit of a journey for us, and the partnership with the Department of Agriculture has been fantastic.
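As an aside on the spike-in control just described: the logic is that a negative virus result is only trustworthy if the synthetic microRNA spiked into the sample was itself recovered at sufficient depth. A minimal sketch — the read-count threshold is hypothetical, and exact-match counting stands in for real alignment:

```python
# Sketch: validate a negative virus result using a synthetic spike-in
# (cel-miR-39, which has no counterpart in plant genomes). The minimum
# read threshold here is illustrative, not the project's adopted value.

# Mature C. elegans miR-39 sequence (DNA alphabet).
CEL_MIR_39 = "TCACCGGGTGTAAATCAGCTTG"

def library_is_sensitive(reads, min_spike_reads=10):
    """True if the spike-in was recovered deeply enough for a negative
    virus detection in this library to be considered reliable."""
    spike_count = sum(1 for r in reads if r == CEL_MIR_39)
    return spike_count >= min_spike_reads

good_library = [CEL_MIR_39] * 25 + ["ACGTACGTACGTACGTACGTA"] * 1000
poor_library = [CEL_MIR_39] * 2 + ["ACGTACGTACGTACGTACGTA"] * 1000
print(library_is_sensitive(good_library))  # True: negative result usable
print(library_is_sensitive(poor_library))  # False: library too insensitive
```

If the spike-in is missing or under-represented, the library preparation or sequencing depth was inadequate, and "no virus detected" cannot be reported with confidence.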
One of the blessings of working with the department is that they are really keen to build relationships. The initial trigger of this activity was when Mark Whattam, back in 2012, said: this is our pain point — viruses; we need better approaches to detect these viruses. We were involved in that process and received Plant Biosecurity CRC funding to develop the initial proof-of-concept approach, and we deployed the solution in something called Yabi, which stands for "yet another bioinformatics interface". Back then we thought it sensible to use that platform because it was part of the platform support at our host institution, but we later realized that we needed to build resources that are more accessible to the community — hence we now use other platforms to provide these solutions and access. The journey I'm sharing with you today is the recent one: we received funding from horticultural industries to look into implementing this technology for routine testing at quarantine agencies, and also at how we can upskill the team — the molecular biologists and plant pathologists within the quarantine agency in the Department of Agriculture — so they have the expertise to adopt and utilize the technology for routine work. I'm glad to report that in December last year this technology was formally adopted for the first set of commodities — prunus, rubus, and strawberries — in addition to the ornamental grasses that were adopted during the proof-of-concept phase. We are now starting to roll these out as a routine service at the department this year, and the plan is to eventually expand to other commodities.
All this is only possible because of the partnership, the relationships, and the regular engagement: we had monthly meetings throughout the last four years with the department, and it has been fantastic to have champions — Mark Whattam, Julie Pattemore, Candace Elliott, and also Greg Murphy there in the picture, and Adrian Dinsdale. All of them have played a major role in engaging different teams within the department — policy and regulatory teams, operations teams, IT teams — so that we could all come to the table and work towards adopting this technology. Without them, I don't think this would be possible. And then certainly Galaxy — Galaxy has been a really instrumental interface and platform, and I would like to acknowledge all of you who contribute to keeping this resource available, because it has been a platform for us to engage and communicate with the staff within the quarantine agency, so that they have visibility of how the data is generated and how to adopt these strategies and technologies. We implemented training material — user guides and videos — that allow them to use this technology. Now we're working with the IT team to implement a portal — an in-house portal, not available to external stakeholders — to enable them internally to run operations and these diagnostics for the commodities currently adopted for HTS testing. In terms of what they need to do at the Department of Agriculture, we really wanted to streamline and minimize their hands-on involvement. They were excited and passionate to learn new skills, but we know how busy they are, so we wanted to minimize their hands-on involvement in the whole process. All they need to do now is basically collect the samples, and everything else is automated.
The samples are sent to a service provider, and the service provider sends us an email with a JSON file containing the metadata of the samples, which triggers an automatic ingestion of the data. That in turn automatically triggers the processing of the samples: we send the data to Galaxy Australia for processing, fetch back some information from Galaxy Australia for further processing in the cloud, and once that is processed we go back to Galaxy Australia and run a second set of processing to then generate the final report. In addition to that, we also have the Nextflow option. It's important to build redundancy, and we have both possibilities, which I think is sensible. The lesson is: you need to build as much resilience and redundancy as you can, so that operations are not impacted — that's why we decided to provide both to our partners. So what are the benefits for post-entry quarantine and for horticulture industries? We have certainly reduced the number of assays they now need to run for the adopted commodities. Currently, HTS is a first-pass screening that informs which molecular assays need to be run on which individual samples, so it is more efficient in terms of resource allocation to clear those plants through quarantine testing. It also provides quality metrics to inform their decision-making on these imported plants. The other benefit, which I think is really important, is that a partnership with academic institutions is not only a research collaboration — we can also transfer knowledge and expertise, so that they have that capability in house. And for industries, we see the benefit in cost savings: it can be quite an overhead to import this germplasm and have it quarantine-tested.
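Going back to the automated ingestion step for a moment: the trigger is a JSON manifest from the service provider describing the run and its samples. A minimal sketch of that kind of metadata-driven hand-off — the field names and file names here are hypothetical, not the actual provider schema:

```python
# Sketch: parse a (hypothetical) provider JSON manifest of the kind
# that triggers automated ingestion and downstream processing.
import json

def parse_manifest(manifest_text):
    """Return (run_id, list of sample records) from a JSON manifest."""
    manifest = json.loads(manifest_text)
    return manifest["run_id"], manifest["samples"]

manifest = """{
  "run_id": "RUN-0042",
  "samples": [
    {"sample_id": "S1", "commodity": "rubus",  "fastq": "S1_R1.fastq.gz"},
    {"sample_id": "S2", "commodity": "prunus", "fastq": "S2_R1.fastq.gz"}
  ]
}"""

run_id, samples = parse_manifest(manifest)
# Each sample record would then be queued for processing, e.g. uploaded
# to Galaxy Australia and tracked through the workflow.
print(run_id, [s["sample_id"] for s in samples])
```

Driving the whole process from a machine-readable manifest is what makes the hands-off, end-to-end automation described above possible: no one has to transcribe sample details by hand at any step.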
We do see that as one of the initial benefits for them, but more importantly, those cost savings can be used to bring in more diverse germplasm, which can be used to test how the material adapts to Australian environments, so they can build a more resilient business portfolio. Another aspect I touched upon is that we can look at the profiles of viruses present in these commodities, and industries now have the visibility to decide whether they would like to build a new business portfolio on a particular commodity that doesn't carry an exotic pest, but may carry something else that could be of concern for a sustainable return on their investment. And now we're thinking about new strategies: now that we know which viruses are present in this material, how can we assist in cleaning those plants of those viruses? That is something we are currently interested in implementing. The biggest benefit, in terms of profitability and the ability of industries to remain competitive, is faster access to this imported material, and that is part of the journey. We are in the process of using HTS in routine operational testing, but we still need to rely on the existing standard protocols that have been used over the last two decades. It will take a bit of time to take full advantage of this technology, but we believe that once they are using this technology alongside the other available assays, they will eventually move to adopting it fully, as we know it can deliver a more rapid turnaround. And as we build analysis pipelines that are robust and reliable, they can have the confidence to build a business portfolio and have this material released earlier. So, in terms of future perspectives, I still think there is a gap in data generation.
There is a lot of data that gets generated and deposited in public databases, but much of it is non-informative, so we really need to develop strategies for generating data that is more sensible and bespoke, and can be used to address specific needs and initiatives. We are implementing, and will be testing over the next couple of years, some strategies in a new collaboration looking at seed testing over the next four years or so, to assess whether we can make these more specific. The other important initiative is global: it's good to have these operations in a domestic setting, but the biggest impact comes when we have shared understanding and harmonization of these technologies across countries and jurisdictions. We are working in a Euphresco collaboration; this Euphresco initiative engages 23 countries around the world, and at the moment we have 10 countries interested in looking at how HTS is utilized for plant diagnostics. That initiative will be rolling out over the next couple of years, and we are committed to building an understanding of best practices and processes across jurisdictions, so that we have harmonization at the technical level, and then eventually we can move on to a policy agreement that can facilitate trade. The long-term vision from one of my colleagues, Mark Whattam, is: can we have a plant passport, so that we can remove these barriers to trade? That is something that will potentially come in a few years — I think there are still a few building blocks that need to be put in place. From our lessons implementing this technology in Australia, it took a fair bit of engagement with policy to get to this stage, and as we now look at global adoption of these technologies, I think we need that early engagement, and champions within the individual organizations, to facilitate adoption.
So, with that, I would like to acknowledge QUT and the team: Marie-Emilie, who worked on VirReport; Ruvini, for the Galaxy implementation of VirReport; and the colleagues who worked on the miscanthus virus, the implementation of the custom database, and the implementation of the Nextflow solution. For this activity, certainly Professor Matthew Bellgard for hosting this activity here at QUT, and Chris Williams and Hamish for providing the infrastructure that enabled this. Our colleagues from the Department of Agriculture — I don't have all the names here, just some of our champions; the list is much longer than this, and we are really blessed that they have been involved in this activity. Agriculture Victoria; Galaxy Australia — Gareth, Igor, thank you for the support throughout the journey; our partners from Hort Innovation; thank you to the CRC; and our colleagues from New Zealand, who have also been part of this activity. With that, I would like to thank all the people involved in this journey, and I'm happy to take any questions. Thank you.

Excellent. Any questions from the audience?

Yeah — that was a really cool talk, and really cool to see where you're taking the analysis; I think it's a really interesting approach. I have a bunch of questions, but maybe just the one I'm most interested in: how do you decide which viruses are problematic? When you do this type of analysis, you also mentioned that you find some novel or unexpected viruses in these samples, and that you couldn't always place these viruses — I imagine a lot of them are barely studied. And following from that: do you keep the raw data that you collected, for retrospective analysis in a couple of years?

Good question.
Yes. What we do is we have a decision tree based on the HTS evidence that we collect for any of the detections. Depending on the quality metrics we have defined, when we have enough evidence we can assign a call to a specific virus. But then there are cases where low-expression, also called low-titre, viruses could be present, and when you are collecting this type of data it is not reasonable to expect a very strong signal from those, so we classify them into a different category, which is a potential detection. What that triggers in terms of the decision making is a follow-up molecular assay to verify those cases — specifically, matches to viruses of biosecurity concern are the ones of interest — so the lab can go ahead and run those assays on them. Another category is, as you pointed out, potentially-new-to-science viruses: viruses that are divergent. These can be more of a challenge, because we rely on a small RNA technology, so what we can resolve in such cases is a relatively small fragment of the potential genome. What we are now considering is implementing an Oxford Nanopore approach for cases where we see those signals, so that we can resolve larger portions of the genome; we are currently developing a follow-up long-read assay that can give us more visibility on that front. As for the raw data: yes, it is certainly quite important that we keep it, and it is kept. I alluded earlier to the automated process that brings the data into the cloud, and we have automatic archival of that data. From the point of view of policy and regulation, we have confidence that any raw data that has been generated has not been touched by anyone; that is part of the process.
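For illustration, the decision-tree triage described in this answer might be sketched as follows. The thresholds, field names and category labels here are illustrative assumptions, not the project's actual criteria:

```python
# Illustrative sketch of HTS detection triage: confident calls, low-titre
# "potential detections" needing molecular follow-up, and the rest.
# All thresholds and names are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    virus: str
    supporting_reads: int    # small-RNA reads mapped to the reference
    genome_coverage: float   # fraction of the reference genome covered
    regulated: bool          # on the biosecurity-concern list?

def triage(d: Detection) -> str:
    # Strong evidence: confident call, reportable as a detection.
    if d.supporting_reads >= 100 and d.genome_coverage >= 0.5:
        return "detection"
    # Weak (low-titre) signal against a regulated virus:
    # flag for a follow-up molecular assay (e.g. PCR verification).
    if d.regulated:
        return "potential detection: verify by molecular assay"
    # Weak signal, not regulated: record, no immediate follow-up.
    return "potential detection: record only"

print(triage(Detection("virus_X", 5000, 0.92, True)))
# -> detection
```

The point of the sketch is simply that the same quality metrics drive different downstream actions depending on regulatory status.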
No one touches that archived data, and at the moment we are considering keeping it forever. Typically, initiatives dealing with this kind of data keep it for seven to ten years; for now we have put in place strategies that allow much longer retention cycles for those datasets. But yes, it is important to have this backed up, and we do have a separate working copy that we use for processing through the pipeline. — Thanks.

Great talk, very important research. I'm curious about your slide on the workflows, where you have Nextflow and Galaxy — we talked a little bit about this, but I'd be curious to know whether there are elements or features that you found easier to do in Nextflow, and then, vice versa, elements of Galaxy which you found easier than doing in Nextflow.

Awesome, thank you — fantastic question. One of our strategies was to have these two flavors of deployment of the pipelines, initially for two different purposes. We decided to have the Nextflow implementation because it is a very portable solution that can run on multiple environments — the cloud, on-prem HPC, or a local server. We were really keen on that solution because you can scale up quite easily: you can run thousands of samples without any changes to the code, and it can really scale to whatever demand may be required. Another benefit of Nextflow is that we can automate the whole process end to end, from the initial ingestion of the raw data all the way to the generation of the reporting.
Galaxy we use for a different purpose — initially not really for processing all this data, but for training. We were really keen to provide this training element to personnel of the funding agency, so that they have that accessibility, because, as you might be familiar with Nextflow, you need some digital expertise to go in, work with it and make sense of it, and to access those resources — expertise which is not readily available in their team. So we were really keen to use Galaxy for that. One of the bottlenecks we found is that not every tool, or not every option, may yet be available in Galaxy; that's why we run an initial step in Galaxy, go back to the cloud for the rest of the process, and then return to Galaxy to complete the workflow. Perhaps that is one of the initial bottlenecks we have experienced. That said, it has been really informative for us to use Galaxy to provide that accessibility to the end users, and I think it's important to have that engagement with the users, because at the end of the day they will be making decisions on the evidence. What we work towards is having the same information coming through Galaxy and through Nextflow, and having access to both provides redundancy: as users of Galaxy know, there is sometimes maintenance during which some of the servers may not be available, and if we are pressed for time we can rely on Nextflow. But Nextflow can potentially be a bit more costly in terms of cloud resources, so in that sense, yes, it is really good to have Galaxy available as well.

A follow-up question: would it make a difference in the development of pipelines for you if you were able to more easily just add in a command line and run it as a tool? So, like, you take a script, maybe have an interactive way to see whether the script works.
And you can just use that as a tool — is that something that would be interesting? — Yes, absolutely. I think that would provide a lot of facility for the community, because we implement these in-house scripts for processing of the data, whatever that processing might be, and having an easy way to incorporate them into Galaxy would be very useful. One of the things we were initially chatting about is these histories: we processed about 400 samples in this project, and looking through 400 histories can be a bit of an experience for someone trying to make sense of that information. So having a script that could fetch the relevant outputs across all those histories and merge and combine them into a single history would also be very useful for us — so we can gather these outputs from all the individual runs and go back to a single place where we need to look. Thank you.

You mentioned at the beginning, just briefly, that this is big business, and if there are delays this costs people a lot of money. I know that a lot of the sequencing providers you use for diagnostic services are accredited. Can you comment on the need for accreditation of the computational environments that you're doing the analysis on as well, and how do you ensure that they're doing what they're meant to be doing?

Thank you, a fantastic question again. I haven't alluded to this, but as part of the adoption of this technology we needed to implement standard operating procedures, end to end, along the whole journey of how the data is handled — and also of how the data is generated, because we can say small RNA is the way to go, together with unique molecular identifiers or unique dual indexes.
We find that combination to be the most sensible. But as you rely on different service providers using it, you may not necessarily get the same outcome — in principle you should, but there is no guarantee. So, as part of this process, we have a set of positive-control plants. At this stage we work with one particular service provider, using this positive-control set of plants that we have optimized and know we can detect, so there is that evidence. If we were to work with a different service provider, we would need to go through that process again to make sure we are still getting the same information. Yes, that is part of the process for us, and it can certainly be a challenge: initially, when we started to use different providers, we thought we would get similar results, but not necessarily. This can also be impacted by the instrument: you could be using, for example, a NovaSeq X, which is a much higher-throughput instrument where your samples may be processed alongside those of a number of other people, and that can also impact what you are detecting. So yes, we need visibility of what else has been added to the multiplexed experiment in that particular run. From the point of view of cost effectiveness it is good to use these larger, high-throughput instruments, but from the point of view of making sense of the signals we observe, we need visibility of the metadata of all the samples that were used in that particular experiment, and that needs to be considered. As for accreditation — that is not a formal conversation yet, but we need to engage with it, and I think it is really sensible that we do.
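The positive-control check described above boils down to verifying that every virus in the known control panel is recovered from each sequencing run before the run is trusted. A minimal sketch, with an entirely illustrative panel:

```python
# Hypothetical positive-control acceptance check: a run from a service
# provider is accepted only if every virus in the optimized control
# panel is detected. Panel names are illustrative, not the real panel.
CONTROL_PANEL = {"virus_A", "virus_B", "virus_C"}

def run_accepted(detected: set[str]) -> tuple[bool, set[str]]:
    """Return (pass/fail, control viruses the run failed to recover)."""
    missing = CONTROL_PANEL - detected
    return (not missing, missing)

# Full panel recovered; extra (e.g. novel) detections are allowed.
ok, missing = run_accepted({"virus_A", "virus_B", "virus_C", "novel_1"})
print(ok)               # -> True

# Panel only partially recovered: run fails, report what is missing.
ok, missing = run_accepted({"virus_A"})
print(sorted(missing))  # -> ['virus_B', 'virus_C']
```

The same check would be repeated whenever a new provider, instrument, or multiplexing configuration is introduced, which matches the speaker's point about re-validating before trusting results.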
I'm glad — typically I talk too much, so I tried to cut a lot of things from this talk; last time I spoke I ran over without leaving any time for questions. With this, I would like to say that this is my first Galaxy conference, so really, thank you. I would also like to thank you all for developing this resource and making it available to the community. As you can see, this is just one little case study where it has been valuable for us to adopt a particular technology to hopefully make a difference to businesses and their ability to be competitive and profitable. And there are broader impacts, because as we reduce the incursion of these pests, we also protect our nature and our environments. So thank you all.