All right, welcome everybody to our first submissions working group meeting for 2023. Normally Joe would be the one speaking here, but I'm going to be filling in, just running the agenda and kicking things off. We certainly wish Joe a speedy recovery on what he's been going through, and Joe, if you're watching this after the fact, hopefully I do a halfway decent job; you always do an excellent job of running these. First, before we get too much further, we want to introduce one of our newest attendees, representing the R Consortium: Luke Shan. So Luke, could you say a couple words about yourself, if you don't mind? Yeah, well, thank you, Eric. Hi, my name is Luke, if we haven't met before. I'm the new program manager for the R Consortium. I also work with the OpenJS Foundation. I'm attending all the meetings in the short term, but I'm sure I'll be participating and working with you all in different capacities along the way. Awesome. Well, welcome aboard; we're excited to have you join this. Hopefully it will be a very productive year. If last year is any indication, we had a lot of great progress, and there are a lot of new ideas and new avenues to pursue this year as well. I was looking a little bit before the meeting at some of the items we touched on in December that we can talk through; some of these I think will be quicker than others. Maybe we'll kick things off with an update from some of our attendees on the FDA side about our pilot two Shiny app submission. I know, Paul and others, when we last left off it looked like it did go through successfully in the eCTD transfer, but we just want to do a quick check to make sure there weren't any issues on your side that we needed to talk about. What's your progress on that? Pilot three or pilot two? I'm sorry, pilot two. I was wondering if I was missing something there. Yes, yes. Numbers are hard in January.
Yes, sorry for that. That's okay. No, actually, we were able to follow the directions and implement the Shiny app. I did it both from the initial directions and from the console. I might suggest adding some more details, outlining those instructions a little bit more. Okay. But that's more of an implementation issue. Okay. And I think Hye Soo had some suggestions as well, or things to narrate. We're still trying to figure it out. I had some weird error messages which, if I ignored them, things worked. Okay, so we're not sure if that's just a result of how things were being pulled from some of the sites at the time, or if it's something that needs to be addressed in the long term, so we'll do a little more digging on our side. Yeah, I very much appreciate that feedback, Paul, and I'm certainly happy to consult on the specifics of what you were seeing. As we think about future pilots in this space, especially ones that are going to involve Shiny or some other fairly complicated infrastructure, we certainly want to make sure the instructions are clear and get you up and running the right way. So I'd be very happy to follow up offline, or if you want to put some of your notes on the GitHub repository that we have for the submission, either way is fine with me, whatever works best for you. Okay, we appreciate that. So, do you want to share some of your observations or experience? Hi, everyone. Happy New Year. Same thing: I was able to follow the ADRG and then successfully launch the R Shiny application. There were no major issues on my end. I found some minor issues, like error messages and warning messages, but I'm trying to launch the R Shiny app on different machines, like my personal laptop, my scientific laptop, and remote machines, so that I can compare the results in different situations.
There are lots of error messages, inconsistently, so I need more time to see what is happening and what's causing the issues. So I just wanted to give you an update on progress: we were able to follow the ADRG and then launch the application. Excellent, yeah, thanks for that feedback. When we did the testing of this, we really targeted the Windows environment as a quote-unquote local install, since that seemed to be the primary method by which you all would be accessing the application, but I'm very intrigued by those issues you saw in the different environments, especially for future development, so feel free to share them once you've had a chance to consolidate your findings. Sounds great. Thank you. Maybe we can give you more updates at the February meeting, or offline. Yeah, yeah, happy either way; whenever you'd like to share those, we'll be happy to hear it. Awesome. Thank you. Great. So, given that feedback, does anyone else have any comments or questions on how pilot two has gone since we last discussed it? Okay, I'll take that as a sign we can move right along to pilot three, and I believe, Joel, you sent around a request for feedback on the proposal. Can you give our team an update on the progress on your side? Yeah, thanks, Eric, for the time. So just a quick update on pilot three. The proposal was sent out in early December, I believe, and unfortunately I couldn't get it onto the R Consortium GitHub; for some reason my access there wasn't working. But it is in our pilot three GitHub, so hopefully everyone does have access there to review the proposal. Essentially, the proposal again is just to the FDA, to ensure that when we package things up, they're able to execute it. It outlines all of the details there, pretty much the same approach that pilot one and pilot two took.
So we're hoping to follow that and see if there's any other feedback, any alternatives or suggestions on how to submit pilot three, since this is more ADaM-data related. I am seeing some feedback from our pilot three team, but I haven't seen too much from other folks in the consortium, so I'll pop the link to that proposal into the chat window, and hopefully Eric can put that into the minutes. Yes, absolutely. So, looking forward to more feedback on that, so we can update it as needed, and once it's final, we can have a final approach on how we want to submit pilot three. With that said, pilot three is ongoing and progress is looking good. There are five ADaM datasets that we're focusing on to help generate the TLGs; actually, I think it was just a table and a graph from pilot one. Having generated and drafted those, we're now trying to take the pilot one scripts, the programs that generated those outputs, and see if we can use our ADaM source data as input into those scripts to re-output the tables and graph. We are running into minor issues with ensuring that we use the same define and specifications as the original CDISC pilot data. I believe there was an update after the CDISC pilot data with the Test Data Factory repo, and I'm not too sure which one is truly the most up-to-date spec that we should follow for the ADaM production here. So if anyone has any info on that, that'd be great; otherwise, the TDF define is the one we're using now, because per their repo it seems they've updated since the CDISC pilot repo and have shared their datasets there. So that's one of two things. And then lastly, since this consortium is looking into new ways of submitting to the FDA.
You know, Thomas is always coming up with other ideas as well, and for this submission, Thomas had just sent me a link this morning to a white paper, I guess from PhUSE 2017, a paper on transport for the next generation. Traditionally, when we submit datasets to the FDA, we convert them to XPT format. From this paper, it seems like there are thoughts on newer ways to convert the data, other than XPT, for submission to the FDA. And one of the ways Thomas mentioned: I guess CDISC is proposing a Dataset-JSON file. So I actually wanted to know if anyone has any experience with, or has seen, this proposal before. There's a wiki link for it. So maybe to pick your brain a little bit, Paul; I believe this PhUSE work was in collaboration with the FDA as well. Could we also submit in this Dataset-JSON format instead of XPTs? I had heard indirectly that JSON was being proposed, but at this stage I don't know that it's gone anywhere. Okay. If we look at the, let me pull up the study data technical conformance guide. Okay. Let's see. Hmm. That's annoying, they changed the link around. Let's see if I can pull up a link. Let me put this in. Yikes, let me not put that in. Yeah, Friday the 13th moments are coming true. There's something called the Study Data Technical Conformance Guide. You can Google it, though my Google results are yielding something strange; they're echoing it with CDISC. Let me just try a different browser, just to see what's going on, because usually we can pull it up as an FDA link. Let me see if I can do that quickly. But the Study Data Technical Conformance Guide spells out the latest specs. Is it this one? I see one from March 2022. There's actually, I think, an October one. Okay. There usually is. And no worries, Paul. Ooh, I'll have to find it somewhere; mine is showing up as the 2018 one, and that's out of date.
I'll try to see if I can find the most recent one, but the Study Data Technical Conformance Guide is updated periodically, usually every six months, in March and October. That determines what is acceptable to go through the gateway. And, how should I say this, I have been at FDA for almost 15 years. Since I arrived, everyone has said XPT is obsolete and should be transitioned out. However, almost 15 years later, we are still using XPT. All we can say is, if someone wants to, they can use, I think it's SAS version 8 XPT, and they're not necessarily stuck with the SAS version 5 XPT, which has been the traditional one. In other words, you can use it, but it's not required. Understood. And are you seeing more version 8? I haven't read up on the differences between five and eight. I think eight will... oh, here we go. Yes, do you want to post that, Hye Soo, to the group as a whole? Hye Soo has conveniently found the March 2022 one. I found October. Oh, you found October, so it's a slight update. Oh, is this the right document that you're looking for? Let me check. Yeah. I opened it on my computer; it does say October 2022. So this is the more recent one? Yeah, I think so. Yes, that's the most recent one, and it tells you everything, essentially. There's a revision history, et cetera, but it does say what's actually required. And on page six of this guide, it does say version 5 is the file format. So, you know, they will accept some later versions; apparently they can't require them. I see. Understood. Thank you, Paul, for the latest guidance; we'll try to stick with the requirements. But yeah, looking forward to seeing if we can maybe try some of the newer ways to transport. Right, I think in the long term that would be advisable. Pardon me. Oh, sorry, I just saw a chat from Doug. Okay. So, yeah, the systems.
So far as I know, there have been several pilot attempts to look at alternatives to XPT, and so far none of the pilots have been entirely satisfactory, so we're still kind of stuck with the current situation with export files. Yeah, that's good information to know. I think one of the limitations of XPT now is that variable names are still stuck at eight characters, labels at 40, and character values at the 200-character limit. So I just wanted to see whether alternatives to XPT could be used. I think the later versions expand those a little bit, but I'm not 100% positive. There is ongoing discussion, let's just put it that way. I don't know that anyone has a complete solution that will cover all possible cases. JSON is one possible option. Essentially, what many people do when they first get the datasets is convert them to an easier-to-use format. Right, so on your end, when you receive it, you'll just convert it to something you're used to, so to speak, or easier to work with. Correct. Even if you want to use it in SAS, you really have to first convert it to a standard SAS dataset. True. So that conversion has to occur. In some cases it can be converted to an R dataset. And I have seen, although I don't necessarily recommend, CSV files used for data conversion. The problem there is that people might try to open those CSV files in Excel. I notice Eric is smiling; I've seen the pain that can happen in that case. Yes. So, given that's the case, there are some issues of interest there. Appreciate your feedback, Paul. No worries. I think maybe for pilot three we can follow the technical guidance here that you've provided. Thank you for reminding us of that.
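The XPT version 5 limits mentioned above (eight-character variable names, 40-character labels, 200-character values) can be checked mechanically before packaging a submission. Here is a minimal illustrative sketch in Python; the column metadata layout and function name are invented for this example, not part of any official tooling:

```python
# Illustrative check of the SAS XPT v5 limits discussed in the meeting:
# variable names <= 8 chars, labels <= 40 chars, character values <= 200 bytes.

def check_xpt_v5_limits(columns):
    """columns: list of dicts with 'name', 'label', 'values' (hypothetical layout)."""
    problems = []
    for col in columns:
        if len(col["name"]) > 8:
            problems.append(f"{col['name']}: name exceeds 8 characters")
        if len(col.get("label", "")) > 40:
            problems.append(f"{col['name']}: label exceeds 40 characters")
        for value in col.get("values", []):
            if isinstance(value, str) and len(value.encode("utf-8")) > 200:
                problems.append(f"{col['name']}: value exceeds 200 bytes")
                break  # one offending value per column is enough to flag
    return problems

cols = [
    {"name": "USUBJID", "label": "Unique Subject Identifier", "values": ["01-701-1015"]},
    {"name": "TRTEMFLAG", "label": "x" * 45, "values": []},  # violates name and label limits
]
issues = check_xpt_v5_limits(cols)
```

A pre-submission script of this sort could flag problems long before the gateway does, which is exactly the kind of early feedback the group is after.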
So yeah, maybe this could be a pilot four or five: alternative transports besides XPT. But thanks for having this discussion; I think it's helpful. Yeah. Right now, until we hear otherwise, we have to go with the guidance provided in the Study Data Technical Conformance Guide. Understood. Yeah, at least from my perspective, just observing this, I feel like we should keep a close eye on the JSON format's progress. Now, again, this is Eric's opinion, so feel free to challenge it, but it is probably the best balance of being modern yet accessible enough that, in the different computing languages we would use to consume these results, we would have a fairly easy time getting this into the rectangular format that we're used to when we actually look at the clinical data. Obviously R has many packages that go from JSON to R data frames. I'm sure SAS has routines that could probably handle it, given some of the modern tooling they're marketing in their product pipeline; that's just speculation at this point. But I would hate to reinvent the wheel if that effort does actually pan out. I definitely hear what you're saying, Paul; there have been attempts at this with mixed results at best, and maybe this is the more promising one that can at least be imported successfully in these packages and software. Frankly, in the modern software development world, JSON is like the ubiquitous language for going back and forth between client- and server-side operations and the like, but that's a whole discussion topic for another day. I participated in that Dataset-JSON hackathon that COSA, the CDISC Open Source Alliance, put on. Maybe pilot four could be working with Sam Hume and the COSA people to do a submission with Dataset-JSON.
Or as a continuation of pilot three, to see if we could push things along a little bit. Just as an idea for a pilot four or pilot five. Yeah, I think it's not a bad idea. It's just, I don't know that the gateway is even set up to allow for JSON submissions. Hmm, that's a good point. So, until the gateway is set up to enable those items, we might not want to go down that path. That would be a discussion with the eData team at FDA, and in the past we've had Ethan Chen and some of his folks participate in this meeting. Yeah, so that could be an action for a future agenda topic, to bring someone from their team in to explore that as a possibility. I do agree that it definitely warrants its own pilot instead of trying to merge it with pilot three, because with the discussions involved and the change that will likely need to occur, we wouldn't want to slow down pilot three's progress if that piece ends up being a very iterative thing. That's just my opinion, but Joel, I don't know if you agree or if you want to think more about it. Yeah, that makes sense to me. If Paul is saying the gateway is not open to it yet, then it doesn't make sense to work on it at risk right now. We can always extend it once we have word of it being receivable by the FDA. So once pilot three is done, we'll follow XPT for now, but then, once the gateway opens up, we can reopen pilot three, extend it to convert to JSON, and submit again. I'm thinking that could probably be easier than a whole new pilot or package. Yeah, that's a similar mindset to what I have when I think about the container pilot, whatever number that ends up being: we're just going to basically reuse the Shiny app we did in pilot two and not try to reinvent too much at once, just to isolate the key points and the change.
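One reason JSON maps so cleanly onto the rectangular clinical data mentioned in the discussion above is that a column list plus a row list round-trips directly into data frame records. A tiny Python sketch using only the standard library; note this payload shape is a made-up miniature for illustration, not the actual CDISC Dataset-JSON schema:

```python
import json

# Hypothetical miniature of a dataset payload; the real CDISC Dataset-JSON
# schema differs. This only illustrates how a JSON transport maps onto the
# rectangular, data-frame-shaped data discussed in the meeting.
payload = json.dumps({
    "name": "ADSL",
    "columns": ["USUBJID", "AGE", "ARM"],
    "rows": [
        ["01-701-1015", 63, "Placebo"],
        ["01-701-1023", 64, "Xanomeline High Dose"],
    ],
})

data = json.loads(payload)
# Rebuild row-wise records, the shape a data frame constructor would accept.
records = [dict(zip(data["columns"], row)) for row in data["rows"]]
```

In R the equivalent one-liner would come from a package like jsonlite, which is the kind of "JSON to data frame" tooling Eric alludes to.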
Sounds good. All right, well, thank you, Joel, for that great update and a great discussion; I took a lot of notes for the minutes. I'm trying to keep those as best I can, but as always, when I send them out, if there are any corrections, it's all in the GitHub repo, so feel free to correct them. I'm glad you're recording it; me, I forget. Yeah, you and me both, yep. I'd like to move on for the sake of time. Doug had proposed that in this meeting we have an introduction to the repositories working group and some of the progress and efforts going on in that space. So, Doug, I'll turn it over to you if you'd like to kick that off. Cool. Yeah, for anyone who's not aware, we've launched a new working group under the R Validation Hub that's looking at what a shared source for packages might look like. We just wanted to do an intro session to introduce our team and what we're trying to accomplish. I think there are a lot of parallels between what our teams are trying to do, so we want to set that groundwork so that, if there are places where we can help each other out, we can be in touch from the start. I have a couple of slides prepared; they're pretty bare-bones, but I'll run through them and then we can have a discussion around what we might see as the next steps for this kind of thing. So let me see if I can share on Zoom; hopefully this works. Can someone confirm you're seeing my slides? Looking good, Doug. Okay, cool. All right, so we have a few people on the call representing the team; I think I saw Kevin and Andrew also in the room. So yeah, as I mentioned, this is a group starting to look at what it would look like to have a regulatory-ready repository.
And I'm going to leave the word "repository" alone for now; I don't want to harp on it too much, because we're still open-ended about exactly what this product ends up looking like, and I'll get to that in a little bit. So here I just want to start with what our mission is, why we're looking at this problem, and what exactly the problem is. What we're hoping to do is support a transparent, open, dynamic, cross-industry approach to establishing and maintaining a repository of R packages with accompanying evidence of their quality and assessment criteria. In a lot of ways, we have CRAN already, which is a pretty phenomenal resource that already sets a bar for package quality. But as I'm sure everyone on this call is aware, we have a lot of internal, company-specific processes that then get bundled on top of that to document this evidence and make it more reproducible and accessible, so you have to show that level of quality. If you've gone through this process in your own company, you can imagine what that looks like internally, at, you know, Acme Co over here: they might be taking a look at the hundreds of packages that you might need to perform a clinical analysis, and everyone might decide that those all look great. And then you go to do your submission years later, or maybe there's some kind of interim reporting event, and that's really your only opportunity to get direct feedback on whether this level of quality is sufficient. So if there are really major concerns about how you're performing the analysis, or your choice of packages, or things like that, you only get those at the tail end of your process.
In general, I haven't heard of many situations where that's been a major concern, but it does push a lot of this decision-making toward the tail end, which makes for a really long feedback loop and leaves a lot of uncertainty on the table throughout the whole analysis process. So what we'd like to do is make this a more open dialogue. Instead of having this long delay before you get feedback, this is something where, first of all, all companies have transparent access to a shared, collective body of knowledge about what level of quality we feel each of these packages has. And then, on the regulator side, we hope this opens up a forum for a more transparent dialogue about which packages hold up to some level of quality, or which might have methods or implementations that are not up to their standards, so that if we get that feedback earlier, we can feed it back into the open source community and hopefully improve those packages to the point where people are happy with them. And like I mentioned, with this idea of a repository you might be thinking of something like CRAN, but we're not exactly fixated on that idea. It could be something like CRAN with some additional quality assessment considerations baked into it. There's also R-universe, which has a really awesome product for producing a cohort of more curated packages. Or it could be something like going to IKEA and pulling off the shelf a set of instructions for producing a regulatory-ready environment with a curated set of packages that we've decided are of good quality.
We're not exactly sure what this is going to look like, and that's largely why we're here: to start thinking about how we answer the questions about what's really needed before we jump into a technical solution. So, to cut to the chase for anyone representing the regulatory end of things: we want to understand how you observe quality and get a better idea of what we can do up front to showcase that quality, so that it's not a mystery to us whether that's going to be something you're comfortable with us using for analysis. And if you're an industry participant, what we're looking for is support in helping to draft these things, or intermittent feedback if you just want to look at what we're producing and tell us whether it would work for you in your company, or whether, given your experience with interactions with health authorities, it's something that would support that process. Anywhere we can leverage feedback is really where we're looking for the most input right now. And just to paint a little bit of a picture of what we're planning to deliver initially: like I said, we don't want to jump straight to implementing something on a technical scale and invest a lot of time and effort into building something before we really understand what the need is. From the start, it could be something as simple as mapping out the various quality heuristics that we could possibly surface in something like a shared resource, so that we all have the same understanding of the level of quality of the software. If I were framing this in the context of a submission pilot, this pilot really could just be a survey of, if we had this information about download counts or test coverage,
how is that perceived, and what else would you be looking for? Just starting to have those conversations would really help us make sure we're heading off on the right path. From there, we were thinking that something like a mock-up of a portal would help make it a little more tangible, so it's not just a checklist but rather something where you get a bit of an impression of what it would look like to inspect that for each package. And if we were thinking further into the future, if we wanted to put together a pilot where we host these packages in a way that could be consumed by a health authority, we could spin up something like a static package server, if that's a direction we end up heading in, so that you could pull packages from an endpoint similar to CRAN but more controlled, just so we could see what that technical feasibility would look like. So I think that's my last slide, pretty short, but hopefully that gives you a little bit of an idea of where we're at right now and what our goals are. And like I said, I really want to leave the majority of the time for discussion, so if anyone has impressions of the path we're on, thinks we could be doing things differently, or wants to be involved, we're very interested in hearing that. I can leave it there and open up the floor. Doug, in terms of how the group is structured, how often are you meeting, and what's the cadence for the collaboration within your group right now? Yeah, so right now we meet about once every two to three weeks. It's a little ad hoc at the moment because we just passed the holiday period and we only started up in November, so things are still kind of settling.
We have people split up into three different groups. We have people who want to be more industry reps, who are really here from an ideation perspective, just trying to make sure we're hitting the right mark. We have people who want to be involved technically, spinning something up along the lines of a technical solution. And then we have people who are more organizing, just trying to coordinate all the different efforts. So if anyone feels passionate about any of those paths, we can use your help. And certainly on the health authority side, thinking about what you would be receptive to giving feedback on would also really help us understand what we should be delivering in this early part of the initiative. If that didn't prompt any initial impressions, then we can always... I have one. Okay. I'll chime in; I don't want to dominate the conversation, but is there overlap or synergy with what's been done in the R Validation Hub, say the riskmetric effort? Is that going to be a component of this, or how do you think that fits into play here? Yeah, totally. So I've been biting my tongue, trying not to bury the lede with riskmetric, but I think that could be a really fruitful resource for us to start surfacing some of these metrics. But there are these types of tools all over the R world for doing this kind of quality assessment. On the riskmetric side we try to be really exhaustive; I was heavily involved with riskmetric before we kicked this off, so I'm very aware of the breadth of the risks we can assess on the riskmetric side. And it was really built with the internal use case in mind. So, depending on what a shared collection of packages might amount to...
If it ends up being something that's automated, then you might be expecting different things from what we would run with riskmetric. If it's something more like taking what we would have assessed internally and making that public, then maybe riskmetric is a really good resource for that. So we'll use the right tool for the job, and riskmetric is certainly among the strong contenders for providing a lot of that great tooling, so maybe there'll be a lot of synergy there. Those are good nuances you shared there: this is not just about a company or a sponsor assessing which packages are used for a clinical deliverable and getting the convenience of riskmetric as their input. This is also looking cross-industry, across partnerships, across health authorities. So I can see why this could be bigger than just what riskmetric offers, but riskmetric could still be a key piece of it. Certainly I'm intrigued by where this goes. So, if someone does want to get involved, is it just a matter of contacting you, or what's the best way? Yeah, I'll drop a link. We have a repository where we do most of our discussion, and there's a sign-up issue pinned there, so you can just message there and we'll get you added to meetings and so on. And from there, we can start. We primarily want as much of the work to be async as possible, so even just jumping in and starting to express opinions on issues is probably the best way to start.
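The kind of heuristic aggregation discussed here, combining signals like download counts or test coverage into one quality number, can be sketched very simply. The metric names and weights below are invented for illustration; the riskmetric R package's actual metrics and scoring differ:

```python
# Toy aggregation of package-quality heuristics like those discussed in the
# meeting (tests, coverage, documentation). Names and weights are hypothetical,
# not riskmetric's real scheme; they only show the weighted-score idea.

WEIGHTS = {"has_tests": 0.4, "coverage": 0.4, "has_news": 0.2}

def quality_score(metrics):
    """Combine normalized 0-1 metrics into a single weighted score in [0, 1]."""
    return sum(WEIGHTS[name] * metrics.get(name, 0.0) for name in WEIGHTS)

pkg_a = {"has_tests": 1.0, "coverage": 0.85, "has_news": 1.0}  # well-maintained
pkg_b = {"has_tests": 0.0, "coverage": 0.0, "has_news": 1.0}   # minimal evidence

score_a = quality_score(pkg_a)
score_b = quality_score(pkg_b)
```

The interesting design question raised in the discussion is not the arithmetic but who agrees on the metric list and weights, which is exactly the consensus the group is trying to build before committing to a tool.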
So yeah, definitely get involved if there's any interest. I'll also say there are other peripheral elements to this: within the R Consortium there's also a repositories working group that's thinking about how to improve, like, the transparency of... So if we do end up going down that path, and I think there's lots of energy there, we could work with that team to figure out what these core quality measures are that we want to see, and whether that might in the future feed into a more agnostic repository. That's some pretty big thinking for this early stage, and I'm not sure that's necessarily where things will go, but I think it would also be super cool. There are a lot of different avenues this could take. Definitely. I was thinking about those efforts as you were talking. One of the points you made earlier in your overview is the idea of transparency: where the latest updates are, what's changed from, say, a month ago or a previous release of a package to now. As long as the group adheres to those principles in how these metrics or quality assessments are performed, I think this has a lot of promise. So certainly feel free to drop a link to the repo in the channel and I'll throw it into the minutes when we send them out, for people to get involved; it does sound really promising. And if I can pose one targeted question to the FDA folks: what I was proposing, this list of quality measures that we might be able to include as something like a quality report for a package, is that something you would be able to give feedback on?
Do we have the right people in contact with us to make that kind of first pass? It doesn't even need to be an official FDA opinion, just some general ideas around what that would look like. Are those people attending these meetings, or do we have the right connections to get that type of feedback? Those people don't really exist. Okay, so that's also good feedback. So what would it take, then, to start forming some kind of consensus around what measures of quality you might be looking for? How should I preface this? The biggest concerns at FDA are probably around security: are there any things that could potentially compromise the system? One instance I've seen cited, and I don't know to what extent it exists, is that haven supposedly can run into buffer overflow issues. But I've used haven without problems for at least a couple of years and not run into those issues, so how big of an issue is it? I don't know. So there are some things like that that would be of potential concern, but that's more for the security folks. As far as I know, there is no real attempt made to independently gauge the quality of an R package. Basically, what we've been pointing people towards is the R Validation Hub, saying, look, if this is a concern, here's a reasonable stab in the right direction. It's not perfect, and it's not doing an independent replication of the algorithms, but it's something. That's the primary aspect. The fallback position is essentially this: if you look at the splash screen of every computer program, you're using the software as-is. Even software that companies tout as being validated and tested, if you read the splash screen, you're using it as-is. There is no implied guarantee of quality.
I mean, the cynic in me says nobody is guaranteeing quality at this stage. Yeah, and guaranteeing quality is definitely a high bar, so I'm not sure trying to reach that is realistic, especially in the context of something like R where the landscape is so vast. But what we can do is think about what qualities we can put out there that at least give us a reasonable level of confidence and some consistency, some kind of heuristic for measuring package quality that would at least give us a baseline. I'm not saying that we're going to eliminate every situation where haven might encounter a buffer overflow or whatever it is; things like that are probably going to be unrealistic for us to try to catch in all cases, or even most cases. What we can do is form a central forum for expressing these things, so that when someone identifies something like a buffer overflow issue in haven, we have a way to record that in a central place. It then becomes a priority for pharmaceutical companies if they need that package, so contributing the fix becomes a priority for them, because otherwise it's going to be a bottleneck for their submission process. If we can do that, and at least make it more transparent, I think that's really the ultimate goal. So what I think would be great, and I've already learned a lot about where your priorities are in terms of assessing these things, is if we can put together a list of what this might look like. Even just broad-strokes feedback on it would really help us set that direction. I have a question too, sorry, I don't mean to interrupt. Go ahead. Yeah, please.
I feel like every conference I go to has talks about this kind of topic: validation, risk assessment. I don't know if there's a way, in what you're doing, to correlate all those efforts. I hear a lot about auto-validation tools; I think you at Roche have one, and at GSK we have our own validation process now that I think has been talked about at some conferences. I know we're trying to converge on something that all the pharma companies can use, but I get kind of confused about what all the hot topics are right now. It seems like there are always these tools out there to show you the differences between packages, the risk metrics of different packages. If there are tools out there that are actively being updated and maintained, it would be good to see those in a list, as well as the different validation processes that have been discussed at different companies. That would be helpful for me, because when people talk about this it seems like it's a similar topic every time. We're getting there, and I'm feeling positive about it; I just get confused about which one is the hot one right now. Yeah, I feel that as well.
So actually, one thing I didn't give the history on, just in the interest of time: the R Validation Hub just this past year ran a series of case studies where I think eight different companies shared their internal validation processes, and we put out a white paper after that that tried to find the consistencies between all of those and the differences among them. So we do have those resources to try to standardize a little bit, but those efforts occur under the assumption that we're all internally doing enough, which carries this unpleasant, looming fear that there might be something else we should be doing, or maybe we're doing too much and just bogging down a process for no reason. Ultimately, this is now about kicking off that dialogue with health authorities instead of just being assumptive about what's necessary, and paring that list back or bolstering it with other things to make sure it's hitting the notes that health authorities are looking for. Is that white paper on the R Validation Hub site? Let me look up where it was published; I'll share a link with Eric after the call. Okay, yeah, I'd be very interested in that. Can do. Cool, thanks. Yeah, I think that's all the time we need. I know we went a little over, Eric, so I don't know if you had other agenda items, but this worked perfectly for me; this was kind of the last major item, because I don't have a whole lot to share yet on the container pilot, whatever number that ended up being. I haven't had a chance to do a lot of digging just yet, so not a whole lot on that. But with the few minutes we have left, does anyone else have any questions or comments about these various efforts or other things going forward? I do have a question on the website for this group. Yes.
I'm helping out Joel with pilot three, and I noticed ours isn't updated. Can I do a PR to get it updated, just with some loose language about what we're trying to do? Certainly, I'd welcome that. I'm going to try to refactor that site to be a little easier to maintain; in fact, I have my eyes on making a Quarto site out of it. So certainly feel free to send me whatever notes you have about the effort description and I'll at least get it updated in the current format, but look for that site to be a little bit different in the coming months, because I want to make it a bit easier for both Joe and me to maintain going forward. Okay, yeah, thank you for that. All right, other questions or comments before we wrap it up? All right. Here's to the new year; absolutely great to see you all, and like I said, I think we've got a lot of exciting efforts underway, and certainly we'll be keeping in touch regularly. I hope you all have an awesome weekend, and we'll see you back in February. Okay. Thanks everybody.