Well, welcome everyone to the November submissions working group meeting. So let's begin. I guess there's been a lot of correspondence — who would like to sum that up? Eric or Ning, do you know the status of where we are in response to Paul's request?

So I have been looking at that a little. Admittedly, I was a little late getting to it — I just finished the Shiny in Production workshop I gave at R/Pharma yesterday — so I'm catching up on the chain here. But looking at the comments — and Paul, you can correct me if I don't understand this correctly — it looks like there are a couple of minor ones with labeling: one of the tables had a reversed label, it looks like you'd like to see an additional table as well, and it sounds like something with decimal places. So they all seem fairly minor; we just need to address them. But it sounds like, at the minimum, it's looking good from your side based on those comments, right?

Yes. The primary table had the header labels for the high and low doses reversed.

Okay, we will fix that for sure. It may have been that when we put in the code to generate the tables — that was at the very early stages of the project — it just didn't get updated when that was brought up in Pilot 1 as well.

Now, forgive me if this is in the chain and I missed it, but did you all test the deployment instructions when you reviewed the app, or was reviewing the app the extent of it?

I was reviewing the app that was posted on the RStudio server. I'll be honest and say I had not tried the deployment out yet; I was providing feedback on the basis of what I saw there.

Yeah, no, that's great.
And I just want to make sure that if there are any issues with following the ADRG instructions, we're able to get ahead of those as well while I'm making these updates to the app and the underlying tables that are generated here. I had not tested it — it's possible Hye Soo may have, but I don't know. Do you want to weigh in, Hye Soo?

I haven't tested it out. I think I'm going to start reviewing the app once you complete the submission.

Okay. Yeah, we were trying to follow the same model as before, approaching it as we would an actual submission, where we wouldn't have any a priori information. So I was doing a little more of the initial review.

Okay, that's perfectly understandable. I guess, Eric, from our side, once we address the comments we can do the actual gateway submission, and then Paul and Hye Soo can do the testing.

Okay. So I think you at least have a framework to address the concerns that were raised. And I assume you had no problems with those — they seemed reasonable and appropriate for this circumstance?

Just one clarification on the additional table: is that the remaining patients at the end of the primary treatment phase of the data? Is that what you're looking for?

Yeah, it's how the dropout proceeds. What's interesting — and I don't normally say tables are interesting — is that there appears to be differential dropout. That's the kind of table where, I think, having a Shiny app to work with adds something: it's an exploratory analysis of greater interest and utility than the pre-specified ones.

Okay, so we could give it a new tab in the app — maybe we call it a disposition table, or something to that effect — and basically mimic,
in a somewhat more polished way, the little screen grab that was in the email, with the two doses and placebo at the various time points.

Okay. Yeah, that all sounds doable to us. I will make sure to get the updates in there, and then I'll probably redeploy it on the hosted app so you all can take a look one more time. Then, barring any additional comments, we can go with the eCTD — the actual transfer of everything.

Sounds great. Have we set a date for this? What do you think it'll take you in terms of time, Eric?

I can try to address this in some of the downtime next week in between the conference, but — at the risk of bad estimates — I'd like to set end of next week as a deadline to get this sorted out. Maybe sooner.

I'll go with the one that doesn't cause you too much grief and see where that takes us.

Yeah. With the conference going on, and then Veterans Day, do you even want to give yourself two weeks?

Well, I won't turn that down — we can be a little less stressed in that case. I assume you want to participate in the conference, and I'm heavily involved, especially with the backstage stuff — I'm telling folks my week next week is mostly taken up at this point.

Okay, so maybe we try two weeks from today — is Friday the 18th a good estimate?

Yeah, I was thinking the same, since with Thanksgiving coming people may take time off.

All right, so if we do Friday the 18th, when you're done, what will we need from there? We'll need to have people look at it — so what's our actual launch date then? If Eric is done on the 18th, people will need a few days to look at it, but those few days take us right into Thanksgiving, right? So the following week.
Since I feel like the updates are pretty minor, maybe we can just have a small team — myself and a couple of others — doing the testing, so I feel it's doable to even get the testing done if we carve out some time on Thursday. Maybe we can target having the testing done by Friday.

Yeah. In totality, these are not major changes, and honestly, if not for the conference, we could test it sooner. But we can test it ourselves first to make sure it's meeting the comments, and that way, downstream, we can do a couple of things on Paul and Hye Soo's side as well.

So does that mean the submission date is the 18th also?

I'm thinking if we can target the 18th, that would be good — before people start taking time off.

Right. All right, so we won't set a hard date for when Eric's done, but assuming you're all in communication and things are going well, 11/18 is the submission date. That means Hye Soo has to be around on the 18th — does that look good for you?

Yeah, that sounds great.

Good. As was said, we know with people's end-of-year schedules it's going to get sporadic after Thanksgiving, and obviously there are preparations for December as well, so it's certainly in the interest of us on the developing side to get this out of the way before the holiday crunch hits.

Right. And the next meeting of this group is scheduled for December 2nd, so maybe there'll be some results by then.

Yeah, maybe so.

Joe, do we want to put a placeholder on the 18th, just to block everybody's time?

So you mean we should set a meeting for the 18th — for the submission? Okay, I can do that. What time should I put it?

The same time as this would work for me.

Yeah, it works for me as well.
All right, I have one kind of overview question. Paul, is there an easy way to describe conceptually where you came down on drawing the line between a little interactivity and too much interactivity?

Sure. I think we can distinguish between exploratory analysis and inferential analysis. The problem with inferential analysis is the usual problem with p-values: there are concerns about cherry-picking. So that's one area — and there's controversy within the agency, to be honest. The direction we were given, we could say, is that interactive inferential analyses were probably not the way to go; exploratory analysis is a different matter. If you think about it, the Kaplan-Meier curve that we're suggesting is essentially a safety result rather than a primary efficacy result, and there's a difference between exploring for safety and inferential statistics. A Cox model with an adjusted Kaplan-Meier curve — that would be inferential. The overall pattern, if you look at it, I would say is still exploratory; it's essentially an overall safety analysis.

Okay. We do not actually quote a p-value — or rather, you do not actually quote a p-value — in that Kaplan-Meier plot. It's basically just confidence bands. Admittedly, that is a form of inferential statistics, but I think we can put it in a slightly different class.

Right, because that's not exactly claiming the treatment groups are significantly different from placebo. It's more of a visual representation; we're not making claims on that particular plot.

Yeah. There have even been recent advisory committees — one within the last couple of weeks, for Makena, that was somewhat controversial. It centered in part around a sponsor wanting a previous study, which had a subgroup of one population, to retain its indication. And the advisory committee overruled that.
That's the point: there was a type of subgroup analysis that the advisory committee said no to — we don't believe it, we won't go along with that.

So do you think this kind of guidance you're giving on interactivity is going to stand for a long time, or is this just a matter of it being assimilated into practice?

Since it's consistent with the ASA statement on p-values, I think it's probably going to stand. At least, that's my interpretation. Again, that's me speaking, not the agency — it's only an opinion.

I was curious, then: for pre-specified subgroup analyses, I assume there is less controversy, right? So do you think making pre-specified subgroup analyses easier to navigate using an interactive tool would be a more tangible goal for this working group?

Yes. If it's pre-specified, I don't foresee a problem.

Thank you. So basically, if we are thinking about more complicated Shiny use cases in the future, as long as they still align with what has been pre-specified in the SAP, it should be less controversial. And if you allow people to do things outside of what has been specified in the SAP, that could bring some concerns, right?

Potentially — or at least that would be of potential concern.

Thank you. That's interesting — there's some nuance here. In a typical submission, after all the data and the TFLs have been transferred, on the reviewer's side at your end, Paul, it would be quite possible that the reviewer is going to slice and dice certain things to look at what's behind certain findings, for their own understanding. Obviously no one on the sponsor side is going to get in the way of that — that's all part of the process.
But if we provide a tool that makes that part easier, that's a different story. Maybe I'm overreaching here, but it does seem like an interesting nuance: a sponsor helping by building an app that makes every view easier — that could be misleading. I don't know, maybe I'm reading too much into it.

There's been controversy with subgroups, let's put it that way. There is a white paper that came out about subgroup analysis five years ago, and there is a guidance in the works, is my understanding. So I don't want to do anything that would suggest we're endorsing anything contrary to what is a semi-official statement.

Yeah, that's good context to have, and I'm pretty familiar with that guidance from previous work I've done — in my day job at Lilly we have been looking at optimal ways of doing subgroup analysis to make it a little more statistically robust, so we've been watching that space quite closely.

And I think most of the time reviewers will start with the SAP. Where things can sometimes differ is that they also want to test model robustness — how robust the assumptions are — particularly if some things are marginal and/or controversial. But I'd say the ASA statement on p-values would be one where, while it's not an official FDA statement, it does highlight best practices as endorsed by a professional organization.

Is that white paper you referred to, Paul, available for everyone?

I don't recall off the top of my head; I haven't looked at it in a number of years, to be honest.

Well, it seems like we have some clear guidance — we're in good shape. The 18th looks reasonable, so this is all exciting. Do we have more discussion today? I see Joel is here. Joel, do you want to say anything about what you're doing with Thomas?
Thanks, Joe, appreciate the time. Just a quick update: for Pilot 3, as everyone knows from the proposal, this is somewhat of an extension of what was done for Pilot 1. We actually just had our team execution kickoff meeting this past Wednesday. It was a really great kickoff — we got to meet all those who volunteered for this pilot, and we assessed who on the team will actually be part developer, really contributing hands-on, versus those who may be in a supporting or observation role, so we can get a clear view of how to assign the work. We had a really good introduction of everyone's background and where they see themselves working on this pilot. With that, we created a team roster — there are roughly 13 people, including us, who wanted to be a part of this — and we discussed the scope and purpose of our work, so everyone is aligned there.

Essentially, what we plan to do with Pilot 3 is resubmit everything that was done for Pilot 1, but the ADaM piece is the biggest difference: we will be using R to generate the ADaMs. Thomas, of course, is the product owner of the {admiral} package, and we'll have a quick workshop meeting, or follow-up meetings, coming up to make sure everyone's aligned on how to use {admiral} and the dependent packages that go along with it, in order to generate the ADaMs. The next step is to identify the people who want to be assigned to the ADaMs we need to generate. In this case, we want to focus only on those that were used to generate the TLGs (TFLs) produced in Pilot 1.
Doing a quick review of the TLGs that were produced in Pilot 1, there are roughly five ADaMs that were used for those — for instance, an ADSL, I think a cognitive analysis dataset, a labs dataset, and then a time-to-event TLF. So out of those, I think five datasets will be the focus of what we generate for Pilot 3.

Once we generate the ADaMs, we'll then do a diffdf check against the CDISC pilot data, to ensure that what we're developing using the {admiral} package still matches what was generated in the CDISC pilot data. That'll be one of our validation steps; we're not doing any QC programming from scratch yet. We just want to focus on the first-line programs first — being able to develop these ADaMs in R.

In addition, we felt it made sense to add the step of taking the programs that were used in Pilot 1 to generate the TLGs, and feeding the R-generated ADaMs into those Pilot 1 TLG programs as source, to ensure the R ADaMs still produce the same results as what was generated in Pilot 1 and submitted.

So I think we're underway as far as what our workflow is going to be. Thanks to you, Joe, for giving us access to the repos to work in for Pilot 3 — we're starting to create issues for each of the tasks that need to be done, having people self-assign the tasks they want to work on, seeing what else is open, and then making sure to redistribute the work in case too many people want to do one thing. But of course we'll discuss that in our routine follow-up breakaway sessions.
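The diffdf validation step described above might look something like the following. This is a minimal sketch, not the pilot's actual code — the file paths, and the choice of ADSL and USUBJID as the comparison key, are assumptions for illustration.

```r
# Compare an ADaM dataset derived in R with {admiral} against the
# corresponding dataset shipped in the CDISC pilot data.
# Paths below are hypothetical.
library(haven)   # read SAS transport (XPT) files
library(diffdf)  # dataset comparison with a detailed difference report

# ADSL as delivered in the CDISC pilot data
adsl_pilot <- read_xpt("cdisc-pilot/adsl.xpt")

# ADSL re-derived in R (assumed to have been generated already)
adsl_r <- read_xpt("pilot3/adsl.xpt")

# Report any differences in variables, attributes, or values,
# matching rows on the unique subject identifier
diffdf(base = adsl_pilot, compare = adsl_r, keys = "USUBJID")
```

If the two datasets match, `diffdf()` reports no issues; otherwise it itemizes the differing variables and rows, which is what makes it useful as a validation gate before re-running the Pilot 1 TLG programs.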
Thank you, that was a wonderfully coherent, detailed summary. But off the top of your head, are you planning to put any of that on the website?

Yeah, absolutely. I'll work out a plan to get that up — sorry, this is all kind of new to me in the consortium — but I'll make sure to work on getting it up there.

And a list of who's involved would be helpful too.

Absolutely. And actually, some of our volunteers asked — and this would be up to you — they noted that there is this R Consortium meeting on Fridays, though only Thomas, myself, and Lei, who's also part of the pilot, are on here. They asked if they could join this meeting to listen in. Is that okay with the group? Otherwise we could just keep Thomas and myself on, and we can relay any messages back to them.

No, it's perfectly fine to have people listen in. It's awkward in that I have to hand-edit the invitation list, but I can do that — if you send me a list of emails, I can add them by next time.

That would be fun. Okay, sounds good. Sorry to add work for you — I don't want you to have to do that.

We want to be as inclusive as possible.

Appreciate it.

Yeah, I have a question. I'm not sure if we have presented the whole package concept to the FDA folks already for Pilot 3 — whether we want to get some agreement on that before we proceed. We may want to package, from the SDTM to the ADaMs and then the final TLGs, together with the readable programs. Have we done that yet, as far as discussing what we plan to deliver?

Right, I don't believe we've done that yet — we may have highlighted it in the proposal. Again, my initial thought was just to resubmit what was done for Pilot 1 in that eCTD to the FDA, where the only difference is the ADaMs being generated in R as the source, rather than taken from the CDISC pilot data.
And to my knowledge — since we're using the CDISC pilot data, the only raw data we have there is just the SDTM, plus the documentation.

Maybe — since Paul and Hye Soo are actually on the call — let me just ask: from Joel's description, if we resubmit Pilot 1 but generate the ADaMs using R, do you have any concerns or questions in terms of the scope?

I don't think so. Pilot 1 really just dealt with, essentially, analysis datasets, correct? So are we proposing to expand this to do SDTM, and then traceability from SDTM to an ADaM dataset?

Exactly. Thanks, Ning, for posing that question to Paul. Paul, essentially, Pilot 1 was just the generation of TLGs using R, where the source datasets came directly from the CDISC pilot data. In this Pilot 3 case, it will be the same package that was given to you for Pilot 1, but now the development of the ADaMs would be done in R instead. So we'll most likely also give you readable code in R for these ADaM datasets for your review, whereas in Pilot 1 that wasn't given. Right, Ning?

Yeah.

If I may ask a question — we were talking at one stage, and I may be confused, so set me straight: we were talking about Pilot 3 being the container option. I assume this is not the container option?

Yeah, we pushed containers back to Pilot 4.

Okay. I missed last month's meeting, so that could be it. That sounds reasonable. Being able to do the tracing, and starting from one place to actually develop — I think that would certainly be helpful. Quick question: which packages are you thinking of using?

Yeah, I can share my screen with the list. Sorry, my internet keeps going in and out.

No worries. Okay.
So you're thinking of using the tidyverse rather than data.table? Right. Okay — so metacore, admiral, xportr, metatools, tidyverse. I think that'll be fine, as long as we know which version of R, and which package versions, would need to be set up a priori.

Yeah, and that was part of our kickoff as well — ensuring that everyone is running in the same environment. We'll make sure to provide those details in the submission.

And I assume there won't be any issues with the size of the datasets, et cetera — this will be something that can be run on a standard machine; it won't require any special computing facilities, I assume?

No, we're hoping not — we're planning to use the same tools as were used in Pilot 1.

Okay, that sounds reasonable. Thank you.

And Paul — I think the reason we want to provide the traceability from SDTM to ADaM is that we think FDA might want to do some testing on our package. If we have everything ready in a whole package, it's easier for you to grab the datasets and run them. Is that assumption correct — that you'd want to do some testing on the package?

We could. Just to clarify a question I received: you were going to do XPT format files for the datasets, and then would presumably use haven to import them, or something like that, for the traceability? I've never heard of haven before.

Oh, I'm very familiar with haven. It's basically the package by Hadley Wickham for importing SAS datasets directly into R. It's part of the tidyverse, I think.

Gotcha. We can look into that — it was more of a question. So we'll start with the SDTM XPTs and finish with the ADaM XPTs?

I think that's the plan, yeah.

Okay. That sounds good.
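The XPT round trip just discussed — reading SDTM transport files into R and writing ADaM transport files back out — can be sketched with haven roughly as follows. This is an illustration under assumed file names, not the pilot's actual code; in practice the derivation in the middle would be done with {admiral}.

```r
# Read a SAS transport (XPT) file into R and write one back out.
# File paths are hypothetical.
library(haven)

# SDTM Demographics domain, read in as a tibble
dm <- read_xpt("sdtm/dm.xpt")

# ... ADaM derivations in R would happen here ...

# Write the result back to transport format; version 5 is the
# variant conventionally used for regulatory submissions
write_xpt(dm, "adam/adsl.xpt", version = 5)
```

Because haven preserves variable labels and formats as column attributes, the metadata needed for traceability survives the round trip.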
How should I say — and you can confirm this — a couple of years ago Lee had expressed a concern with the output. Presumably that's been addressed and solved?

I'll have to take a look at the issue tracker for haven; it's been a little while. haven, for those who may not be aware, actually wraps a C library (ReadStat) written by someone else, which essentially reverse-engineers the sas7bdat format to get it into R. I know there have been some esoteric issues with certain use cases, and I think there's been active development on the C library side to shore some of that up. I'd definitely like to pursue that further, because honestly that's one of the biggest pain points for us even internally — dealing with situations where haven doesn't work by default. That's another story for another day, but I'm trying to keep close tabs on it.

Okay — so XPT is fine, but sas7bdat is what's causing problems?

That's my impression. Most of the issues that get identified are with the sas7bdat format; I think XPT has been a solved problem for quite a while now.

Okay, so the issues still open are on the sas7bdat side. That won't be an issue on our end, I don't believe. And would delivering just the ADaM code and XPTs be sufficient?

Let's see. I want to say yes, but I think we'd want to see a written proposal.

Yeah, why don't you submit a written proposal so we can review it and then give you any feedback, concerns, or thoughts.

Yeah, that would be the best way to lock things down — making sure there's the appropriate level of detail so everybody understands what's going to happen.

Sounds good. So essentially, this would be a proposal on how, once the data gets to you, Paul, you'd be able to unpack the package and run it on your end.
Am I understanding that correctly? Okay. Yep, got it. In that case, we'll go back to the team, assess how those steps would play out, and ensure that the proposal provides — not too many details, but some instruction, a high-level overview, of how those steps would look. Thanks for the feedback.

Do you think you could have that by the 12/1 meeting? It would be nice to go out the year with it — or if not, January wouldn't be bad either. We should have a date for that proposal.

Gotcha. Yeah, let's say four weeks. We can shoot for that; I think that's ample time. Thanks, Joe.

And then on to your question, Lee.

Yeah, thanks, Joe; thanks, Paul. Let me take this opportunity to ask a question. It's different from the Pilot 3 purpose, but in general, I'm not sure what FDA's point of view is on a filing with, say, datasets created in SAS but TLGs created in R. Is that possible? Is there any chance you'd accept this kind of mixed-package filing, or does it always depend on the particular filing team?

To some extent it will depend on the filing team, but officially FDA is software agnostic. The reality is, individual reviewers may do some of what you have just outlined. In fact, we have had analyses done in SAS, and then the graphs for the label that actually went out were done in R — or were made in R, I should say. So there are actual examples from our end of those types of things. And my understanding is that if you were to open the package insert, you would see some of those graphs.

So obviously there are lots of ways to mix and match multiple languages — different cutoff points. Is there any particular boundary line that raises concerns about traceability?
You know, when you have the data generated one way and the analysis done another way, you have to really be sure you're doing the data translation right — which I suppose is not that hard, but conceptually, are there any difficulties?

I think we do ask that sponsors use relatively standardized software that we can follow and that is available — something like haven would fit that. What should I say — one of my colleagues once made fun of a sponsor, saying they were so unaware they tried to use Excel. I'm not saying I'm the one who said that. On the other hand, our colleagues in CDRH will accept Excel submissions in some cases, because they're dealing with a much wider range of companies and products than we do on the drug side.

So I would say most standardized statistical software tools could be employed. We would prefer not to have a whole collage of tools, just because it makes things more challenging to trace. But definitely, if someone were to use an office productivity tool rather than a statistical tool, we would have some concerns.

Especially if that data file has a date column in it — or a column that gets transformed to dates, as we've seen in the past. I'm kidding.

Yeah. But that is the one case where I know of someone actually saying something along those lines — that it was not traceable. In general, we would want something that's traceable, which typically means using a scripted language as opposed to a GUI.

Okay, we're coming up on ten minutes to the hour. Does anybody else have an update they'd like to provide?

Hye Soo and I will present at R/Pharma next week about Pilot 1.

Most excellent. I hope there are thousands of people there.

Last I heard there are over 1,500 registered, so we'll see how much of that translates into showing up. If last year is any indication, we're going to have a very good turnout at this year's event.
When is your talk?

It's Wednesday, I think. Yes, Wednesday, 11 a.m. Eastern time.

All right, we had a really thorough discussion today, we have lots of action items, and we're getting ever so close to this Pilot 2 submission — it looks like it's really going to happen. It's going to be nice to sit around the Thanksgiving dinner table knowing it was done. Let's see if we can do that.

I was checking my notes — Pilot 1's submission day was November 22.

There you go. So we have some tradition here — a somewhat unplanned tradition; things line up serendipitously.

All right, so, Joel, now you know when Pilot 3 is going to be.

All right, well, we'll keep the tradition going, I guess. Hopefully, yeah.

Well, thank you, everyone. Unless there's anything else, let's get out five minutes early. Thank you. Thank you.