But let's start today. So I think, Eric, why don't you start it off with where you think we are. And then we have Paul and Hesu here. Yeah, thanks Joe. I think, yeah, where we left off in the last meeting, it looked like Paul and Hesu, you had some comments you wanted to share about the Pilot 2 submission that you referenced a little earlier in the year. But if you have more detailed comments on how we can improve things, or on the overall process, we'd love to hear them. Sure, yeah, I have some updates. I talked to Paul yesterday about what we need to bring up. Maybe I can share my screen and show some things here. Sure. So one of the issues that we identified is, I believe we agreed that the Kaplan-Meier plot is interactive and the other tables are static. But depending on the filter on the KM plot, for example, if we remove the placebo arm, it changes the other tables. It's affecting the other tables as well. As you can see with the placebo filtered out, it's giving an error because the number of columns provided in the header string does not match the data. And even when it does get to data, it's showing this. So we identified that depending on the filter in the Kaplan-Meier plot, the results of the other tables are changing. So maybe that's the thing to be corrected. Yep, I will definitely work on correcting that. Thank you for letting me know. Sure. And then maybe a question from here. Sorry, I feel like that may be intended behavior in the teal framework, because people oftentimes want to compare different tables using the same filter if they're looking for a specific subpopulation. I think when they designed this framework, basically there is a filter at every table, and if you want to look at the same subpopulation, then you don't need to do the click-through. But in this case, since we don't have the filter for the other tables, I feel like this behavior doesn't really make sense here. Yeah, I'll have to see if there's an easy way to turn that off for the other tables, because I agree, that's definitely the default that I've seen in my experimentation. So I'll have to consult the docs, so to speak, for teal, and see if I can find that. Let's take a step back. We disabled the interactive plots for almost everything as a kind of special request for this pilot. But Hesu, if you were to have everything enabled, do you think the feature that Ning just elaborated on makes sense? Is that the way you would want things to go? Maybe Paul can chime in. But as Paul mentioned, we don't want to endorse subgroup analyses where the p-value changes depending on the filtering, things like that. So Paul, do you have any more comments on that? Teal, as was pointed out at the R/Pharma conference, has some caveats associated with it, and we do need to be careful with subgroup analyses. I think we touched on that previously. We have no issue with pre-specified analyses, but we do have concerns about exploratory analyses suddenly being thought of as significant in some sense. Maybe to follow up with a question: in the future, if we have a Shiny app where we only deliver pre-specified analyses, and there are maybe two graphs and a filter on the right-hand side where you can choose the ITT or a pre-defined subgroup population, in that scenario, would you prefer to have the filtering carried across different tabs, or would you prefer to do the click-through from tab to tab? It's hard to say.
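To make the filter-scoping question concrete, here is a minimal sketch in plain Shiny, not teal's actual API and with a mocked-up ADSL-style dataset, of module-scoped filtering: each table module owns its own filter, so removing an arm in one view cannot ripple into the others, which is the opposite of teal's default shared filter panel that Hesu observed.

```r
# A minimal sketch (plain Shiny, not teal's API; the data is mocked):
# each module keeps a local treatment-arm filter.
library(shiny)

tableModuleUI <- function(id, arms) {
  ns <- NS(id)
  tagList(
    checkboxGroupInput(ns("arm"), "Treatment arm", choices = arms, selected = arms),
    tableOutput(ns("tbl"))
  )
}

tableModuleServer <- function(id, data) {
  moduleServer(id, function(input, output, session) {
    output$tbl <- renderTable({
      # This filter is local to the module instance
      data[data$ARM %in% input$arm, , drop = FALSE]
    })
  })
}

adsl <- data.frame(USUBJID = 1:6, ARM = rep(c("Placebo", "Drug A"), 3))

ui <- fluidPage(
  h4("Demographics"), tableModuleUI("demo", unique(adsl$ARM)),
  h4("Efficacy"),     tableModuleUI("eff",  unique(adsl$ARM))
)

server <- function(input, output, session) {
  tableModuleServer("demo", adsl)
  tableModuleServer("eff",  adsl)
}

shinyApp(ui, server)
```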
Part of it is, I guess, that we need to distinguish between exploratory analyses and ones that are used for inferential statistics. The problem with exploratory analyses that produce a p-value is there's an inherent crossing of the line between exploratory and inferential in those cases. And that's the real concern. There have been controversies in the recent past that boil down to focusing on one subgroup in one study as evidence for approval. So our concern is distinguishing between exploring data and using it for inferential decision-making. Well, Paul, if I hear you correctly, are you saying that you would never want an exploratory capability in something that goes through as a submission? No. I'm saying I want nothing interactive and exploratory that produces a p-value. So if we hid all the p-values, would that work? We can have all these graphs and no p-values? That was our initial response. Our management wanted static. It still is. All right. So I see you're fighting for some wiggle room here. But I think we could just not produce p-values anywhere. And then could we have all the plots interactive? Paul, before you answer that, can I just add one thing to Joe's comment there? Does that mean that we also shouldn't have confidence intervals? Effectively, it's the same information in one way or another. It's just not emotionally charged the same way. That's true. Yeah. I'd say there should be caveats listed. Here we could have a banner: all of this is exploratory. Right? I mean, yeah. So I think everything has to be viewed as exploratory from that perspective. A confidence interval, as Joe was saying, does not carry the same charge. Essentially, we're not doing a Bonferroni or a Holm-Hochberg adjustment either on p-values or on confidence intervals, right? But if we were to do this for an actual inferential process, we would want such a thing to be considered. Maybe another question. You mentioned that if it's pre-specified, then it is OK. So I'm thinking, in a scenario where we have the ITT and a subpopulation as co-primary endpoints, or as primary and secondary endpoints, in that case the subpopulation analysis is reasonable, right? But any undefined subpopulation analysis is questionable. Correct. There are defined subpopulation analyses based on age, sex, ethnicity, race. So there are some demographic ones that are pretty much required by law. You'll notice in many cases they might even do confidence intervals, but they won't produce p-values in what we see as part of a submission. The issue is that saying, we'll do a subgroup analysis, can essentially be a form of p-hacking. And there's a great deal of sensitivity to that within our group currently. So I guess, to move forward, my take is that for this particular submission, if Eric can figure out how to break the dependencies between the different tables, it should be good for us. But for the future, if we want to further explore the usability of interactive tools, it will be safer, or less controversial, if we limit any interactivity there to pre-specified subpopulation analyses. It depends on how you bundle it. So I'm thinking of the line from Spider-Man: with great power comes great responsibility. And that's a concern. When we have powerful tools, we have to endeavor to ensure that they're used appropriately. Look, can I add something? Go ahead. Thank you.
Looking at this sort of exploratory analysis, I don't consider any subgroup analyses by age, race, sex, and all those that have been specified in the SAP to be exploratory. Exploratory would be looking beyond that. So if they are in the SAP, I guess there's less concern about producing confidence intervals or perhaps even p-values. That said, often p-values are not presented with those subgroup analyses. But the issue here is, if you go beyond what the SAP states, isn't that what this application should be dealing with? For one principal reason, I would say: so that you don't end up with conflicting results, or results different from what you submit in the main dossier. I'm not sure I entirely parsed what you were saying. So you're proposing, if I understand you correctly, that if it's done on the basis of demographic information, that should be OK, but if it's done on the basis of other information, that would be a different sort of issue that would have to be discussed and resolved. I think the demographic analysis is part of the SAP. So my distinction is, has it been pre-specified in the SAP, perhaps in the protocol as well, or not? If it hasn't, then for me it branches out into exploratory. So what's not pre-specified is, I'm assuming, what we intend this application to deal with. Not entirely sure. I think if it's pre-specified in the statistical analysis plan, we have no objections. Or at least I have no objections. But on a positive note, Eric has already fixed the problem. So my suggestion is, if that's the case, we can declare victory and move on. Yeah, it turned out there was an extra parameter that I had not changed from the default in the rest of the modules. I'm running through my battery of tests now, but it seems to have solved it. So at least for this case, we have a path forward. But I think the discussion has certainly been enlightening for the future, because this particular topic is arguably one of the largest, I should say one of the most important, things we should think about going forward for the feasibility of interactive applications in general, whether Shiny or not, and how they fit into a submission process. So I think it's good we're going through it. I'm just, yeah, lucky the solution wasn't quite as difficult as I thought it was going to be. Yeah, so if we can go forward with that, I would prefer, if possible, to defer a complete resolution of the broader issue of what should be interactive and what shouldn't. My feeling is that any time we do inferential statistics, it needs to be pre-specified. Paul, let me poke at this a little bit, because I think what we're bumping up against are the internal procedures of the FDA. For example, if we did not submit a Shiny app, but your analysts happened to have Shiny on their workstations, then anybody could load the data in and do inferential analysis, independent of the convenience, say, of us submitting the Shiny app and having the app in the submission. So it seems to me you're saying that whether you're going to do an inferential analysis or an exploratory one is part of the intention of the analyst, and that when you have one intention versus the other is somewhat codified within the FDA. I would even defer to the ASA statement on p-values, or the special edition of The American Statistician on p-values. I mean, there's extensive discussion of various circumstances.
What we don't want is to somehow be seen as endorsing a common practice, or semi-common practice, in certain groups of, quote unquote, rescuing a, and I would again use air quotes, failed clinical trial by torturing the poor data. Yeah. So we want to avoid that issue altogether. To some extent, one can see that even now with tools such as JMP that clinicians and others can use, which are interactive but do not produce a fully open and transparent analysis trail. When I was a reviewer and I had to deal with clinicians who said, I have this result in JMP, I would say, well, OK, did you enable some sort of journaling or ability to replicate your results? Usually the answer was no. And then we had to say, you're planning to submit this to the Journal of Irreproducible Results, right? So that's the problem. With any interactive tool, we want it to be open, we want it to be transparent, and we want it to be reproducible. Beyond that, we also want the interactive tools that we use to be consistent with good statistical practices. And I would argue that an interactive tool that produces p-values on demand is not consistent with the ASA statement regarding p-values. I mean, there are some divergent opinions on that as well. But let's hold that thought. So what you're saying, Paul, first off, is that the ASA is a pretty good foundation for us to reason about what might be consistent with FDA practices. So that's a good foundation to start with. Yes. Yes. Considering that our office's deputy director is the current ASA president, and a past president was Lisa LaVange, who was my supervisor at one stage. And I think it was under her tenure that the statement on p-values came out. And there's room for discussion and interpretation within that. But one of the issues it attempts to partially address is the so-called reproducibility crisis: there have been various papers saying that some proportion of published research is essentially irreproducible. Yes. That is an industry problem. Let me explore a little bit more. Suppose, though, and this is a bit of a fantasy, that every time you went through a path of exploratory analysis in Shiny, it generated code to reproduce that. Now, theoretically, you would have these reproducible paths. It might be unwieldy to use, but you would have that ability. Isn't that baked into Joe Cheng's... shinymeta? Shinymeta, yeah. We've been thinking about that a lot in our company. You may recall, when we were discussing this earlier, I actually suggested that as a potential approach that ideally could be utilized. Would you say we should explore that in the future, that is, incorporating shinymeta? I think that would be a great idea. But that's me. I mean, that is an opinion of Paul, not the voice of FDA. Right. I think we can all agree that being able to take an insight that a reviewer, or frankly even those of us on the sponsor side, generates with a powerful tool like this, and get a reproducible script that runs that same interaction in code, is hugely important. I would definitely agree with that. I would hope that every tool has such a capacity. Me too. Now, I will say, and I don't want to go off on too much of a tangent, since I'm sure we have a lot more to talk about, but shinymeta is the most promising package in this space. It's also one of the more difficult ones, from a developer perspective, to kind of shift an existing app to use.
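For readers unfamiliar with it, here is a minimal sketch of the shinymeta pattern being discussed, on a toy dataset rather than anything from the pilot: meta-reactives record the logic as it runs, and expandChain() turns an interactive session into a standalone, reproducible script.

```r
# A minimal sketch of shinymeta's record-and-replay idea (toy data, not the
# pilot app): ..() splices current input values so expandChain() can emit a
# reproducible script for what the user did interactively.
library(shiny)
library(shinymeta)

ui <- fluidPage(
  selectInput("var", "Variable", choices = names(mtcars)),
  plotOutput("hist"),
  h4("Reproducible code for the plot above:"),
  verbatimTextOutput("code")
)

server <- function(input, output, session) {
  # A meta-reactive captures both the value and the code that produced it
  selected <- metaReactive({
    mtcars[[..(input$var)]]
  })
  output$hist <- metaRender(renderPlot, {
    hist(..(selected()), main = ..(input$var), xlab = ..(input$var))
  })
  # Expand the dependency chain into a standalone script
  output$code <- renderPrint({
    expandChain(output$hist())
  })
}

shinyApp(ui, server)
```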
I'm hopeful it gets easier to use, but I've definitely been talking to the Shiny team about its potential and hopefully about improving it. We're dealing with this in our internal groups. It's relatively easy to design a prototype Shiny app for many purposes. I mean, yes, even basic ones can involve a little bit of, shall we say, Anglo-Saxon terminology during the coding phase, but one of the key things is that Shiny makes it a lot easier, and more accessible to a much broader group, to develop apps and dashboards and the like. Moving into production, I think, is a greater challenge. Absolutely. And going from prototype to production, I think there's even a book out there on that. Yeah. Or workshops as well, perhaps, yes. Yes, workshops and books, and that's a major issue. And in some ways, reactive programming and all of that, and correct me if I'm wrong, Eric, you're much more knowledgeable than I am, I see as part of one entire challenge. Oh yeah, I'm in full agreement on that. And there's a lot I can say about that space, but there are definitely some venues I hope to spread my opinions across as we go forward this year. So, yeah, I would argue that for various analyses, particularly if they will potentially be used for regulatory decision-making, it would be, I would say, essential that they have some sort of openness and transparency capability. Yeah, I think shinymeta is definitely one path. There could be others. Yeah, and thanks to Sam in the chat for, I think, referencing one of teal's potential solutions to this. I'm not familiar with qenv, or however you'd like to pronounce it, so I'd like to learn more about that. I don't know if Ning or others could share. I feel like the teal team has been making pretty good progress on that. I'm hoping with the latest release, maybe we can take a look offline. All right, we're running out of time. So, Hesu. Yeah, we can move on to the next one. And then the warning message. So I found a constant warning message with this one. I can still run the R Shiny app with this warning message, shown below here, but it recommends using CRAN or a more curated repository for sourcing packages, rather than, well, I don't know what this is. Yeah, I believe that's for teal specifically. Not to defer to Ning, but I think those were the installation instructions that I used from the GitHub repos, to use that specific version. And I think the teal team is also in the process of submitting the package to CRAN, but let me double-check. Yeah, if it's on CRAN in the future, it should be easier, I think. Yeah, that would be great if it's on there. Our security folks are starting to make noises that they're going to crack down on packages, and we've been arguing that CRAN is curated: it's gone through the checks for malware, etc., so we should not be worried about CRAN installations. And we can even list an Oak Ridge site as one of the mirrors, for example, that one would want to use. We're not saying CRAN is the only option, since we have other discussions, but some sort of curated, maintained, updated repository, with at least some guarantee that it's been checked for malware and other types of things, would be appropriate. Right. Our R Validation Hub is looking into standing up a repository precisely for doing submissions and this kind of thing.
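The warning in question generally comes from installing straight off GitHub. A hedged sketch of the curated-repository alternative, assuming the packages eventually reach CRAN, which they had not at the time of this discussion, might look like:

```r
# A sketch of pinning installs to a curated repository. At the time of this
# discussion teal was GitHub-only, so the CRAN line is aspirational.
options(repos = c(CRAN = "https://cran.r-project.org"))
install.packages("teal")

# The GitHub route that triggers the warning Hesu saw:
# remotes::install_github("insightsengineering/teal")
```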
So, yeah, that's part of the reason why we were trying to come up with the idea of a curated repository. Yes. So I think if we can include those sorts of dependencies, in the long term that's something that would help us with our security obligations. Great. And the next point is that filters can be conflicting, I guess. So there are two datasets, the ADSL and ADTTE datasets, and there are multiple treatment variables like TRT01P and TRT01A, and here also several different treatment variables. And I was just playing around with the filters, and if I choose to filter on two different datasets, and I exclude the placebo here, I can still include placebo there. And if I exclude both of those, it gives us an error. So the user is going to know it's giving an error and something is wrong, but maybe it might be good to provide a way to deal with this error. For example, if excluding the placebo gives us an error, a message saying that the treatment filter should be consistent across tables, something like that. A warning message instead of just an error message. Yeah. I mean, that's the easier fix from my perspective, instead of the cryptic error message: we say there could be an issue with your applied filters, or something to that effect (see the sketch after this exchange). I'd have to double-check within teal itself, when you have multiple datasets filtered, what the possible linkages between the two are, and whether there's any additional help I can surface from that. I'm not sure, but I will definitely research it. Yeah. Or is there a way to add a condition? Like, a condition that if you exclude the placebo, there is no way to include placebo in the other treatment variables, something like that. Yeah, I'll have to play with it a bit myself to see, but certainly if Ning or others know of a solution offhand, I'd love to hear it. Yeah. So here, these are the error messages when we choose conflicting filters. The other thing is that Paul and I talked about maybe providing a README file, which might be useful for the user or the reviewer, with a list of all the potential error messages, like what you noted here. So when you run this code, you can get the error message, but it's not going to prevent you from running the R Shiny app. So maybe provide a README file that shows all the potential error messages, or all the guidance on what you need to do to run the app properly. Would you prefer that be in the existing ADRG, or in the app itself we could have what you might call a welcome or home page before you get to the actual modules? Would you like something in that, or a combination of the two? It doesn't really matter. Yeah, in some ways it might be good to replicate some of that information in the app, just because of access to the ADRG; having the app be kind of standalone would be ideal. If it is also decided to replicate that in the ADRG, I don't think we have objections to that. Okay, my initial thinking is we're going to have a separate tab called usage guide, or something to that effect, in the app itself, and that should hopefully take care of it for the most part. Okay. Does that satisfy your concern, Hesu? Yeah, yeah. Okay. I think that's all so far. Yeah, those are all the issues that I identified. Other than that, it seems good, but I have to explore more, I guess. So those are the potential issues that we identified. Thank you, Hesu.
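One hedged sketch of the friendlier message Eric describes, in plain Shiny with mocked ADSL/ADTTE data rather than teal's internals, is to catch the empty intersection with validate()/need() before rendering:

```r
# A minimal sketch (plain Shiny, mocked data): replace the cryptic error with
# a readable message when treatment filters conflict across datasets.
library(shiny)

adsl  <- data.frame(USUBJID = 1:4, TRT01P = rep(c("Placebo", "Drug A"), 2))
adtte <- data.frame(USUBJID = 1:4, TRT01A = rep(c("Placebo", "Drug A"), 2))

ui <- fluidPage(
  checkboxGroupInput("arm_adsl",  "ADSL: TRT01P",
                     c("Placebo", "Drug A"), selected = c("Placebo", "Drug A")),
  checkboxGroupInput("arm_adtte", "ADTTE: TRT01A",
                     c("Placebo", "Drug A"), selected = c("Placebo", "Drug A")),
  tableOutput("tbl")
)

server <- function(input, output, session) {
  output$tbl <- renderTable({
    joined <- merge(
      adsl[adsl$TRT01P %in% input$arm_adsl, ],
      adtte[adtte$TRT01A %in% input$arm_adtte, ],
      by = "USUBJID"
    )
    # Friendly validation message instead of a downstream failure
    validate(need(
      nrow(joined) > 0,
      "There could be an issue with your applied filters: the treatment filters should be consistent across datasets."
    ))
    joined
  })
}

shinyApp(ui, server)
```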
Maybe one quick question. I think some of the points you mention are easy to address. For example, for the CRAN one, the warning message about retrieving packages from GitHub: if we couldn't resolve that in the next month or so, is that okay? Just to wrap this up, maybe you can put a recommendation in your written response, saying that in the future you recommend putting the packages on CRAN or something like that, and then we wrap it up with that item uncompleted. I'm just thinking that for the warning message you mentioned, where it calls out that we are retrieving packages from GitHub instead of CRAN, if we couldn't submit to CRAN fast enough, then maybe you can just put a recommendation in your response instead of us waiting for the CRAN submission. Yeah, I think so, that makes sense. Because the error message is really just a warning message; I can still run the R Shiny app without any issues. So I think so, yeah. Thanks. Do you think so, Paul? I think that's appropriate for a prototype. If we were in non-pilot production, it might not be quite as well regarded. Yeah, that makes sense to me. And obviously these pilots are illuminating these kinds of issues, so we're getting a lot of valuable input from seeing this in action, from what you are seeing on your installation side. So the good news is I fixed the major filtering issue in the application. Now I'll work on the usage guide, and hopefully on making those error messages either hidden or at least accompanied by a much clearer explanation of why they're occurring. So this has been extremely helpful. Are we recommending that Hesu pause her evaluation until we fix the things we've identified? What's the practical next step forward? Hmm. Some of you are frozen. No, I was thinking, if there were other issues that don't depend on what we just discussed, I'd like to hear about them sooner rather than later. I don't think we have to hold up the rest of the evaluation for fixing these. Okay. That's my opinion. What do you think, Paul, Hesu? Well, if we can get an updated package with those fixes, it would certainly be helpful, because our official response has to go through approval before it can be released. Okay. So if we can get, you know, a semi-formalized endpoint there, with, this is the final app, then we could write our response off that particular one. That's helpful, so that people don't have to review responses multiple times. Sure. Yeah. It makes complete sense to me. So I think we'll have to coordinate, much like we did in the first transfer. Offhand I can't remember who we were working with on the FDA side for the actual eCTD transfer, but we'll have to coordinate something like that again. Yeah. With some help getting that lined up. But I certainly have enough information right now to work on everything we talked about today, at least. Right. Hopefully we'll get that. And we found out the hard way that basically we have to initiate a pull for a new submission. So with non-prototypes, once it comes in, it gets populated and it's pushed out to us; with prototypes, in the different type of environment, we have to say, we're interested in this, we need to have it pulled in. Okay. I think that's the final resolution we reached in an email exchange. If I recall that correctly, is that mostly right? Yeah. Okay. Yeah. So we're going to do pilot two, take two.
If I recall correctly, pilot one also had a similar situation. Yeah. It's par for the course then, isn't it? So it does suggest that. Yeah. Well, one thing for the future is that iterations may be needed, and an early submission would be helpful for iterations. Yeah. We were trying to be somewhere in the middle by having that hosted version of the app that you all could look at on shinyapps.io, as a way for you to see it in near real time, to look at what we were doing in a staged fashion. But I do understand that you need the more, quote unquote, formal transfer process to actually do your evaluation. So we were trying to mix it together a little bit. And then things have evolved even since what we looked at initially. You bet. Absolutely. But my first pass will be to deploy a new version of the application. So we're going to have to do that, make sure it works amongst our team here, and then we'll send that link to you again, just in case you want to poke at it before the actual transfer. Okay. We'll take it from there. Yeah. Well, I have one more question. Do you expect that the work you alluded to by the security team at the FDA will result in a formal document of some sort? They might. One of the groups I'm involved with, the scientific computing board, is in discussions with our office of information management and technology, and there's some discussion of trying to set up an internal working group to address some of those issues. I don't think R is the main problem. I think it's actually Python. To my ears as well. Well, I'm not an expert on Python, but I think essentially the equivalent of their mirrors was corrupted by malware in the past. That is true. It's a much bigger attack vector than CRAN has ever been and probably ever will be. So there needs to be a lot more scrutiny around those things. So my solution is just to say, hey, with R, CRAN you can trust; Python, not so much. But not everyone agrees with that approach. Right. It'll be short-lived anyway; they'll catch up. So we hope. Let's see. We probably need to get going. We have another meeting to head off to. Yeah. Let's see. One thing we might want to flag is that we're getting multiple groups approaching us now about using R, in some sense validation issues, etc. We would prefer not to answer that piecemeal. Basically, we might want to set up some sort of way of trying to resolve these across the entire spectrum. I know we want to be decentralized, but it seems like there's a lot of replication and duplication of effort out there. And, I don't know, maybe that's something, since we're at our time, that could be discussed at our next meeting. Yeah, I like that. I do. The validation issues. Well, also, you know, who's going to do what. We've been pinged by TransCelerate, we're being pinged by PHUSE, we've been pinged by pharmaverse. We would like to, or at least I would like to, have a coherent message. And probably we need to broaden our base within FDA. I've reached out to John Scott in CBER, so while right now we've had a CDER-based discussion, since we're using the same gateway, we would at some stage like to broaden it and bring in CBER as well. All right. For next time, I will ask somebody from the R Validation Hub to come. It would be good to have them in the discussion.
We could also invite people from PHUSE, if you know who they are, if anybody can make recommendations. Okay. I think maybe we can have a discussion offline. Sure. We can carry that forward and try to come up with an appropriate umbrella. Excellent. Okay. All right, so Paul, let me know when you have time to talk and I'll make myself available. Today's pretty busy, but let's touch base next week, perhaps. Okay. Thank you everyone. Paul, I think everyone is waiting for us. Okay. Bye-bye. Thank you, everybody. Sorry for the interruption getting started, but you're all very resilient. Okay, award for coding under pressure. Take care. Talk with you later. It happens. I've been through it. I was going to say, for those that are still on, if any of you are attending PHUSE Connect next week, come say hi. I'll be going over Monday.