Welcome everyone to this R adoption series webinar and discussion. As is usually the case, people will be arriving over the next couple of minutes, so we'll have a slow start and introduction, and we'll get going with the main presentation from Gabe in about four or five minutes. In the meantime, I'm going to run through a few introductory slides and set up the plan for the session. Hopefully everyone can see my slides now. First of all, this is an R Consortium-led webinar focused on the R adoption series for the pharmaceutical industry; the R Consortium has a much broader mission and vision beyond pharma. If you're interested in learning more about the R Consortium, you can go to its website. I imagine most of you found this webinar through the website, so you're probably familiar with it, but please do check out some of the working groups within the R Consortium, such as the RTRS working group, which Gabe is representing today, and the R Validation Hub, which I represent. There are several others that are relevant to this industry and beyond. For those joining for the first time, this webinar series is aimed at people who are leading R adoption initiatives. That might be heads of programming, data science, or statistics, or enthusiasts trying to promote R within their organizations, but it's very much pitched at people trying to drive change and roll out R where it's perhaps not the go-to language today. If you haven't attended any of the other presentations in the series, they're all available.
You can go to the R Consortium website, go to webinars, and find links to the previous videos there. The idea is that we focus on how to, not why R. We won't spend much time in these presentations and discussions on why we should be using R; the assumption is that you're already at that stage. In previous webinars we've covered topics like training and validation — things that are potential barriers to adoption. That's the focus. We always have a key presentation from industry to kick off, and Gabe will be giving today's. That's normally followed by a focused discussion. Today's session is an hour and a half, so Gabe will present the first part and then we'll switch over to discussion rooms for the second. For today's session, Gabe, a statistical computing consultant, is going to present on rtables and, more generally, a framework for table creation within R. I'll hand over to him momentarily to fully introduce the topic and explain more about what he's going to talk about. We'll follow with discussion rooms — this is the first time we've attempted to use the Hopin platform for this, so bear with us through any technical difficulties — and we'll split into two rooms. Gabe will lead one on skills evolution. Today's theme is tables, and one of the challenges when moving to an R-based reporting tool set — and you'll see a very advanced tool set in Gabe's presentation — is that you need a different set of skills to build and maintain it.
So Gabe's session will look at what that future might be — statistics and programming, data science — and what kinds of skills we need to build and support these frameworks. I'll lead a session on multifaceted reporting: with the theme of tables, where do we go next? What are the next steps within industry? I'll briefly cover concepts like analysis results data, and then we'll have a discussion around that. So without further ado, I'll hand over to Gabe to properly introduce himself and his talk, and I'll see some of you again for the discussion a bit later. Hi, everyone. Thanks, Andy, for the introduction and for facilitating this larger series. As Andy said, today I'm going to be talking about reporting-table generation with R and rtables — both the package itself and as a larger case study in the place for innovation within production-focused arenas such as generating the tables required for clinical trial submissions. First, a little bit about me. I'm the primary developer of the modern version of what's now called rtables, and I'm a statistical computing consultant. I have a PhD from UC Davis in statistical computing, and I'm a frequent collaborator with the R Core Team, having helped put multiple new features into the R language. Can people hear me okay? Okay, I'm hearing from some people that they can, so I'll keep going. As Andy mentioned, I'm also a member of the R Tables for Regulatory Submissions, or RTRS, working group, and it's with that hat on that I'm here for this particular meeting.
The RTRS group is an R Consortium formal working group whose members represent multiple pharma companies, a wide variety of table-package authors, the US FDA, as well as RStudio. There is ongoing work in that group to assess the full feature space of table features required within the pharma industry, particularly for regulatory submission, and we're collectively authoring a state-of-the-field literature review discussing how the various tooling available in R right now can meet those needs. In connection with that review, we have an open call for what we're calling difficult tables: tables where existing tooling leaves something to be desired when you try to create them with the table packages in R. If you have any table archetypes that you've found particularly challenging, generally or specifically when creating them in R, we would love to hear from you. Under the R Consortium GitHub organization there's the RTRS WG repository; please file an issue there describing the tables, and the table features in particular, that have been challenging for you or your SPAs or whoever is actually doing the work. With that, we'll move into the first section of this talk. The talk has essentially three sections. First, the what, where I describe rtables. Then I'll pivot a little and discuss the why: why was it valuable to invest quite a bit of effort into developing rtables? And finally I'll discuss the how: how we were successful, and what needed to be in place in the organization for this to ultimately work. So, rtables is an R package that is purpose-built for creating what are called reporting tables — tables whose purpose is the display of information rather than the storage of information.
It is general across table types. It is not specific to regulatory submission tables, though it does cover the use cases for regulatory submission tables, as we'll see. And it has a modern, expressive API that uses the pipe, which people who use R like. As a quick preview of the why: tables are a cornerstone of the larger work needed to enable clinical trial work in R. Tables are not sufficient — if you can only make tables, you're not going to be able to do your clinical trial completely in R — but they are necessary, in that you can't file a clinical trial without these reporting tables. So if you're going to do that in R, you need a way in R to make those tables, and rtables is a foundational framework for doing that; we'll see what that means in a second. Quickly, by the numbers: at Roche, which is who's funding and supporting this work, there are currently around 200 production table-variant templates that use rtables, in addition to things built on top of rtables, in order to make the tables that the SPAs ultimately need to deliver to the trial teams. These span a wide variety of table types. You've got the standard ones that are going to be in every single clinical trial as far as I know: adverse events tables, demographics tables, time-to-event tables, and so on, as well as some that are more specific but still general enough to be standard, such as lab test result tables and ECG-related tables. So rtables has been very successful inside Roche, and it is actively being used in multiple ongoing Roche trials.
In those trials, it will be used to generate the tables that are in the filing when filing happens. In addition, starting sometime in 2023, it is planned that all new Roche studies will use R and rtables for their analytics work. Existing studies may continue to use the legacy SAS-based systems, but any newly started trial after that date will use tooling built on top of rtables. Even trials that have not switched over and are still using the legacy systems often use things built on top of rtables for exploratory work — which I'm not going to talk much about, but that is another major use of these reporting tables, and it's in use in an even wider set of ongoing Roche trials. In terms of availability, rtables is completely open source with a commercially permissive license, so there's no major license barrier to using it for commercial work. It is available on CRAN. It's also developed in public: all active development of the package happens in a public-facing GitHub repository under the Roche organization, so you can watch development happen and file issues as a non-Roche employee. It is currently funded by, and copyright, Roche, simply because they're the ones who have contracted with me to build it as of right now. Now we're going to pivot a little. We've talked about what rtables is and why it was made; next we'll go through some fairly substantial examples of what rtables can do and how it can be useful to the people on your teams who will ultimately need to make these tables. rtables, as I alluded to before, is a general framework; it is not specific to clinical trial tables.
(Apologies — my internet cut out for a second. Restarting on this slide.) rtables is a general framework, but its design is completely informed by the needs of pharma and of generating these regulatory submission reporting tables. As we'll see in the tables I'm about to show you, it has many of the features that are ultimately useful for that. Here is an adverse event table, which should be relatively familiar to most people in this audience working in pharma. There's a lot going on in this table: a complex structure in both column space and row space. In column space, we're broken up by arms, and then we have an all-patients section in addition to each arm — these are overlapping groups, which I'll talk about a little later. Underneath, there is further splitting; this is completely fake data, but it's a fake biomarker, so you have a low-biomarker column and a high-biomarker column. In row space, we split by system organ class and, underneath that, by what I believe is called preferred term, in addition to some overall summarization at the top. We're going to talk about how you can build this honestly pretty complex table with rtables. The way rtables works is that you build up the structure of the table that you want, and then tell it what to do inside the cells that are defined by that structure. We're going to start simple: a tiny table with a single column containing all the observations, and just the really basic summary that we saw at the top of our larger table.
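That starting point can be sketched roughly as follows. This is a hedged sketch, not the slide's actual code: it uses the `ex_adae` example data bundled with rtables, and the variable name and row label are my assumptions.

```r
library(rtables)

# One column containing all observations, with a single basic summary row.
# ex_adae is simulated AE data shipped with rtables; AETOXGR is toxicity grade.
lyt <- basic_table() %>%
  analyze("AETOXGR", afun = function(x) {
    in_rows("Total number of events" = rcell(length(x), format = "xx"))
  })

tbl <- build_table(lyt, ex_adae)
tbl
```

The layout (`lyt`) is built first and only applied to data by `build_table()`, which is the pattern the rest of the talk elaborates on.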
Once we have that, we can say: okay, now I want to split by columns — I actually want a column for each of the arms that I have. All we need to do is split the columns by arm, and it knows: you have two columns, and everything else you've told me will happen in each of those two columns. So now we see the patient counts and event counts for each of those columns, and we didn't have to do anything extra to get that. Then we're going to change that analyze to a summarize — and I'll talk about why in a second — so that the patient count and event count becomes a group summary; we'll see what that means in a little bit. And then we analyze toxicity grade. Here we're counting any grade, and then grades one and two. I've restricted to grades one and two just for real-estate reasons; normally I believe it goes up to five, but two is enough to show what's going on. Next we say: okay, we've got that, but I'd like row sections for each system organ class, or body system. So now we have nervous system disorders and vascular disorders, and underneath each of those we have that same analysis of toxicity grade. Again, when we add this splitting, we don't have to do anything extra to get those analyses to happen within each of those sections. That's really the core design. You can think about it in terms of facets, as if you're in a multifaceted plot.
You've got these facets, and in each facet you have something you want done; rtables takes care of all the data munging and grouping and subsetting needed to do that automatically. Next we're going to further split underneath these system organ classes to the preferred term. Again, this is a subset of the data: under nervous system I just have headache, and I have two terms under vascular disorders. Normally these tables would be much bigger, and each system organ class would have a large number of terms underneath it, but we're keeping it small for space. Again, we just had to add this additional splitting, and nothing else changed — all of the other code is identical to what it was on the previous slide. The only thing we needed to do to get this deeper hierarchical structure was add that one little bit of splitting. Then we can summarize each of the larger groupings, the system organ class groupings. So now we have counts for the system organ class groupings in addition to sub-counts for each of the preferred terms. And now we say: that's all great, but we want an additional column. So we go back up and say that the column splitting should include a split that is all patients. This is a special case of a much more general thing that rtables supports: completely arbitrary overlapping groupings whenever you do a split.
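The build-up described above can be sketched like this (again a hedged sketch against the bundled `ex_adae` data; variable names are assumptions about the slide's code):

```r
library(rtables)

lyt <- basic_table() %>%
  split_cols_by("ARM") %>%        # one column per arm
  split_rows_by("AEBODSYS") %>%   # row section per system organ class
  summarize_row_groups() %>%      # group-summary counts for each section
  split_rows_by("AEDECOD") %>%    # nested split by preferred term
  analyze("AETOXGR")              # the analysis runs within every facet

tbl <- build_table(lyt, ex_adae)
```

Note that the `analyze()` call never changes as splits are added; each new split just defines more facets for the same analysis to run in.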
A lot of times in R packages, or table packages, or table software generally, you'll see the all or total column treated as a special case, but here it's really just another overlapping group, and rtables doesn't care what the overlap of the group is. If we had three arms in our study — which we don't, again for space — we could have arm A, arm B, arm C, a combined arm-A-and-B grouping, and an all-patients grouping. We could have groupings that overlap but aren't the entire set, and everything else would work exactly as it does here. It's just that, for convenience, there is a special-case function that adds an overall all-patients column. So again, we're not even adding a new split; we're controlling the behavior of the split by using what's called a split function, which is completely general — it can define any arbitrary splitting of the data coming into it. Then, of course, for actual regulatory filings it's crucial to have titles and footnotes. Here we have titles and footnotes with a made-up study name that is completely not real, and at the bottom there's a little bit of provenance information. You can put whatever you want down there; this example shows the file the data came from, the snapshot date of the data, and the user that generated the table. Another thing we have is referential footnotes. You can see one here on headache — there's a little superscript one — and at the bottom we have the note. I made all of this up, but we're saying these are non-migraine headaches, and perhaps migraine is a different preferred term.
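The convenience function for the all-patients case is, as I understand it, `add_overall_col()`; a minimal sketch using the bundled `ex_adsl` subject-level data:

```r
library(rtables)

# Append an overlapping "All Patients" column alongside the per-arm columns.
lyt <- basic_table() %>%
  split_cols_by("ARM") %>%
  add_overall_col("All Patients") %>%
  analyze("AGE")

build_table(lyt, ex_adsl)
```

For fully arbitrary overlapping groupings, a custom split function can be passed to `split_cols_by()` instead; rtables provides helpers such as `add_combo_levels()` for building those.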
I don't know if that's true, because it's not my area of expertise, but it illustrates the point that you can have these referential footnotes — and you can have them anywhere: on rows, on columns, and on individual cells. I only show this one example for time and space, but you can put referential footnotes in any of those places. Another thing to note while we're on this slide: if you look a little closer at the code, you can see that we're addressing into the existing table to tell it where we'd like the footnote to go. We have a really robust indexing system based on what we call pathing, and the address of the row we're adding our footnote to is semantically meaningful — you can read it and know exactly what's going on. The first step goes into body system; which body system? Nervous system. Then underneath nervous system you go into the further split by preferred term; which preferred term? Headache. That addresses the row, and the footnote lands there. So that's referential footnotes. Then — I'm adding this at the end only because the layout got too wide before — to get the actual table we saw at the beginning, we need a further split in column space for the biomarker-low and biomarker-high columns. Following the same theme, that's just the addition of another column split; everything else is identical, and now we have the full table at the bottom. It looks like I'm on top of it a little, but if you look carefully you can see the footnotes still worked, because the footnote's address was in row space.
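Path-based referential footnotes can be sketched like this. Rather than hard-coding the path values from the slide (which I don't have), this hedged sketch uses `row_paths()` to look up a valid path in the built table and attaches a footnote there:

```r
library(rtables)

lyt <- basic_table() %>%
  split_cols_by("ARM") %>%
  split_rows_by("AEBODSYS") %>%
  split_rows_by("AEDECOD") %>%
  analyze("AETOXGR")

tbl <- build_table(lyt, ex_adae)

# Print the semantically meaningful row paths, then attach a referential
# footnote to one row by its path (here simply the first available path).
row_paths_summary(tbl)
fnotes_at_path(tbl, rowpath = row_paths(tbl)[[1]]) <-
  "Illustrative footnote text."
```

In real code the `rowpath` would be the readable split/value pairs the talk describes, e.g. body system, then its value, then preferred term, then its value.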
Changing the column structure didn't change that at all. So that's how we get that table. Next we'll go through a few other core features of rtables that perhaps differentiate it. First, something that may not have been clear: the design of rtables is such that layout code — which is what these are; these are layouts — is separate from the `build_table` call at the very bottom, which actually applies a layout to data. All of these layouts are completely data-agnostic. They don't have anything to do with actual data yet; they only concern data structure, in terms of which variables are there. What you can see in this slide is that layout code is naturally parameterized between table structure and business logic — plus a third thing, grayed out in both places, which is the visual fiddling to make it look exactly how you want. The core business logic of what shows up in the cells is all on the right, and is largely independent of what's on the left; the left is simply the structure you want those computations to happen within — your row splits, your column splits, and so on. That ends up being really nice: you can swap out business logic easily without touching anything else. And you can also see that if you're building something on top of rtables, it would be easy to parameterize only the things on the right, so you could swap out business logic while keeping a core standard structure that was always going to be the structure of your table.
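The layout/data separation can be seen by applying one layout to different data frames; a sketch against the bundled `ex_adsl` data:

```r
library(rtables)

# The layout references variable names only; no data is involved yet.
lyt <- basic_table() %>%
  split_cols_by("ARM") %>%                      # structure
  analyze("AGE", afun = function(x) {           # business logic
    in_rows("Mean (SD)" = rcell(c(mean(x), sd(x)), format = "xx.x (xx.x)"))
  })

# The same layout applied to two different datasets.
tbl_all    <- build_table(lyt, ex_adsl)
tbl_female <- build_table(lyt, subset(ex_adsl, SEX == "F"))
```

Swapping the `afun` changes the cell contents without touching the structural calls, which is the parameterization the slide illustrates.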
That's a nice property, I think, in terms of letting you think about different aspects of the table at different parts of the process of building the code. Next: you can operate on these table objects once they're created — they don't dump straight to file. Here we have an example with a table where, if you look really carefully (I apologize, it's probably not the most clear), you can see that in one place we have one digit after the decimal in the percent, and in another we have two. All we're doing is saying: take the table you already have and, at this particular path — again using the paths that semantically index into these objects in a meaningful way — change the format. You'll also note that I'm changing the format to something with more precision. I'm not just truncating; we're actually gaining precision, which means the underlying table object holds the full raw values and can very happily render them differently when you print them, dump them to file, or what have you. And then we have a third case with less precision, so you can go both ways. That's another thing that is only possible because of the rich object model underlying how rtables behaves. Another really important thing, I'm told, is pagination. Pagination can seem like a simple problem to those of us who haven't done it before, but paginating a table like this is actually more complicated, because there are certain contextual rows that need to be repeated. So here we're paginating our table.
I've chosen a slightly narrower table, again for space, so we're not doing the biomarker splitting in the columns. We said: paginate my table with a maximum of 35 lines per page — that's what lpp stands for — and that paginates the table into two sections. There are a couple of key things to note. Looking at the second page, the one on the right, you can see that the title and the summary rows are repeated: the total overall summary appears on each page. I didn't have to tell it to do that. It knows to do that because of what we did way back when we changed that analysis to a group summary — the thing I said we'd come back to. Group summaries are contextual information, so when you paginate within a group that has a summary, each part of the group gets that summary; it's repeated, and that all happens automatically. At the top we have these repeated rows, and because the actual page break with this lines-per-page setting falls within vascular disorders, between hypotension and orthostatic hypotension, the contextual vascular-disorders overall summary count is also repeated automatically. That is a much more robust — and, I'm told, useful — version of pagination than simply counting lines and truncating. Another thing to note: while the titles and footers are repeated on each page, the referential footnotes, which are a little bit hidden under my video, appear only on the page they're relevant to. The referential footnote is not repeated on the second page, because the row it refers to is only on page one.
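The pagination call described here is, in a hedged sketch (bundled `ex_adae` data, made-up title):

```r
library(rtables)

lyt <- basic_table(title = "Made-up AE table") %>%
  split_cols_by("ARM") %>%
  split_rows_by("AEBODSYS") %>%
  summarize_row_groups() %>%   # contextual rows: repeated on every page
  split_rows_by("AEDECOD") %>%
  analyze("AETOXGR")

tbl <- build_table(lyt, ex_adae)

# lpp = maximum lines per page. The result is a list of page sub-tables,
# each repeating the title and whatever group-summary rows it needs.
pages <- paginate_table(tbl, lpp = 35)
length(pages)
```

The group-summary repetition is automatic: it follows from having used `summarize_row_groups()` in the layout, not from any pagination option.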
If it referred to a summary row that gets repeated, the footnote would be repeated too. All of this happens automatically because rtables understands its table structure in a way that you can't when you just have rendered text. (I really apologize to everyone for the connectivity issues.) So it can do all of these things automatically when you paginate. Next we have another kind of pagination, and this one is declared during layouting. Ordinary pagination is simply a function of the count of rows, plus some fancy handling to make sure contextual information is repeated. This is something different: we're saying, split the table by biomarker, and split the pages accordingly. So biomarker-low here is a completely different page from biomarker-high. This is a slightly different table, again for space, but to get these separate full-page tables, all we had to do was split by this variable and set page_by to true. That ultimately gives you what I've seen a lot in lab tables, where you might have multiple lab values being read and you essentially want a table for each of them. You can split on a variable that says what type of lab readout it is — what thing you measured — and you get this nice pagination behavior while everything else stays exactly the same. I think that's pretty nice. So that is the first section, where I showed you what rtables can do — what we've been doing in building rtables.
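The forced-page-break variant might look like the following sketch. This is hedged: it uses the bundled `ex_adsl` data and a SEX split purely for illustration, and the exact argument requirements for `page_by` may vary by rtables version.

```r
library(rtables)

# page_by = TRUE forces a page break per facet value of that split,
# so each SEX level here becomes its own full page when paginated.
lyt <- basic_table() %>%
  split_cols_by("ARM") %>%
  split_rows_by("SEX", page_by = TRUE, page_prefix = "Sex") %>%
  analyze("AGE")

tbl <- build_table(lyt, ex_adsl)
pages <- paginate_table(tbl)
```

In the lab-table use case the talk describes, the `page_by` split would be on the lab-parameter variable, yielding one page per measured parameter.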
Now we're going to talk about why we would do that in the first place. We need to step back from individual tables and talk about the effort involved in generating these types of tables in the context of the work pharma companies are doing. There's the famous quote, of course: a rising tide lifts all boats. That's true if you do it right, and what I'm going to argue is that rtables is the type of effort that raises the tide that does carry all the boats with it — and that is, in fact, the primary reason it was ultimately a good investment from a management point of view. There are multiple types of effort involved in creating a table like this. You've got the frontline work: the actual construction or instantiation of an individual table. This is the invaluable work done by SPAs who work with individual clinical trial teams and need to generate the tables for those teams. Then you have what I'm going to call SPA-enabling development. Internally, these people are called SMEs — subject matter experts — and I'll use that terminology sometimes, but every time I say SME, what I'm actually talking about is SPA-enabling development; I'll say more about exactly what that means. And then you have core general tooling development, which in this particular example is ultimately rtables. The SPAs are responsible for the ultimate creation of these tables. They use templates and other tools and implementations of business logic provided to them by the SPA-enabling developers, the SMEs. And they're also completely responsible for ad hoc tables that don't adhere to the standard templates provided to them.
In a mail-delivery analogy, the SPAs are the last-mile delivery: they actually get the table to the door of the people who need it. Next we have the SMEs, or SPA-enabling tool developers. They develop reusable templates and reusable functions that implement business logic contributing to multiple tables. These are, almost by definition, for standard tables that many SPAs might need to make for many different clinical trial teams, so you get a concentration of effort into reusable templates and reusable tools. And then you have the core table framework, which ultimately provides building blocks and tools for the SPA-enabling development team to use in creating those templates and those reusable pieces of business logic. It's important — and I'll mention this a few times — that the core table framework is not targeted at any particular table endpoint, either in terms of actual instantiation of tables or in terms of the endpoint of any given table template. It is the tool intended to allow the creation of all of the table templates. Now I'm going to talk about three different scenarios your organization might be in at different points in time. The first is that you have largely unsupported SPAs: the people who create the actual tables and hand them to clinical trial teams do most of the work for any individual table, with only a small amount of business logic from the SMEs, the SPA-enabling developers, feeding into that. And in that case you don't really even have a general framework.
The general framework is just R and, you know, dplyr, which are very powerful but not intended specifically for tables. Next, you might upgrade to a robust SPA-enabling development effort, where there is still not much going on in the general framework, but much more effort goes into standard, reusable templates and reusable pieces of business logic. What that ultimately means is that, standing on those shoulders, the effort required from SPAs for any individual table is reduced, because they have these templates and reusable pieces available. The effort is never going to be zero, but they can be much more efficient in creating any individual table. And finally, there is a third situation, which I'm going to argue is where the rtables-based efforts at Roche have landed, where you actually have a robust general framework, which does for the SPA-enabling developers what the SPA-enabling developers are doing for the SPAs. Now, that doesn't seem to affect the SPAs much yet, although we'll see in a moment that it will affect them quite a bit; in this picture it isn't changing anything for the SPAs. What it is doing is changing the picture for the SMEs, so that they can more efficiently and more effectively generate those templates and those pieces of business logic. And, as I said, the SPAs are where the majority of the effort sits, in terms of number of people, person-hours, all of that.
But think back to something I mentioned earlier: SPAs are also responsible for ad hoc tables, and ad hoc tables do not get built on top of the standard templates, because the standard templates are only for standard tables. So when ad hoc tables are in the mix, let's run through each of these three scenarios; I'll go a little quickly because I can see the time. In the first scenario, the SPAs have to do all of the work that the templates would otherwise have done for them, in addition to any work on top of that. The effort the SPAs put in starts where the reusable template pieces would have started, and they have to go all the way up. That's not really surprising, because this is the unsupported-SPAs case. But when you move to the robust SPA-enabling effort, the story doesn't change at all for ad hoc tables, because ad hoc tables, being ad hoc, don't use the templates and reusable pieces the SPA-enabling developers were creating. When you have a general framework, though, that framework can also be used for ad hoc tables, and suddenly the picture changes quite a bit. Now the frontline SPAs do benefit directly from general framework efforts like rtables, because the framework allows the construction of ad hoc tables for roughly the same amount of effort as the table templates, thanks to the layout engine that is used to create those templates in the first place.
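To make the layout-engine idea concrete, here is a minimal sketch in rtables. The `demog_template()` function name and the choice of variables are hypothetical illustrations, not part of any real template library; `DM` is an example CDISC-style dataset shipped with rtables. The point is that a "template" is just a pre-declared layout, and an ad hoc table is built with exactly the same layouting verbs:

```r
library(rtables)
library(magrittr)

# A "template" is just a function returning a pre-declared layout;
# SPA-enabling developers would publish these for standard tables.
# (demog_template is a hypothetical name used for illustration.)
demog_template <- function() {
  basic_table() %>%
    split_cols_by("ARM") %>%
    analyze(c("AGE", "SEX"))
}

# Standard table: instantiate the template against study data.
std_tbl <- build_table(demog_template(), DM)

# Ad hoc table: no template exists, but the same layout engine is
# used, so the effort profile is similar to working from a template.
adhoc_tbl <- basic_table() %>%
  split_cols_by("ARM") %>%
  split_rows_by("RACE") %>%
  analyze("AGE", afun = function(x) {
    in_rows("Mean (SD)" = rcell(c(mean(x), sd(x)), format = "xx.x (xx.x)"))
  }) %>%
  build_table(DM)
```

The design choice this illustrates is the one Gabe describes: because templates and ad hoc tables share one layout vocabulary, learning the framework once covers both cases.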
And from the SPA perspective, and the management perspective, that's where the really big win comes in, because there will always be ad hoc tables, and you don't want ad hoc tables to be so painful, and so divorced from the way you make standard tables, that SPAs are sometimes simply blocked waiting for a new template to be created. Here, if they learn the framework, they don't have to wait. For standard tables you still want them supported, you want them to have the templates, but when those templates aren't there, they have a way forward that actually gets them what they need. So that's the end of round two: that's the why, why the rtables framework was a good investment, not from a research perspective, but from the perspective of actually being able to generate tables. Now, because I'm looking at the time, I'm going to sum up briefly and talk about the how: how we were successful in doing this. There's a famous quote attributed to Henry Ford, though there's no evidence he actually said it: if I had asked people what they wanted, they would have said faster horses. They didn't have the concept of cars, so they didn't know they wanted a car. rtables is not faster horses; it is not an incremental improvement on the way you made tables in R before rtables existed. The features of rtables have been added incrementally, but what rtables represents is a paradigm shift in how you can make tables. rtables is also the result of novel statistical computing research, or innovation if you prefer that term, but it is a major, transformative innovation in how tables are being made.
So how do you get that? There are essentially three pillars needed to hold up this type of specific, within-a-production-program research and innovation. First, you need supportive management, at both the high level and the lower project level. Second, you need support from stakeholders, which is crucially important. And third, you need the capacity to actually do the research. In terms of management support for the rtables project, we had upper management support because upper management at Roche, in PD, supports the product owners and tech leads and trusts them, within their projects, to do the kinds of innovation that will actually help them meet the goals of the project. And NEST, the R infrastructure effort within PD, is the project within which rtables was happening. At the time, the tech lead was Adrian Waddell and the product owner was Tad Lewandowski, and Adrian saw the importance of tables both as a need, something he had to have a solution for in order to do clinical work with R, and as an opportunity. They devoted NEST effort to innovating in the table space, which resulted in essentially a narrow applied research program, rtables, within the larger NEST product. rtables is a cornerstone piece of a much larger and much more powerful project, NEST, which also incorporates exploratory visualizations, the SPA-enabling development efforts, and so on. Tables were recognized as a place that was ripe for innovation, and where innovation would actually benefit the product as a whole.
NEST leadership is now being continued by Pawel Rucki. Next, you have the stakeholders, which is the SME team, remember, the SPA-enabling developers. They were responsible for template creation, and in 2020, the first major full year of development with the new version of rtables, they went from zero table templates that could be made in R up to 200. They communicated to the rtables team what they needed, but they were flexible on how it worked. This is the "I need to get where I'm going faster," not "I need my horse to run faster." That is the core difference we're talking about here. They really bought into that, and they were willing to invest in learning how to use this completely different tool we were developing, which would ultimately, and I'm confident they would agree at this point, make their lives easier once they had learned it. It also resulted in truly invaluable feedback on the design, the API, and what capabilities were required. That feedback is really, really key: we were meeting, and still are meeting, with the SME team every single week to talk about what features they needed and what was working well in the development versions they have access to, which everyone has access to, because again it's public. And finally, there's the actual rtables team, which was doing the research. Again, the rtables team is not responsible for creating any individual table; that is not the job of the rtables developers, which frees us to think about what tables are as a whole, and is what allowed us to get to this layouting engine.
On the flip side of what I said about the SMEs: we asked the SMEs what they needed to be able to do, not what their thoughts were on exactly how the internals should work. And we had direct, frequent collaboration with the SME team for a tight feedback loop. If any one of those three pillars hadn't been there, what I showed you in the first part of this talk would not ultimately have resulted. So, in the final couple of minutes, the next steps for rtables. We're collaborating with RStudio on a package called tgen, which will essentially take an rtables object and be able to render it to many different output formats, including RTF. rtables already has ASCII and HTML output; it does not have RTF, so tgen will add RTF alongside HTML and the rest. It will also support a lot of visual formatting of tables, such as coloring of cells and bolding of text, which rtables currently doesn't model. And rtables itself is continuing to be developed. We have a large roadmap, but one of the major items is QC-targeted features for comparison of tables: quality checking, ensuring tables are correct, and things like that. And with 50 seconds left on the hour, that's what I had for you. Again, I apologize for the connectivity issues during the presentation. So, are we doing general Q&A and then breakouts, or straight to breakouts? We'll do general Q&A very briefly; I'll try to limit it so we can get to the breakouts. Looking at what's come in, there's not a huge amount. Gabe, I don't know if you can see the Q&A directly to see these questions.
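On the QC direction specifically, rtables already ships a basic comparison helper, `compare_rtables()`, which the roadmap work can be read as extending. A minimal sketch of double-programming-style checking, assuming only the rtables package and its bundled example dataset `DM`:

```r
library(rtables)
library(magrittr)

lyt <- basic_table() %>%
  split_cols_by("ARM") %>%
  analyze("AGE")

# In real double-programming QC, two programmers would build the
# tables independently; here we simply build the same layout twice.
tbl_prod <- build_table(lyt, DM)
tbl_qc   <- build_table(lyt, DM)

# compare_rtables() returns a per-cell matrix of comparison codes,
# with "." marking cells that match between the two tables.
comp <- compare_rtables(object = tbl_prod, expected = tbl_qc)
all(comp == ".")
```

This is cell-value comparison only; the QC features Gabe mentions on the roadmap go beyond what this helper currently covers.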
I wasn't looking at it while I was presenting, but no worries. So we've got a question from Andy Nunez: Adrian Waddell mentioned flextable coercion in the chat. Can you say more about using flextable? Yes. We have a currently working precursor to what will ultimately be rolled into tgen functionality: we have support for exporting to flextable, which then gets you all the formats that flextable can output to. That is supported, that is in right now. Once you've converted to flextable, you can do some of this formatting: you can color cells and do other things flextable supports. You no longer have a lot of the things I showed you in terms of pathing and changing formats, because it's a rendering; you're rendering to a flextable, so it's post-rendering. But you do have the ability to further alter the rendering by coloring and so on, and that also gets you into PowerPoint slides via officer, which integrates with flextable. I want to say flextable also supports RTF, but I can't completely swear to that. So there are a number of things you can do. And we also have PDF: essentially, we can export PDF versions of the ASCII tables, with pagination and things like that. So we have a number of different outputs, and again, that's all going to be subsumed into a much larger, more feature-rich package, tgen, which we're actively collaborating with RStudio on. Brilliant. Well, thank you, Gabe. The Roche team have done a great job during the session of answering the quick questions and comments. There were a lot of questions, but they have been answered; thanks, everyone, for putting those in. What are we going to do now?
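For reference, the flextable path described in this answer looks roughly like the following today, via rtables' `tt_to_flextable()`. The styling call is standard flextable API, and the officer steps are sketched as comments since they need a PowerPoint target file; `DM` is an example dataset shipped with rtables:

```r
library(rtables)
library(magrittr)
library(flextable)

tbl <- basic_table() %>%
  split_cols_by("ARM") %>%
  analyze("AGE") %>%
  build_table(DM)

# Render the rtables object to a flextable. This is a one-way
# rendering step: rtables-side operations such as pathing and
# reformatting no longer apply afterwards.
ft <- tt_to_flextable(tbl)

# Post-rendering styling via the flextable API, e.g. cell shading.
ft <- bg(ft, i = 1, bg = "lightyellow", part = "body")

# From here, officer gets the styled table into PowerPoint:
# library(officer)
# ppt <- read_pptx()
# ppt <- add_slide(ppt)
# ppt <- ph_with(ppt, value = ft, location = ph_location_fullsize())
# print(ppt, target = "table.pptx")
```

As noted in the answer, this is a precursor: once rendered to flextable, you work in flextable's model of the table, not rtables'.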
Well, first of all, thank you, Gabe, for the presentation. What we're going to do now is switch to the discussion rooms, so hopefully everyone has the time for that; you've got another 20 to 25 minutes or so. As I mentioned, I got the title of the session slightly wrong at the start, but Gabe will run one room and I will run the other. There should be a banner or something appearing at some point for you to click on, or you can go to Sessions, where you'll see the two rooms, and you can join from there. So we'll end this main-stage piece now, but if you head over to Sessions, you can click on whichever room you want to join. We'll see you all in a couple of minutes. Thanks, everyone.