As the title says, I'm here to talk about how open source solutions based on R and Shiny are transforming workflows and processes in the industry. And I'll start with a question: did you know that open source software represents two thirds of commercial codebases, and only around 4% of codebases don't contain any open source components at all? That's one in every 25 applications. This report was released in 2023, and it's quite impressive. Open source is clearly the future of software and business, and I don't think I need to preach that to this audience. The future is not only open source; there's no future without it. During the early days of Appsilon, we were inspired by the community and by role models who had created so much for all of us. The initial team wasn't sure about the interest that the tools we were creating would have, but they turned out to be very helpful to others, and our work became increasingly popular. Actually, just yesterday, Eric showcased how to build an application in Rhino, one of our open source packages. As a result, our team got bigger, with me and other great professionals joining Appsilon, and since then we've been able to contribute much more. I'm very happy that this happened and that we are part of an amazing community of people and organizations that is more diverse than a typical technological community. It includes people from various areas of business and science, different domains, various backgrounds and specialties. At Appsilon we gain quite a unique perspective into how all sectors apply open source, because they all do, but we also see how far along they are in their journey. Pharma is one of the major industries that benefits greatly from the technology that we all love. It's also an industry that is naturally important to all of us, as it plays a crucial role in developing treatments and promoting public health. 
Ironically, pharma is also an example of an industry where legacy closed technologies such as SAS, or even spreadsheets, are still heavily used. In this presentation, I'm going to go over some of our projects with pharma clients, and I hope that these case studies will help you better understand how Shiny is being used in the industry. My goal is to spark new ideas on how to use it and how to integrate it with other applications and technologies. In this first case study, we are looking at the sales department of a pharma company and the role that quality information has in connecting sales representatives in the field with the data science teams back at headquarters. As many stories start, this one begins with an Excel spreadsheet that was being exchanged by email between headquarters and the sales representatives. This flow of information was very one-sided, with no existing method to communicate structured feedback on the data. Everything was done via email, as it normally is. It was inefficient, and it was costing valuable time for the sales reps and some lost opportunities for the company. So we worked together with our partner to deliver this information in a way that is optimal for everyone involved, from the sales reps to the data science team and management. The solution we found was designed and implemented around three important requirements. First, to create the right method of delivery for the information that the sales reps need on the ground. This means that sales reps would work with up-to-date information on clinical practices, with previous visits, all the comments and history, and a bunch of new data that would improve the quality of the visits. It also allows them to write feedback to headquarters so that they can refine the data and provide ever-increasing quality of information. Secondly, to assist the sales reps in planning their routes and schedules through a beautiful interface that is designed for the users. 
This integrates geolocation data with cool visualizations that are location-aware. Compared with the previous method of working with addresses and just copying and pasting between different applications, it improved the quality of life of the people on the ground. The information would also need to be accessible both online and offline, keeping one of the important features of the previous Excel spreadsheets. Here Shiny enables us to deliver value extremely quickly. We can implement users' ideas right away by connecting the data to the interface. That initial prototype is progressively refined into a production-ready solution with valuable input from the users. One of the core strengths of Shiny is the rich, interactive visualizations that can rapidly be created so we can explore the data. It also allows integration with other tools by using packages such as Plumber, which exposes the data. This way, the offline application that was created as an iPad app had the same data sources as the Shiny application, with regular snapshots being delivered to it. This project shows a possible synergy of using multiple open source tools to implement the best solution, and this is a common trend that we'll see throughout the slides. In the near future, maybe we can actually use WebAssembly to package and deliver offline Shiny dashboards, instead of having to rely on other applications or frameworks. If we look at the business impact of this case study, the main result was the quality-of-life improvement for the sales representatives that positively impacted their work. During the trial period and after implementation, it allowed them to engage in more calls, more meetings and leads with healthcare providers. By being able to plan according to the location's surroundings, they could rapidly change their plans in case of an inaccessible visit, for instance by balancing a location's value with its proximity to other sites. 
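To make the Plumber idea concrete, here is a minimal sketch of how a Shiny application's data could be exposed through an API for another client, such as an offline iPad app, to pull snapshots from. The endpoint name and the data file are illustrative assumptions, not the actual project's code.

```r
# plumber.R -- a minimal sketch of exposing shared data via an API.
# The /visits endpoint and visits_snapshot.csv are hypothetical names.
library(plumber)

#* Return the latest snapshot of visit data as JSON
#* @get /visits
function() {
  # In the real project this would query the shared data source;
  # here we just read an illustrative local snapshot file.
  read.csv("visits_snapshot.csv")
}
```

The API could then be launched from another script with something like `plumber::pr("plumber.R") %>% plumber::pr_run(port = 8000)`, and any client that speaks HTTP, Shiny or otherwise, can consume the same data.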
Above all, it created an effective method of delivery for the information that was gathered by the data science team, with direct feedback and metrics from the sales representatives. It better reported the calls and visits, but also carried feedback from the sales team to the data science folks. And the discovery, development and testing process helped bring these two teams together, with a better understanding of one another. As we know, in pharma confidentiality is very important, so we are not showing the application here, but I would like to show another app, an example that we built for Johns Hopkins, that shows the capabilities of geolocation visualization in Shiny. These visualizations are powered by a bunch of open source libraries, some of which we created or contributed to. This rich ecosystem of packages is based not only on R, but also on JavaScript, as everything that can be done in JavaScript can be used in Shiny. This dashboard in particular uses a library called Leaflet that provides these powerful visualizations. Using Shiny, we can create all of these elements that communicate between themselves, and they don't require a very big technical overhead to handle these shared states and integrate them all together. Moving to the next study, we are looking at a partner that had to create two reports for regulatory agencies, and those needed to be automated. It was a very time-consuming process, as we will see. It shows how we can use Shiny to transform a manual workflow and create a big ROI for the organization. The story of this case is also a familiar one that starts with an email with the initial draft document, which then needs to be filled in by people in multiple departments in the organization. This email thread bounces back and forth between all the stakeholders, each needing to produce data from their own sources. 
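The Leaflet-in-Shiny combination mentioned above can be sketched in a few lines. This is not the Johns Hopkins dashboard itself, just a minimal self-contained example with a made-up marker to show how little overhead the integration needs.

```r
# A minimal Shiny + Leaflet sketch; the marker coordinates are illustrative.
library(shiny)
library(leaflet)

ui <- fluidPage(
  leafletOutput("map")
)

server <- function(input, output, session) {
  output$map <- renderLeaflet({
    leaflet() %>%
      addTiles() %>%                       # OpenStreetMap base layer
      addMarkers(lng = -76.62, lat = 39.33,
                 popup = "Example site")   # one illustrative marker
  })
}

shinyApp(ui, server)
```

Reactive inputs (filters, selectors) can feed straight into `renderLeaflet`, which is how the shared state between elements stays cheap to manage.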
After a manual, intensive and time-consuming process, it ends up as a document that still needs to be reviewed further, with more iterations to refine it and remove all of the errors, making sure that the final report is GxP compliant. I think we can all relate to an email thread that has multiple attachments with names such as Final2, Final2.1 and so on. For these two reports it was a recurring event with slightly different timelines, but with identical problems on multiple sites within the jurisdiction. This actually cost a lot of personnel time that could bring value in other ways, and in more interesting ways for their own personal satisfaction. It could also cause financial losses, with delays in identifying issues in the inventory. So there were a lot of cost-saving opportunities here. The requirements in this case are very different from the sales case study. Here we are dealing with data that is sourced from multiple different departments, and with automated reporting. It's not really a flashy topic, I know, but vital in the industry. The challenge here was to integrate different data sources and validate them to produce reports that follow a flexible template that could then be modified from version to version. The whole pipeline needed to be well tested and documented so that the outcome is a reproducible report. Here again, the data visualization capabilities of Shiny are very well aligned with this project. We have a web application that previews the elements of the report and allows users to adapt it according to their needs. We also rendered those same exact visualizations in PDF or Word documents via mature open source R packages that provide a template framework to achieve this. Additionally, we can use other available R packages to retrieve all of those data sources, such as databases, files, and even computer vision and NLP modules. 
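One mature way to render the same content into Word or PDF, in line with the template framework described above, is a parameterized R Markdown document. The template name, parameters and output file below are illustrative assumptions, not the client's actual report.

```r
# Rendering a report from a parameterized R Markdown template.
# "report.Rmd" must declare batch_id and period in its YAML params;
# all names here are hypothetical.
library(rmarkdown)

render(
  "report.Rmd",
  output_format = "word_document",                    # or "pdf_document"
  params = list(batch_id = "B-001", period = "2023-Q1"),
  output_file = "batch_report.docx"
)
```

Because the Shiny preview and the rendered document share the same R plotting code, what the user approves on screen is exactly what lands in the submitted file.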
The impact of this case study was actually very easily quantified, with those two reports being automated for the client. They were able to redirect plant management and resources that otherwise went to these reports. The batch quality report was also able to identify issues in the inventory early on and reduce some financial losses. Above all, it allowed for a faster decision-making process by having near real-time reports that could be accessed. There was no longer a delay of hours or weeks; it actually became a matter of seconds to retrieve and process all of the data, which is quite cool. In this demo, I'd like to continue to showcase the interactivity between the different components of a Shiny application, from filters to selectors and the main visualization. This time we are looking at a survival curve. Those with a keen eye may recognize that this is a modified visualization from the pharmaverse R ecosystem; we used the data exploration demo with synthetic data. For those not familiar with pharmaverse, it is an initiative from a consortium of pharma companies that is creating tools in R for clinical trial reporting. It aims to replace the whole pipeline from SAS to R, from ADaM data analysis to result visualization and electronic submission. The second demo is the data.validator package. This started in a project with a client, but gained a second life as an open source package to handle data validation. Just like the case study that had a data integration process requiring validation, this package allows for the definition of human-readable rules, so it's quite easy to integrate into existing Shiny applications. With this package, we can actually catch issues with the data as it is imported, and it can validate regular numeric and textual data, but it can also recognize context-aware data such as location. 
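To give a flavor of those human-readable rules, here is a small sketch using data.validator together with assertr predicates, to the best of my understanding of the package's API. The dataset and column names are invented for illustration.

```r
# A sketch of human-readable validation rules with data.validator.
# my_data, batch_id and qc_status are hypothetical names.
library(assertr)
library(data.validator)

my_data <- data.frame(
  batch_id  = c("B-001", "B-002", NA),
  qc_status = c("pass", "fail", "pass")
)

report <- data_validation_report()

validate(my_data, name = "Imported batch data") %>%
  validate_if(!is.na(batch_id),
              description = "Batch ID is present") %>%
  validate_cols(in_set("pass", "fail"), qc_status,
                description = "QC status is a known value") %>%
  add_results(report)

print(report)   # summarises which rules passed and which failed
```

The same report object can be rendered inside a Shiny app, which is what makes the package easy to drop into existing applications.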
The initial version of this data validation tool was created, as I said, during the client project, but we identified the potential value for the community, and with an agreement we launched it as a public R package. The third and last case study revolves around drug development and pharmacokinetics. Here the process started, again, with an Excel file that was distributed among the team members to fill in with data from animal and in vitro studies. The process is similar to the last one, with the added interesting aspect that this process drives drug development. Here, research scientists were able to perform some data exploration and modeling, with allometric scaling, human dose predictions, simulations and compound toxicity prediction. This was again a very manually intensive process that was prone to errors. Users would move cells in the spreadsheet, which changed formulas for a moment, and then undo the changes, and the file in use was no longer the original one that was created. I'd also like to point out that using Excel to generate simulations is not ideal, especially for critical tasks in drug development. In this case, the randomness of the Monte Carlo simulations was not reproducible, so every iteration was lost to the ether once a new one was generated. During the discovery process, we found that there were three key goals to achieve on this project. The first was to create a platform that could import data from different sources and integrate them; the second, to audit the formulas and replace them with robust libraries; and lastly, to create a central location where multiple users could contribute to the analysis, explore and save their progress. In practice, we wanted to transform this process into a reproducible one. This is the third time I'll be saying some of this, but this data-heavy process is a great match for R and for Shiny. 
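The reproducibility point about Monte Carlo simulations is worth making concrete. In R, fixing the random seed means any past iteration can be regenerated exactly, unlike the spreadsheet version where each run was lost. The distribution and its parameters below are illustrative, not the project's actual model.

```r
# Reproducible Monte Carlo: a fixed seed makes every run recoverable.
# The lognormal dose model and its parameters are made up for illustration.
set.seed(2023)
doses <- rlnorm(10000, meanlog = log(5), sdlog = 0.3)  # simulated doses

# Summarise the simulated distribution
quantile(doses, c(0.05, 0.5, 0.95))

# Re-running this script with the same seed yields identical numbers,
# so any simulation used in a decision can be reproduced and audited.
```

Recording the seed alongside the results is what turns a one-off simulation into a traceable, auditable artifact.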
We found that existing packages already performed the same functions, so we didn't have to reinvent the wheel on modeling and simulations, and on top of that we get a web application that is visually appealing and fast to develop, so we can bring it back to the users and iterate on that initial prototype until we get the production-ready application. This allowed us to create reports that are a direct match with what is seen in the application, that are traceable with all the data processing, and that follow legal requirements. The business impact of this project came down to a faster and more accurate decision-making process on drug dosage, as it became faster to gather all this data. It is easy to collaborate on, and it provides a smoother onboarding process for new hires. And I'm going to show an example, which is an application that was created to identify buildings impacted by a natural disaster. Every second counts in a disaster, and this application uses satellite images that were taken before and after the disaster to quantify the damage on the ground. It runs models on PyTorch on the back end, and those models and results are accessed via an API; Shiny delivers the front end. It is hard to create applications in Python that take advantage of the vast collection of libraries that it has, but Shiny for Python might change this in the future. At the time, we found that we could bridge this gap by combining the two via an API, with the UI created in React. So this shows the flexibility that we have, using the best of each platform to both achieve the goals and match the available knowledge and skills. To wrap up the presentation: there's a common thread behind all of these case studies, and it starts with automation. We saw that countless hours could be redirected to other valuable tasks for the organization, and repetitive, data-intensive work was replaced with automated workflows that validate and minimize the errors in the process. 
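The bridge between a Shiny front end and a Python model back end can be as simple as an HTTP call from the Shiny server. Here is a minimal sketch; the endpoint URL, query parameter and response field are assumptions, not the real application's API.

```r
# Sketch: a Shiny server fetching predictions from a model API.
# The /predict endpoint, "region" parameter and "count" field are hypothetical.
library(shiny)
library(httr)
library(jsonlite)

ui <- fluidPage(
  textInput("region", "Region code"),
  textOutput("damage")
)

server <- function(input, output, session) {
  output$damage <- renderText({
    req(input$region)
    resp <- GET("http://model-backend:8000/predict",
                query = list(region = input$region))
    result <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
    paste("Estimated damaged buildings:", result$count)
  })
}

shinyApp(ui, server)
```

Because the contract is just HTTP and JSON, the back end can be PyTorch today and something else tomorrow without touching the R side.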
The applications we build are centered around the users, which is critical for success, and drive those users to the new workflows; having beautiful interfaces contributes to their adoption. The vision can come from within the organization: open source technologies have a low entry barrier, and they can help deliver a prototype. Shiny in particular connects the data to the user via very powerful tools. There's one example I'd like to highlight that happened just two months ago. I was in an exploratory meeting with a potential nonprofit partner, and we were talking about what they wanted to do and what they wanted to show in the Shiny application. Then one of the executives, with expertise in agriculture, shared her screen and showed a Shiny dashboard that she, with no background in computer science or R, had created just to help explain and better show her vision. It was an amazing occasion to be part of; it was really nice to see the sparkle in the eyes of everyone there. And there's also a global community that works together to improve these tools and technologies. Every organization and every individual has different goals, but by collaborating on a shared vision, we can create flexible tools that adapt to any environment. We see specific communities that are created organically with different interests: for instance, Bioconductor focuses on rigorous and reproducible analysis of data from current and emerging biological assays, and a package from IPC actually still lives there. Open Pharma and pharmaverse focus on clinical trial reporting, with individuals and organizations committed to these efforts. And to wrap up, using open source also allows you to attract recent graduates that are being taught Python, R and other technologies, just as we saw in the keynote that just happened. 
They will look to start their careers in organizations that promote these cool technologies, and they can start to produce value without the steep learning curve of learning SAS or a different proprietary technology. I would like to thank everyone for listening, and I'm happy to answer a few questions. So, what would you like to know more about? Excellent, thank you so much, Andre. We are just about at time for the next talk. There are a couple of questions in the chat, if you wouldn't mind following up there, particularly around package validation and documentation, and also the definition of GxP compliance that you're working with. So, I'll go ahead and welcome our next speaker. Andre, thank you so much again for your talk. Hello, I'll follow up on the chat then. Perfect. Thank you.