From the CUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation.

Hello, welcome to today's session of the AWS Startup Showcase: The Next Big Thing in AI, Security, and Life Sciences, today featuring Tetra Science for the life sciences track. I'm your host, Natalie Erlich, and we are joined by our special guests, Michelle Bradbury, VP of Product at Tetra Science, and Mike Tarselli, the Chief Scientific Officer at Tetra Science. We're going to talk about the R&D data cloud movement in life sciences: unlocking experimental data to accelerate discovery. Thank you both very much for joining us today.

Thank you for having us.

Yeah, thank you. Great to be here.

Well, while traditionally slower to adopt cloud technology in R&D, global pharmas are now launching digital lab initiatives to improve time to market for therapeutics. Can you discuss some of the key challenges still facing big pharma in terms of digital transformation?

Sure, I'll start in. The big pharma organization we have today happens to work very well in its particular way: they have some architecture they've installed, usually on premises, and they are tentatively sticking a foot into the cloud, learning how to move forward in order to process and automate their data streams. However, we would argue they haven't done enough fast enough, and that they need to get there faster in order to deliver patient value and efficiencies to their businesses.

Now, Michelle, how specifically can an R&D data cloud help big pharma in this digital transformation?

The big thing that large pharmas face is a couple of different things. The ecosystem within large pharma involves a lot of diverse data types and a lot of diverse file types.
That's one thing the data cloud handles very well: being able to parse through, harmonize, and bring together your data so it can be leveraged for things like AI and machine learning at large scale. The other large challenge pharma faces is a proliferation of data, and what the cloud offers specifically is more scalable storage, a better ability to tier your storage while still keeping it searchable and maintainable, and a lot of flexibility for the pharma companies themselves.

And what about security and compliance, or even governance? What are those implications?

Sure, I'll jump into that one. Every large pharma is in a regulated industry; everyone watching this is probably aware of that. So we therefore have to abide by the same tenets that they would: 21 CFR Part 11 compliance, getting ready for GxP-ready systems, and in fact doing extra certifications around SOC 2 Type II and ISO 9001, really every single regulation that would allow our cloud solution to be quality-ready, inspectable, and performant for what needs to be done for an eventual FDA submission.

And can you also speak about some of the advances we're seeing in machine learning and artificial intelligence, how they will impact pharma, and what Tetra Science's role is in that?

Sure, I'll pass this one to Michelle first.

I can take that one.
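As an illustrative aside on the harmonization point above: pulling diverse vendor file types into one common schema can be sketched in a few lines of Python. Everything here is a hypothetical stand-in, including the file formats, field names, and functions; none of it is Tetra Science's actual format or API.

```python
import json

# Hypothetical raw exports from two different instrument vendors.
# Real vendor formats differ widely; these stand in for "diverse file types".
vendor_a_csv_row = "SampleID,RT_min,Area\nS-001,4.82,153200"
vendor_b_json = '{"sample": "S-002", "retention_time_sec": 301.5, "peak_area": 98700}'

def harmonize_vendor_a(raw: str) -> dict:
    # Vendor A exports CSV with retention time already in minutes.
    header, row = raw.splitlines()
    record = dict(zip(header.split(","), row.split(",")))
    return {
        "sample_id": record["SampleID"],
        "retention_time_min": float(record["RT_min"]),
        "peak_area": float(record["Area"]),
    }

def harmonize_vendor_b(raw: str) -> dict:
    # Vendor B exports JSON with retention time in seconds: normalize units.
    record = json.loads(raw)
    return {
        "sample_id": record["sample"],
        "retention_time_min": record["retention_time_sec"] / 60.0,
        "peak_area": float(record["peak_area"]),
    }

# Once harmonized, records from any vendor can be queried together.
records = [harmonize_vendor_a(vendor_a_csv_row), harmonize_vendor_b(vendor_b_json)]
long_running = [r["sample_id"] for r in records if r["retention_time_min"] > 5.0]
```

The point of the sketch is the last two lines: after normalization, one query spans data that arrived in two incompatible formats.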
One of the things we're seeing in terms of where AI and ML will go with large pharma is the ability not only to search and build models against the data they have access to right now, which is very limited in the way they can search it, but to go through the full historical data and to leverage massively parallel compute on top of these giant data clusters, and what that means in terms of not only faster time to market for drugs, but also, I think, more accurate and precise testing in the future. There's so much opportunity for this really data-rich industry to leverage a lot of the modern tooling that it hasn't been able to so far.

And Mike, what would you say are the benefits a fully automated lab could bring, with increased FAIRness and data liquidity?

Yeah, sure, let's go five years into the future. I am a bench chemist trying to get some results in, and it's amazing, because I can look up everything the rest of my colleagues have ever done on this particular project with a single click of a button and a simple term set in natural language. I can then find and retrieve those results and easily visualize them in our platform, or in any other platform I choose to use. I can then inspect those, interrogate those, and say, actually, I'm going to be able to set up this automation cascade; I'll probably have it ready by the afternoon. All the data returned to me through this is going to be easily integratable and harmonized, and you're going to be able to find it, obviously, and interoperate it with any system. So if I suddenly decide that I need to send a report over to another division in their preferred viz tool or data system of choice, great: I click three buttons, configure it, boom, there goes that report to them. This should be a simple vision to achieve, even faster than five years.
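The automation cascade Mike describes can be sketched as a closed design-run-analyze loop. This is a minimal, purely hypothetical sketch: the class, the function names, and the simulated yield model are all invented for illustration, not any real lab or vendor API.

```python
import random
from dataclasses import dataclass

@dataclass
class RunResult:
    parameters: dict   # the conditions this run was executed under
    yield_pct: float   # harmonized readout returned by the "instrument"

def design_next_run(history: list) -> dict:
    # Toy "design" step: nudge temperature around the best run so far.
    if not history:
        return {"temp_c": 25.0}
    best = max(history, key=lambda r: r.yield_pct)
    return {"temp_c": best.parameters["temp_c"] + random.uniform(-2, 2)}

def execute_run(params: dict) -> RunResult:
    # Stand-in for dispatching to real lab automation; here the "yield"
    # is simulated as peaking at 37 degrees C.
    simulated_yield = 100 - abs(params["temp_c"] - 37.0)
    return RunResult(params, simulated_yield)

def automation_cascade(target_yield: float, max_runs: int = 50) -> list:
    # Run the design/execute cycle repeatedly until the target is met
    # or the run budget is exhausted.
    history = []
    for _ in range(max_runs):
        result = execute_run(design_next_run(history))
        history.append(result)
        if result.yield_pct >= target_yield:
            break
    return history
```

In a real lab, `execute_run` would go through instrument integrations and the returned data would be harmonized on ingestion; the loop structure, design, run, inspect, repeat, is the part the sketch is meant to show.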
And that data liquidity enables you to pass results around outside of your division, or even outside your company, to other people who are able to see them, and it should be fairly easy to achieve if all that data is ingested the right way.

Well, I'd love to ask this next question to both of you. What is your defining contribution to the future of cloud scale? Mike, do you want to go first?

I would love to. Right now, pharmaceutical and life sciences companies aren't seeing data increase linearly; they're seeing it increase exponentially. We are living in the exabyte era, and really have been on the internet since about 2016. It's only going to get bigger, and it's going to get bigger on a power law. As sequencing comes on, as larger-format microscopy comes on, and as more and more companies take on more data about each individual sample, retain that data for longer, do more analytics on that data, and also do personalized medicine, with more data about a specific patient or animal or cell line, you're just going to see this absolute data explosion. And because of that, the only thing you can really do to keep up with it is be in the cloud. On-prem, you will be buying disk drives and running out of physical materials before you outstrip the data.

Michelle?

Yeah, and I think along with the data storage scale goes the compute scale. Mike is absolutely right. We're seeing personalized drugs; we're seeing customers that want to get to a personalized drug for a patient within a matter of three or four hours. And that kind of scale, on a compute basis, not only requires a ton of data but requires massive compute to be able to get it right. So it really becomes this marriage of getting a huge amount of data together and having the massive compute to really leverage it per patient.
And enabling that ecosystem to come together centrally across such a diverse dataset is the driving force. If you can get the data together but you can't compute on it, or you can compute but you can't get the data together, it just doesn't work; it all needs to come together.

Yeah, well, on your website you have all these great case studies, and I'd love it if you could outline some of your success stories for us, some specific, concrete examples.

Sure, I'll take one first and then I'll pass to Michelle. One really great concrete example: we were able to take on data format processing for a biotech that previously had instruments sitting off in a corner that they could not connect or integrate into a high-throughput screening cascade. We were able to bring them online, get the datasets interpretable, and take their processing time for these screens from the order of weeks to the order of minutes. So they could basically be doing probably a couple hundred more screens per year than they could have otherwise.

Michelle?

We have one customer that is in the process of automating their entire lab, even using robotic arms. So it's a huge mix of being able to ingest IoT data, send experiment data to them, understand sampling, get the results back, and really automate that whole process, which, when they walked me through it, I was like, wow, so cool. And I think a lot of pharma companies and life science companies want to move forward in innovation and do really creative and cool things for patients. But at the end of it, you also have to realize that their core competency is focusing on drugs, getting them to market, and making patients better, and we're just one part of that, helping to enable that process and that ecosystem come to life. So it's really cool to watch.

Right, right.
And in this last year, we've seen how critical the healthcare sector is to people all over the world. Now, looking forward, what do you anticipate some of the big innovations in the sector will be in the next five years, and where do you see Tetra Science's role in that?

So I think Mike mentioned one of the larger innovations already: it's going to be the personalized drugs, the personalized healthcare. I think it is absolutely going to go to full lab automation to some degree, because who knows when the next pandemic will happen and we'll all have to go home again, right? I think the days of trying to move data around manually, if we don't plan for that to be a thing of the past, we're all going to do ourselves a disservice. So I think you'll see more automation, more personalization, and more things that leverage larger amounts of data. Where we hope to sit is really at the ecosystem-enablement part of that. We want to remain open; that's one of our cornerstones. We're not a single-partner platform and we're not tied to particular vendors. We really want to become that central aid and the ecosystem enabler for the labs.

And I'd also love to get your insight...

Sorry, thank you. To that point, we're really trying to unlock discovery, right? Many other horizontal cloud players will do something like letting you upload files or do some massive compute, but they won't have the vertical expertise that we do. They won't have the deep life sciences dedication. We have several PhDs, postdocs, et cetera on staff who have done this for a living and can do it going forward. So you're going to see the realization of something that was really exciting around 2005, 2006: fully automated experimentation.
So get a robot to run an experiment, design it, have a human operator assist with putting together all the automation, and then run it over and over again, cyclically, until you get the result you want. I don't think the compute was ready for that at the time; I don't think the resources were up to snuff. But now you can do it, and you can do it with any tool, instrument, or technique you want, because, to Michelle's point, we're a vendor-agnostic, partner-networked platform. So you can actually assemble this learning automation cascade and have it run in the background while you go home and sleep.

Yeah, and you know, we often hear about automation, but tell us a little bit more specifically: what is the harmonizing effect of Tetra Science? That's not something we usually hear. So what's unique about that?

Do you want to take that, or do you want me to go? You go, please.

All right. So really it's about normalizing and harmonizing the data. What that means is that whether you have a chromatography machine from, let's say, Waters or from another vendor, ideally you'd like to be able to leverage all of your chromatography data and do research across all of it. Most of our customers have machinery of the same sort from different vendors, and so it's really the ability to bring that data together. And sometimes it's even diverse instrumentation. So if I track a molecule or a project or a sample through one set of instrumentation, and I want to see how it was impacted in another set of instrumentation, or what the results were, I'm able to quickly and easily leverage that harmonized data and come to those results quickly.

Mike, I'm sure you have a...

May I offer a metaphor from something outside of science? Hopefully that's not off-topic for this. Let's say you had a parking lot, right?
Filled with different kinds of cars. And let's say you put up a sign at the entrance of that parking lot: no, I'm sorry, we only have space right here for a black 2019 Ford Fusion with leather interior and this kind of tires. That would be crazy; you would never put that kind of limitation on who could park in a parking lot. So why do specific proprietary data systems put that kind of limitation on how data can be processed? We want to make it so that any car, any kind of data, can be processed and considered together in that same parking lot.

Fascinating. Well, thank you both so much for your insights. Really appreciate it. It was wonderful to hear about the R&D data cloud movement in big pharma. And that, of course, was Michelle Bradbury, VP of Product at Tetra Science, and Mike Tarselli, the Chief Scientific Officer at Tetra Science. Thanks again very much for your insights. I'm your host for theCUBE, Natalie Erlich. Catch us again for the next session of the AWS Startup Showcase. Thank you.