Hi, my name is Jeff Denworth. I'm a co-founder of VAST Data, and today we're here to talk about DASE, the Disaggregated and Shared-Everything architecture that's the basis for the VAST Data platform. DASE is the core of everything we do and how we build distributed systems, so we wanted to start by explaining how we think about distributed infrastructure and why the inventions we built are so powerful for the AI era.

Everything starts with the idea that we want to simplify every single aspect of data and infrastructure management and processing. On top of that, we want to embrace standards, so that customers can bring their data and their applications to our systems without having to refactor everything. As an enterprise infrastructure, your data simply becomes ready for the next generation of AI, as opposed to having to build separate systems just for AI processing. And finally, ownership is everything: we want our customers to own their data and deploy it on the infrastructure of their choosing. We don't make those choices for them; we're not the stewards of our customers' infrastructure or data, and they can build their own zero-trust environments as they see fit.

So what we're building is AI-scale distributed infrastructure, and I'm purposely not saying "storage" right now because there's a broader definition in play here. Roughly 20 years ago, the first shared-nothing systems were introduced to the world, and since that time hundreds of billions of dollars of systems have been built in the spirit of, or mirroring, the style of architecture that Google brought. That architecture was really built for a style of processing that is less and less relevant today. And if you look at this through the lens of just a storage environment, that becomes a pretty myopic view: there has never been one solution that solves the problems of both performance and capacity. We realized there were a lot of opportunities for invention in all the challenges we saw with that classic shared-nothing architecture, which didn't solve for all of the problems of scale, and so we decided to build beyond the state of the art.

It starts with an approach we call DASE. DASE stands for Disaggregated and Shared-Everything, and it begins by decoupling the CPUs from the underlying storage devices and interconnecting them over next-generation, commodity, low-latency storage fabrics. We had to work a lot on a data structure that makes it possible for all of these CPUs to access the same shared state without having to coordinate with each other, and when you get there, you have a system designed for extreme levels of parallelism: the parallelism required to take these systems beyond the classic use cases and also support applications like high-performance computing, distributed databases and, of course, next-generation AI. We're building systems that scale into the hundreds-of-petabytes range, a level of scale and availability that has really never been seen in a single distributed systems architecture, but that's what we designed for from day one.
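To make that shared-everything idea concrete, here is a minimal sketch in Python under assumptions of my own; it is not VAST's actual implementation. The names `SharedNVMePool` and `StatelessCNode` are hypothetical, and a thread lock stands in for the atomic primitives a real fabric-attached device would provide. The point it demonstrates is that compute nodes never message each other; they coordinate only through atomic updates to the shared state itself.

```python
import threading

class SharedNVMePool:
    """Stand-in for a pool of storage devices reachable by every CPU
    over a low-latency fabric. All state lives here; the compute
    nodes hold none of it."""

    def __init__(self):
        self._blocks = {}
        self._lock = threading.Lock()  # models the device's atomic ops

    def read(self, key):
        return self._blocks.get(key)

    def compare_and_swap(self, key, expected, new):
        """Atomic CAS on shared media: the only coordination
        mechanism the compute nodes need."""
        with self._lock:
            if self._blocks.get(key) == expected:
                self._blocks[key] = new
                return True
            return False

class StatelessCNode:
    """A compute node in a DASE-style cluster: it caches nothing
    authoritative and never messages its peers. Any node can serve
    any request, because every node sees the same shared state."""

    def __init__(self, node_id, pool):
        self.node_id = node_id
        self.pool = pool

    def append_to_object(self, key, data):
        # Optimistic update: retry on CAS failure instead of taking
        # cluster-wide locks or coordinating with other nodes.
        while True:
            current = self.pool.read(key) or b""
            if self.pool.compare_and_swap(key, current, current + data):
                return

# Any number of nodes can update the same object in parallel.
pool = SharedNVMePool()
nodes = [StatelessCNode(i, pool) for i in range(4)]
threads = [threading.Thread(target=n.append_to_object, args=("obj", b"x"))
           for n in nodes]
for t in threads: t.start()
for t in threads: t.join()
assert pool.read("obj") == b"xxxx"
```

Contrast this with shared-nothing, where each node owns a slice of the data and every cross-slice operation requires node-to-node coordination; here any node can serve any request against any data, which is what lets the parallelism scale with the number of CPUs.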
This new approach allows you to scale to tens of thousands of CPUs and exabytes of low-cost flash infrastructure, and we apply a number of efficiency algorithms to your data so that you can genuinely afford one tier of flash for all of it.

Now that you have this basis for your data store, the next thing we wanted to build was a mechanism that lets us manage data better than you've been able to in the past. It's essentially a landing zone, or shock absorber, for the system: a place where data lands before it flows down into very low-cost, web-scale flash. You can snapshot across the whole system at any directory depth, with up to millions of snapshots running in the system, and that snapshot engine becomes the basis for all sorts of other data manipulation that we do. It's the basis for our distributed replication engine and for data cataloging; it's the basis for all of our efficiency features; and, most importantly, it lets us run global compression mechanisms by correlating data at global scale before that data flows down into low-cost flash.

You can deploy this in your choice of locations, from cloud to edge, so you can build a single data plane that runs across all the different places you may compute, in a global namespace we build called the VAST DataSpace. What comes from this is a foundation for the next generation of computing: functions and logic married with state. You can build data-center-scale computers and interconnect those computers all around the world, on infrastructure that's more affordable, more parallel, and more scalable than any approach before it, and simpler to manage and more resilient than anything that's ever been built.
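Here is a rough Python sketch of the write path described above, again under assumptions of my own: the names `WriteBufferTier`, `FlashTier`, `similarity_sketch`, and `flow_down` are hypothetical, and the toy fingerprint stands in for whatever data-reduction algorithm the real system uses. It shows the shape of the pipeline: writes are absorbed by a landing-zone tier and acknowledged immediately, and a background flow-down step groups correlated blocks so they compress together before landing on low-cost flash.

```python
import zlib

def similarity_sketch(block, k=8):
    """Toy similarity fingerprint: a hash of the k smallest 4-byte
    shingles. Blocks with matching sketches are treated as similar
    enough to compress together. (Illustrative only.)"""
    shingles = {block[i:i + 4] for i in range(len(block) - 3)}
    return hash(tuple(sorted(shingles)[:k]))

class FlashTier:
    """The low-cost, web-scale flash capacity tier."""
    def __init__(self):
        self.extents = []

    def store(self, extent):
        self.extents.append(extent)

class WriteBufferTier:
    """The 'landing zone / shock absorber': absorbs incoming writes
    at full speed, then flows data down to flash in large,
    jointly-compressed batches."""

    def __init__(self, flash):
        self.buffer = []
        self.flash = flash

    def write(self, block):
        self.buffer.append(block)  # fast acknowledgment to the client

    def flow_down(self):
        # Group buffered blocks by similarity so correlated data is
        # compressed together, then migrate to the capacity tier.
        groups = {}
        for block in self.buffer:
            groups.setdefault(similarity_sketch(block), []).append(block)
        for blocks in groups.values():
            self.flash.store(zlib.compress(b"".join(blocks)))
        self.buffer.clear()

flash = FlashTier()
buf = WriteBufferTier(flash)
buf.write(b"A" * 4096)
buf.write(b"A" * 4096)   # correlates with the first block
buf.write(b"B" * 4096)   # lands in its own group
buf.flow_down()
print(len(flash.extents), "compressed extents on flash")  # -> 2
```

The design point the sketch illustrates is why the buffer matters: compressing each write in isolation can only exploit redundancy within that one write, whereas holding data in a landing zone lets the reduction step see, and correlate, many blocks at once before anything is committed to the capacity tier.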