Hi, I'm Jeff Denworth, chief marketing officer and co-founder of VAST Data. Now we're going to talk a little bit about the VAST DataSpace, which is our approach to building a global system that allows you to access and process your data from anywhere.

First, it's important to establish the principles by which we build all of our technologies. The first is that we're looking to really simplify every single aspect of data processing and computing. The DataSpace is a global approach to breaking down the barriers to accessing data all around the world, so it definitely conforms to that first principle. The second is embracing standards. Particularly when you start thinking about the different cloud environments you may want to run applications in or put your data in, APIs and all these different technologies tend to complicate the process of normalizing your application environment. Here we're bringing all standard protocols, NAS, object, and database, to the table to make things simpler for customers, so they don't have to worry about different APIs in any environment. And finally, we're always about enabling ownership: ownership of your data, where you get to decide where it's stored and where it's accessed from. We're not offering a service where we are the stewards of your data; rather, we're providing you the tools to deploy infrastructure in the places of your choosing.

So the goal of the product is really to unlock access to your data from anywhere, from edge to cloud. There's a lot of conversation in the marketplace today around data gravity, and it makes a lot of sense: people have large data stores that are typically difficult to move from cloud to cloud, from on-premises to cloud, or vice versa. There's not enough conversation happening right now about something I call compute gravity, and there are a number of reasons why it needs to be a topic of conversation today. From a supply chain perspective, it's very difficult to quickly get your hands on a lot of the processors you may need, and if you have the ability to simply move data to where you need it, you can start to alleviate some of those challenges. And finally, there's the idea of resource utilization. Customers typically have on-premises data centers, often not just one but many distributed around the world, and they have cloud resources available to them as well. If you think about resource utilization, there's all sorts of variability from data center to data center, and we wanted to build an approach that allows for better utilization by moving the data to the compute in the cases where that makes more sense.

Thinking across all these different types of infrastructure, we started to talk with customers about all the different places where they stored and computed on their data. Here you have silos of infrastructure that get built up, particularly from a data management perspective, from data center to data center. We really wanted to unify all of this so people could see their data from anywhere. But as we started to think about and work with customers on their public cloud endeavors, what we saw is that APIs became a critical consideration in dealing with all these different cloud platforms, and each cloud vendor has their own APIs that they want to support.
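To make that contrast concrete, here is a minimal sketch of what standards-based access to one namespace could look like, using only the ordinary S3 and NFS interfaces mentioned above. The endpoint URL, bucket name, credentials, and mount path are hypothetical placeholders for illustration, not actual VAST Data names:

```python
import boto3

# Illustrative sketch only: the endpoint, bucket, credentials, and mount path
# below are hypothetical placeholders, not actual VAST Data names.

# Object access: any S3-compatible client can be pointed at the local endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="https://dataspace.example.internal",  # hypothetical endpoint
    aws_access_key_id="EXAMPLE_ACCESS_KEY",
    aws_secret_access_key="EXAMPLE_SECRET_KEY",
)
obj = s3.get_object(Bucket="experiments", Key="runs/sample-001.parquet")
payload = obj["Body"].read()

# File access: the same namespace can be mounted over NFS and read as ordinary files.
with open("/mnt/dataspace/experiments/runs/sample-001.parquet", "rb") as f:
    same_payload = f.read()
```

The point of the sketch is simply that the application code stays on standard protocols regardless of which data center or cloud it happens to run in.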
At the end of the day, that fragmentation makes it really difficult for customers to move their applications from here to there when there's a business reason to do so. As we started to look at this, we realized that not only did we need to abstract the idea of locality away from the compute, but we also needed to normalize and simplify the presentation of data as customers started to work from edge to cloud across all the different data center platforms available to them. So we wanted to build a global namespace, but global namespaces have historically been really hard to build. So what did we do? We went ahead and built one.

The DataSpace is essentially our fabric that interconnects all the different systems into one global presentation of your data, and as I mentioned earlier, we're going from edge all the way to cloud. The objective here is to build a system that breaks the fundamental trade-off between the performance you should be able to get at the edge of the network and the consistency that gets enforced across all the different sites where you may want to process your data.

Starting today, we're really excited to announce support for cloud platforms that you can now deploy the VAST Data Platform on. The DataSpace allows you to extend into cloud platforms such as Microsoft Azure, AWS, and Google Cloud Platform. To start, we'll offer small representations of the namespace that you can deploy as single nodes, and then scale up to as many nodes as you'd like in large clusters deployed in the clouds of your choosing. The nice thing about this is that we've now normalized all the APIs across all the different clouds you may want to deploy your data services in, and it allows you to move data in and out of clouds when you want to use these platforms for burst computing. This all becomes part of a larger strategy where we've built support for systems in your edge data centers, your larger on-prem data centers, and networking providers like NVIDIA and Arista, and now we're stitching the cloud platforms you may compute in into one unified namespace that runs across all of this in a seamless fashion.
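To put the burst-computing idea in more concrete terms, here is a hedged sketch of a worker that could run unchanged on-premises or in any of the supported clouds, because it only speaks standard S3 against whatever local endpoint it is handed. The environment variable, endpoint, and bucket names are illustrative assumptions, not part of any actual product interface:

```python
import os
import boto3

# Hedged sketch: a burst-compute worker that could run unchanged on-premises or
# in any cloud region, since it only uses standard S3 against a local endpoint.
# The environment variable, endpoint, and bucket names are illustrative.
ENDPOINT = os.environ.get("DATASPACE_ENDPOINT", "https://dataspace.example.internal")
BUCKET = "experiments"

# Credentials come from the default AWS credential chain (env vars, profile, etc.).
s3 = boto3.client("s3", endpoint_url=ENDPOINT)

def process_prefix(prefix: str) -> int:
    """Walk one prefix in the shared namespace and process each object found."""
    paginator = s3.get_paginator("list_objects_v2")
    processed = 0
    for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
        for item in page.get("Contents", []):
            body = s3.get_object(Bucket=BUCKET, Key=item["Key"])["Body"].read()
            # ... replace with the real computation over `body` ...
            processed += 1
    return processed

if __name__ == "__main__":
    print(process_prefix("runs/2024/"))
```

The same worker could then be launched next to whichever site or cloud currently has idle capacity, which is the resource-utilization point raised earlier.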