So Jillian said, thanks for a great keynote. Can you tell us how you got started in Open Source? I started as an engineering manager in 2000 at Ofoto, and it was Linux and Apache and Open Source Java that really let us build one of the first photography businesses on the internet. Without the fluidity of the code, without our ability to not worry about licensing, we couldn't have done it; the only thing we had to pay for directly was our database in order to scale the distributed systems we built. Everything else was Open Source. We kind of took it for granted. It was just the set of tools that were obviously the right ones for the job.

Then in 2005, I was working for Microsoft, which was a very strange turn of events, because I had been very anti-Microsoft, a Microsoft competitor, for my whole career before that. I joined the venture capital group and went to some conferences on their behalf. Software as a service was just starting up, and one of the things I noticed at those conferences was that nobody was talking about Microsoft anything. It was all Linux, Apache, MySQL, PHP, Java. So I came back to the company and wrote a strategy paper, about 10 pages, along with a few really thoughtful folks in the industry, including Andrew Akin. That paper basically said Microsoft needed to embrace Open Source, needed to make its technology very cheap or free to startups, and needed to figure out how to work really well alongside Linux. Obviously that was some crazy thinking, but instead of getting fired, my paper was included in Bill Gates' strategy week, and then it made its way to Bill Hilf, who was running Linux and Open Source for the company. He invited me to come work in Seattle, leading Open Source and Linux technology strategy for Microsoft, and I started that job in 2006.

Let's see, let me take a look at some of the other questions. In regards to microservices, are there any lessons learned about what not to do that I can share? Yes. The key thing, and I didn't put too much color commentary on this in the keynote, the key thing that most of us who've been doing microservices for a few years up to a decade have learned is that they're a really, really good idea to start with. The team's very excited, it's very small, it's very tight. You get a good service out in six months, you iterate it several times over the next year. About two years into the lifecycle of a microservice, people start to leave the team, so you end up with a little bit of brain drain. The microservice itself starts to settle down, become a little more stable, and go into maintenance mode, and people forget exactly how everything works. With that rate of change and the brain drain, people forget how to change the microservice. That's all well and good, technology should stabilize. But the details of how to change the stack get lost, because in microservices 1.0 there was so much focus on giving the team all of the control: pick any technology you want, don't limit the team, go as fast as possible. And again, for the first year or two, that's a fantastic idea. But as soon as you get to the next step, two years in, three years in, when you need to update or modify the service, you fundamentally need to standardize your stack, because in a 100 person organization, a 200 person organization, you're gonna have people moving between teams, and new people are gonna have to come in, pick up the technology, and make it work for you. So standardization ends up being absolutely mandatory. I think I have a little background noise here.
I'm gonna close my window. So the big lesson for me is that you should have freedom within an opinionated framework. If you're gonna get into microservices for real, you need to figure out: what is your application platform? What's the set of technologies you're gonna enable, probably a fairly small number? How is your platform engineering team going to work to support all of your application developers? How many databases are you gonna use? What styles of storage are you gonna enable? That way you have a coherent environment that more teams can actually come through over time, and you're not just creating instant legacy. I've seen a lot of these platform organizations at Google, at Cloud Foundry, and at modern organizations using Kubernetes. In my last job before DataStax, I was Cloud CTO and CIO for Autodesk, with about 1,000 engineers in 40 different locations. We had a lot of activity going on, and our ability to govern that activity was really limited. So getting into microservices quickly is great, but coming into it with a platform strategy is much, much better, because that will make all the work you did up front useful later on.

I've got an excellent question here: what are your tips for becoming a more data-driven organization, and how can multiple internal groups work together? I'll take those two questions separately. On becoming a data-driven organization, there's a cultural orientation here that has everything to do with decision-making. A lot of organizations talk about becoming data-driven or about how much data matters to them, but if the decision style of the executives is still "I lead from the gut, I listen until I know the answer, and then I just tell everybody what to do," that's never gonna be data-driven. The fundamental shift to becoming data-driven is a wholesale shift towards science, so that nobody's ideas are inherently more valid than other people's just because of their hierarchical position. Decisions get made, and can be reassessed and compared to others, based on the strength of the hypothesis, the data you'd need to gather to prove or disprove that hypothesis, and your ability to iterate on it. That's kind of the core. Once you see management teams allocating money, changing organizations, and setting objectives based on science, in a way that everybody in the organization can participate in equally because everybody can bring data, then you have a real hope of creating a data-driven organization. Without that, it's a lot harder.

Now, once you've got that, then understanding how to think is really important. There's an outstanding book that I recommend without hesitation to everyone interested in this, which is How to Measure Anything. It's a book on how you use data to quantify uncertainty so you can make better decisions. The final piece of becoming a data-driven organization is actually having access to all of your data infrastructure. That means somebody who didn't write the microservice can access its data in a way that's consistent and coherent. So, data platform teams: you see them at Home Depot, at Target, and you see fairly advanced data platform teams at FedEx and other advanced organizations. There's a whole mode of curating the data, making sure you have good access and good policy.
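As a minimal sketch of what that kind of policy-governed access from a data platform team could look like, here's an illustration in Python; the names (DataPlatform, Dataset, AccessPolicy) and the specific checks are hypothetical, not any particular product's API:

```python
# Minimal sketch, assuming a hypothetical curated data catalog with explicit policy.
from dataclasses import dataclass


@dataclass
class AccessPolicy:
    """Who may read a dataset, and for which declared purposes."""
    allowed_roles: set          # e.g. {"analyst", "data-scientist"}
    allowed_purposes: set       # e.g. {"analytics", "ml-training"}
    contains_pii: bool = False  # flags datasets that need extra ethical review


@dataclass
class Dataset:
    name: str
    owner_team: str
    schema_version: str
    policy: AccessPolicy


class DataPlatform:
    """Tiny catalog: teams publish curated datasets, consumers get access only per policy."""

    def __init__(self):
        self._catalog = {}

    def publish(self, dataset: Dataset) -> None:
        self._catalog[dataset.name] = dataset

    def request_access(self, name: str, role: str, purpose: str) -> Dataset:
        ds = self._catalog[name]
        if role not in ds.policy.allowed_roles:
            raise PermissionError(f"role {role!r} may not read {name}")
        if purpose not in ds.policy.allowed_purposes:
            raise PermissionError(f"{name} is not approved for purpose {purpose!r}")
        if ds.policy.contains_pii and purpose == "ml-training":
            # A crude data-ethics gate: PII data needs explicit review before ML use.
            raise PermissionError(f"{name} contains PII and needs an ethics review first")
        return ds


# The checkout team publishes a dataset; an analyst elsewhere requests governed access.
platform = DataPlatform()
platform.publish(Dataset(
    name="orders.daily_summary",
    owner_team="checkout",
    schema_version="v3",
    policy=AccessPolicy(allowed_roles={"analyst", "data-scientist"},
                        allowed_purposes={"analytics"}),
))
ds = platform.request_access("orders.daily_summary", role="analyst", purpose="analytics")
print(ds.owner_team, ds.schema_version)  # -> checkout v3
```

The point isn't the specific checks; it's that access goes through a curated catalog with explicit policy, rather than through whoever happens to remember how the producing service works.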
But I would say that the technology piece is almost the last step, after you've got the cultural piece nailed: being able to have good access to data under clear policies, so that by the time it gets to your data scientists or business analysts, not only do they know that the data quality is good, but they actually have permission to use the data. The advanced topic that all data-driven enterprises are talking about and working on now is data ethics. Not only do we have ethical access to the data, but the ways we're going to apply ML to it, the questions we're gonna ask about the data, the inferences we'll draw from it, are those ethical, or should we not be asking those questions at all? So that's the data-driven organization answer.

On the question of how multiple internal groups can work together, I think it comes back to having a common way of making decisions. I've been at a number of companies in the last 26 years, and I find that decision-making ends up being the most broken part of internal collaboration, because it creates a lot of politics around who gets to say what about what. If you don't establish a really good ground rule for hypothesis-driven, test-driven, data-driven decision-making, then you'll have a hard time getting along, and it all becomes very relationship-based, which is hard for outsiders to penetrate and, I tend to think, anti-inclusive.

There's another really good question here on service meshes powering connectivity between microservices: do we see the data mesh using similar technologies, like Istio, Linkerd, or Kuma, to keep the communication between components standard across the mesh layers, or does the data mesh have separate requirements? This has been an area of a lot of focus for me in the last year or two. Data meshes seem to be evolving in a way that is similar to service meshes, so you need a data proxy. The affordances around that proxy seem a little different from what you see with Envoy, for example. At the HTTP services layer, you've got an expectation that everything can be accessed through HTTP. With data, you're not always accessing information through HTTP: we have all kinds of high-throughput protocols and very particular, older patterns that we use, so a data proxy has got to have more capabilities than what we see in Envoy currently. But the overall architecture we see for a data mesh is much like the service mesh above that proxy layer: being able to connect it with policies, with filters, with programmability. That's sort of the next horizon. In service meshes, it's typical that you have the ability to write modules or filters using something that will reduce to WebAssembly, maybe something like Lua, something that is user-programmable. We need that too, so you can have control over your data plane the same way you have control over your service plane. So we see similar architectural patterns, but different requirements, mostly because of the speed you need to operate at and the particularities of where data lives and how you reach the right endpoint for it in a distributed system. You often have to worry about partitioning rules, locality of reference, and network locations, things you typically don't worry about as much in a stateless architecture where you're doing over-the-top connections through a service mesh.
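To make that a bit more concrete, here's a minimal sketch of what such a data proxy could look like, with a user-programmable filter chain in front of partition- and locality-aware routing; the names (DataProxy, Endpoint, audit_filter) and the toy partitioner are hypothetical illustrations, not Envoy's or any real project's API:

```python
# Hypothetical data proxy: filters first (the role WebAssembly/Lua filters play
# in a service mesh), then routing by partition key and network locality.
from dataclasses import dataclass
from typing import Callable, Dict, List
import hashlib


@dataclass
class Endpoint:
    address: str
    zone: str  # network locality, e.g. "us-east-1a"


@dataclass
class DataRequest:
    table: str
    partition_key: str
    client_zone: str
    headers: Dict[str, str]


# A filter sees every request before routing; it can mutate or reject it.
Filter = Callable[[DataRequest], DataRequest]


class DataProxy:
    def __init__(self, endpoints: List[Endpoint], filters: List[Filter]):
        self.endpoints = endpoints
        self.filters = filters

    def _score(self, key: str, endpoint: Endpoint) -> int:
        # Toy rendezvous hash: a stand-in for a real partitioner's replica placement.
        return int(hashlib.md5(f"{key}|{endpoint.address}".encode()).hexdigest(), 16)

    def route(self, req: DataRequest) -> Endpoint:
        for f in self.filters:  # policy / programmability layer
            req = f(req)
        # Partitioning rule: pick the two highest-scoring replicas for this key...
        replicas = sorted(self.endpoints,
                          key=lambda e: self._score(req.partition_key, e),
                          reverse=True)[:2]
        # ...then prefer locality of reference: a replica in the client's zone.
        local = [e for e in replicas if e.zone == req.client_zone]
        return (local or replicas)[0]


def audit_filter(req: DataRequest) -> DataRequest:
    """Example user filter: refuse unauthenticated reads, stamp an audit header."""
    if "auth-token" not in req.headers:
        raise PermissionError("unauthenticated data access")
    req.headers["audit-id"] = f"{req.table}:{req.partition_key}"
    return req


proxy = DataProxy(
    endpoints=[Endpoint("10.0.0.1", "us-east-1a"), Endpoint("10.0.0.2", "us-east-1b")],
    filters=[audit_filter],
)
target = proxy.route(DataRequest("users", "user-42", "us-east-1a", {"auth-token": "t"}))
print(target.address, target.zone)
```

The filter chain is where the user-programmable layer, something that reduces to WebAssembly or Lua, would live; the routing step is where partitioning rules and locality of reference make the requirements different from a stateless HTTP proxy.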
These are all awesome questions. I think I'm probably just about out of time. I've got a few direct messages here and a couple of questions left in the Q&A, but I think I've answered what I've got so far, so I'm happy to take other questions offline. Hopefully this was a good use of your time, and I hope you enjoyed the keynote.

I think there's an amazing opportunity for many of us who are working on the next generation of microservices to come together around building an open environment and an open architecture for data meshes. One of the things I believe fundamentally, and have been practicing for many years, is that you need to have freedom for your own data. You can't have it locked into a single cloud. Most of the people I talk with are trying to figure out how to future-proof their architecture. They're doing hybrid development, they're doing multi-cloud architectures, and they know that the limiting factor on their ability to control their technology future is where they can move their data. Dave McCrory has given us a great one with data gravity, a cognitive metaphor that's so powerful here: if you overweight your gravity into one cloud or one on-premises environment, it's gonna limit your ability to move, to escape that vendor, or to renegotiate your contract. So how can we collectively make sure that your application data plane is uniformly accessible from whatever cloud you choose to use? The answer for sure is gonna be open source, a range of open source technologies. I care a lot about Cassandra as a database, but I'm also super interested in Spark and Flink, in Pulsar and Kafka, in RocksDB, right? There's a range of technologies that are all being worked together into the realization of a mesh, but it's this control layer, our ability to have a policy layer that describes what's happening to the data. How do these bits actually get stored? What class of security do you use? What kinds of audit rights and post-hoc analysis do you need to understand who's used the data and whether it's all fine? That's the stuff that's gonna take us probably a few years working together as an open community, probably following on the heels of what's happened so effectively in service meshes.

And then I've got one final question here on successes to share, lessons learned that can be applied to someone's own project. I had the privilege of being recruited to Google in 2016 to run the Kubernetes business as VP of product management there, and the rate of adoption of Kubernetes has been absolutely spectacular. I would point out two successes in Kubernetes that are interesting to me. The first was the courage of how the Kubernetes organization was being led. I was the CEO of Cloud Foundry at the time, and I met with Craig McLuckie, who was the leader for Kubernetes. Craig took the brave stance of saying that he really liked some of the things that were happening in Cloud Foundry; he liked how the technology worked. There were a lot of social pressures on him to put up a hard wall and to create a kind of zero-sum game between Kubernetes and Cloud Foundry at the time. So when he took that step and said, hey, there's some stuff about Cloud Foundry I really like, it made it pretty clear to me that here's a really open thinker, here's probably the beginning of an open community. What could we give away from Cloud Foundry? So I had Abby Kearns take over a project called the Open Service Broker API, to take Cloud Foundry's service broker capabilities and bring them to Kubernetes.
So I think that was a big success: being open-minded enough that we could create a positive-sum game across these two very different projects. Later on, once I was actually at Google, I had an opportunity to look at the leadership structure for open source projects over time, and at the leadership structure inside Google. It was a bit confused. Who was in charge of Kubernetes? How was it getting done? What was the peering and the teaming? It's important to look at open source not just as a bunch of open source code, but really to look in a disciplined way at the organization that is going to structure that code. It's Conway's law, right? The communication patterns of the organization determine the architectural structure of what you get out of it. So what I did as VP of PM at Google was make sure we had a very clean, consistent, clear organizational structure. I put Aparna Sinha in charge of Kubernetes, so it was really clear there was one leader for product management. She had an amazing partnership with Chen Goldberg, who was the director of engineering for Kubernetes, and the two of them were able to have a high-bandwidth relationship and get along super well. They're both excellent leaders, and they brought good organizational orientation and high trust between them. I think that ended up creating ongoing velocity for the project; I had less and less and less to do with it as it went faster and faster and faster.

So if you want to succeed in an open source project, look very, very hard at the strength of the leaders, the clarity of the communication they have with each other, and your courage in being able to extend out to other projects, with the trust available to create positive-sum games, not just within the little piece of the world you think you're working on, but seeing the connectedness with related projects, even ones that may look competitive to you.

With that, I believe I'm out of time. I hope those comments have been useful to anybody who's trying to solve these classes of problems, and I look forward to chatting with folks in this community a lot more about data meshes in the near future. You can always find me at sam@ramji.org, at sam.ramji@datastax.com, or at @sramji on Twitter. Thanks for your time.