Thanks, everybody, and good morning. It's nice to be here in San Jose, and I'm not complaining about the two-hour commute from the city; it wasn't too bad.

It is a pretty exciting time for artificial intelligence in general and generative AI in particular. It's already impacting so much of the work we do, from mundane tasks, like the generative AI tools I use to write a weekly newsletter, to advancing medical research, where it's showing incredible signs of breakthrough: custom vaccines, and cures for diseases that until now we thought were years and decades away. Generative AI is a fundamentally new technology. In fact, it's being compared to the early days of the internet; generative AI right now is at roughly the same point the internet was in the Netscape days. And I think that comparison really relates to the concept of openness, because just like the internet, generative AI and large language model technology were built on openness.

That's the big message I want to leave you all with today. If you remember one thing, it's that the reason generative AI got this far, and the reason it will keep advancing, is that the world wants to see an open AI ecosystem: open source software that enables us to build foundation models and machine learning models, open data, open research. In every component of generative AI, people are thirsting for openness. It was built on open tools like PyTorch, open data sets, and open academic research.

And yet today, unfortunately, we're starting to see a trend away from openness in generative AI and toward closed foundation models that are only accessible via APIs. Even worse, we're seeing calls from regulators in some corners of the world to ban open source foundation models, out of vague concerns that someday, in the wrong hands, this technology could be too dangerous. I find those arguments pretty vague and pretty unconvincing. Alongside those theoretical, somewhat unconvincing concerns, there are real concerns we should have about generative AI: bias, ethics, hallucinations, data privacy, security. Those are meaningful concerns we should all have. But when it comes to openness, I think it's a pretty open-and-shut case, for lack of a better term.

We've heard arguments before that a new technology being open and freely available to everyone is scary. In the 1990s, the US government tried to restrict open source cryptography; it even pursued a criminal investigation of Phil Zimmermann, the creator of PGP, out of fear of this open technology. What we learned in retrospect is that openness in cryptography didn't make the world a more dangerous place; it made the world a more private and a safer place, and it enabled whole new forms of innovation. Most importantly, and this is a core ethic of the Linux Foundation, open innovation isn't just a way to create interesting technology; we believe it is a basic form of freedom of expression. We also know how futile it is to try to lock down open innovation. Bad actors are going to ignore bans. Restrictions on openness are anti-competitive, they stifle competition, and they tend to benefit a small set of incumbents. And they're also just about technically impossible to enforce.
One of my favorite lines about this is from Mustafa Suleyman, the co-founder of DeepMind, who described the futility of trying to restrict open source as something like trying to catch rain. So calls for restricting open AI foundation models and tooling are futile at best, and at worst a cynical attempt at regulatory capture by incumbents.

But those are just the tactics of it. My favorite argument for why generative AI needs to be open comes from Percy Liang, who is at Stanford University and directs its Center for Research on Foundation Models. He talks about three fundamental things that openness gives generative AI: transparency, trust, and attribution. First, transparency into how these models actually work. One of the slightly odd things about large language models is that, right now, the world doesn't really know how they work. And without knowing how they work, it's very difficult to get the second attribute, which is trust. How do we know why hallucinations happen when they happen? How do we know where the data is coming from? How do we know privacy problems aren't going to happen? It's through transparency that we earn trust on these very immediate and specific concerns around privacy, bias, hallucination, and the rest.

That brings us to the final benefit of openness, which is attribution. Remember, in a foundation model, the model is the data and the data is the model; that's the simplest way to describe it. So it's very important that we understand where the data comes from, who created it, and who owns it, both for privacy and for participation in the value chain. You've seen plenty of arguments and controversy about how writers, photographers, scientists, and academics can participate in sharing data, whether they want to share it freely or participate in some other way in the value chain of the generative AI economies that are just now emerging.

At the Linux Foundation, and in this crowd, I don't have to argue that hard for openness. But one of the things we decided to do is look broadly at how the world thinks about generative AI and how it relates to openness. So in September our research group kicked off a study to ask exactly these questions. From a big-picture perspective, and I don't think this is a surprise to anyone, the world is rapidly adopting generative AI tooling. Half of the organizations we talked to are already implementing these tools. The vast majority plan to invest significantly. Most see it as important to their business and are embedding it in their existing products and workflows. In fact, the main things holding people back right now are security and data privacy concerns around generative AI projects in their organizations. Anecdotally, I think another thing holding people back is that people are just now waking up to the fact that a prerequisite for benefiting from large language models and AI is getting your data act in order, having good, structured, accessible data, and many enterprises aren't very good at that. But when we asked people specifically about openness in generative AI, the overwhelming answer was: we want open.
Almost all of the people we talked to said it's important that the technology used to build generative AI tooling and foundation models be open, and be housed in neutral organizations they can count on for long periods of time, given how fundamental this technology is. Far more organizations preferred open source generative AI technology to proprietary solutions, 41% versus 9%, with the rest neutral. They wanted to collaborate in the open and support the collective innovation that's happening, and they saw openness as easing integration into their products and services. And the vast majority feel that, just as Percy Liang described, the transparency that comes from open source gives them confidence: increased control over their data and transparency into the data they're using for business decisions and outcomes.

So I'm pretty happy to see that the world wants open generative AI. I'm also happy that just this week, regulators in the European Union released the AI regulation they had been working on. The original drafts actually contained restrictions on open source foundation models, and I'm happy to see that did not end up in the final legislation. I'm even happier that almost the day after European regulators issued that legislation, we saw one of the most high-performing open source foundation models released in France, practically immediately.

At the Linux Foundation, it's our goal to continue to provide a neutral home for first-class open source tools for building foundation models and machine learning models, from PyTorch to MLflow and beyond. We want to continue to support standards like ONNX, which provides an open format for representing deep learning models, and standards for unified acceleration in computing: our Unified Acceleration (UXL) Foundation provides an open programming model for accelerators such as GPUs. We want to provide tools for attribution: the C2PA, the Coalition for Content Provenance and Authenticity, is producing things like digital watermarks, which are now being embedded in Leica and Sony cameras and allow for data traceability across the entire LLM value chain.

And finally, we want to make sure that everybody, whether regulators or industry, understands that openness enables transparency, trust, and attribution, and allows for competition and innovation. I hope you all join me in participating in building these wonderful tools and helping us realize the real potential of generative AI in curing disease, fighting climate change, and doing good for the world. Thank you very much.
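To make the ONNX mention above a little more concrete, here is a minimal sketch, not from the talk itself, of exporting a small PyTorch model to the open ONNX format; the toy model, tensor shapes, and file name are illustrative assumptions rather than anything referenced by the speaker.

```python
# Minimal, illustrative sketch: exporting a tiny PyTorch model to the open
# ONNX format. The model, shapes, and file name are made up for illustration.
import torch
import torch.nn as nn

# A stand-in model; any trained torch.nn.Module could be exported the same way.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

dummy_input = torch.randn(1, 4)  # example input used to trace tensor shapes

# torch.onnx.export traces the model and writes a framework-neutral .onnx file
# that other open tooling can inspect or run.
torch.onnx.export(
    model,
    dummy_input,
    "tiny_model.onnx",
    input_names=["features"],
    output_names=["logits"],
)
```

The exported file can then be loaded by any ONNX-compatible runtime, which is the kind of cross-tool portability an open, framework-neutral format makes possible.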