All right, welcome everyone. I'm Joe Spisak, a product leader at Meta, and I'm here to talk about product management in AI through the lens of open science. A little bit about me before we jump into the meat of things. If you had to sum me up in a single line, it would be: I'm passionate about the intersection of open source, AI, and community building. I see just about everything I do as an opportunity to build community, work in the open, publish, and collaborate with partners. That's really what gets me out of bed in the morning, why I do what I do, and why I work at what I think is the best place to do it, which is Meta.

At present, as I said, I'm a product leader in Meta AI Research, formerly known as FAIR, or Facebook AI Research. I also spend time in the VC world as an executive in residence at Campus Ventures in Portola Valley, so kind of my neighbor next door to Menlo Park here. Previously, I spent about four years leading product for our ML offerings, how models were built and trained across Facebook but also across a broader community externally; I'll talk more about that. That was really centered around PyTorch, the open source framework that's very popular today. Prior to that, I was at Amazon leading product and partnerships in the AWS AI team, and before that I was with Intel leading ML strategy for their cloud group. And before that, I spent about a decade in the semiconductor space, doing everything from designing system-on-chip technologies that go into mobile phones, through to engineering and product management strategy, and almost everything in between. So my career has been a bit of a tale of two careers: one in the semiconductor and chip space, and one in AI for almost the last decade now. So here's a brief agenda for this talk.
First, I'm gonna walk through the basics of AI and how it's developed, just to baseline everyone who's listening. I'll talk a little bit about the applications; I won't go into a ton of depth, but I'll show some of the cool areas where it's applied today, especially at Meta. Then I'll jump into why open science is important. This is something that's near and dear to my heart and something I've been working on for a number of years now, and there are real reasons to operate in the open and to open up the technology that we're building. Next, I'll walk through what's really the intent of this talk, which is how AI relates to different roles, especially how it relates to being a PM and building AI-first products. And lastly, I'll give you some calls to action on ways you can dive deeper and engage. You can be just about anyone: a PM, a data scientist, a researcher. There's always some way to engage, so hopefully I can give you some ideas on how to do that.

So, jumping first and foremost into AI. I want to ground everyone in how things are typically defined and how AI is actually developed. For those who've seen diagrams like this before, there are lots of definitions of AI. The one I tend to use is: any technique that enables computers to mimic human intelligence. And this could frankly be any approach: rules-based systems, also called expert systems; deep networks; very simple linear classifiers; almost anything. And of course, how we define or view AI is a moving target. What we considered to mimic human intelligence ten years ago has shifted since then. All of that is underpinned by machine learning, a subset of AI that includes statistical methods enabling machines to improve on tasks with more experience.
And of course, experience here is data. Training on this data allows machines to perform even better on the tasks we want them to do. And lastly, a subset of ML is deep learning. This is really what the last decade of my career, and what the world for the most part, has been using, especially in these higher-end applications: deep neural networks. When I started, the networks were relatively small, though still relatively deep for the time. Now they're massive, with lots of layers trained on a vast amount of data, and they can perform at human level on a number of tasks. It's very exciting where things like speech and image recognition and a number of other areas have progressed over that time.

And anyone who's done machine learning knows this task; it's literally the hello world of machine learning. Yann LeCun wrote this paper, I believe in 1998, and the network was called LeNet. It was a convolutional neural network that would classify handwritten digits, and the logical application here is: you're a bank and you're automating check cashing. If you've ever put a check into a machine to cash it or deposit money, you've basically used something similar to this. Of course, the techniques these days are much more advanced than LeNet, but the concept is roughly the same. I have a problem: I wanna be able to understand handwritten numbers and classify them so I can figure out how much your check is written for. I have to train a model on data, so I might have a big corpus of handwritten digits with annotations, meaning there's ground truth that says that handwritten nine is a nine, and so on. I take a model architecture, in this case a convolutional neural network, or CNN. I will then essentially fit that; it's essentially a function.
I will fit that to the data and then deploy it in an application, say in the bank teller machine that the end user interacts with. That's a very simplified view; we'll talk a little more about how things are developed, but that is an end-to-end view of things. And of course the data is massively important. This is the lifeblood of any machine learning model. You start with the training set; this is the data your model is actually trained on. This creates weights, i.e. the function that's developed as part of your model. You then test that on what's called the test set, or even a validation set, and what you wanna see here is how it's doing on data it hasn't seen: how well is it generalizing? And the data itself has annotations, as I've mentioned. The vast majority of what's deployed today is what's called supervised learning, which we'll get to in a slide or two.

The output, as I mentioned, is a model, and a model is really this broad term for the product of all this training. I throw a lot of compute at things, GPUs and CPUs, I have a lot of data, I have an algorithm, and out pops this model. The model is really just a collection of weights, packaged up so that when I take an input, say a new piece of data, some propagation happens through those weights. If I'm making a new prediction, it's forward propagating through the network; in this case, it's a lot of matrix multiplications. That's the math that's happening. And out comes an answer: it tells me, say, a class that's been recognized in the input data. So if I'm doing a dog and cat classifier, it would tell me whether it's a dog or a cat; that would be a binary classifier.
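The train, test, and predict flow just described can be sketched in a few lines. This is a toy illustration, not anything from the slides: the two numeric "features" per example and the labels below are made up, and a real dog-vs-cat classifier would of course learn from images with a deep network rather than a hand-rolled perceptron. But fitting weights to annotated data, checking on held-out data, and forward propagating to get an answer is the same shape.

```python
# Toy sketch of the train/test/predict flow described above.
# The features and labels are invented for illustration (1 = "dog", 0 = "cat").

def train_perceptron(data, labels, epochs=20, lr=0.1):
    """Fit a linear classifier: the 'model' is just a list of weights."""
    w = [0.0, 0.0, 0.0]  # two feature weights plus a bias term
    for _ in range(epochs):
        for (x1, x2), y in zip(data, labels):
            # Forward pass: a (tiny) matrix multiplication, then a threshold.
            score = w[0] * x1 + w[1] * x2 + w[2]
            pred = 1 if score > 0 else 0
            # Nudge the weights whenever the prediction is wrong.
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

def predict(w, x1, x2):
    """Forward propagate a new input through the learned weights."""
    return 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0

# Training set: annotated examples, the "ground truth".
train_x = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
train_y = [1, 1, 0, 0]
weights = train_perceptron(train_x, train_y)

# Test set: data the model has not seen, to check generalization.
test_x = [(0.85, 0.95), (0.15, 0.05)]
print([predict(weights, x1, x2) for x1, x2 in test_x])  # expect [1, 0]
```

The "model" that pops out here really is just the weights plus the forward-pass function, which is exactly the point made above, only scaled down from millions of parameters to three.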
And so when you back out into the different types of learning, we typically, as I mentioned, deal with supervised learning these days. That means I have annotated data. Typically this is hand annotated by people, although there are ways to apply this at scale. These pictures on the left, some are hot dogs, some are just dogs, and so on, and you can learn from those annotated pieces of data.

Then there's self-supervised learning, which is really where things are trending, because as you can imagine, annotating data is incredibly expensive: paying people to say, yes, that's a dog; yes, that's a cat. Obviously, the more complex the application, the more work it takes. In the case of, say, autonomous driving, you want really fine-grained classification of things around the vehicle, such that it knows people, it knows dogs when they're walking across the street, street signs, and other such things; that becomes very, very expensive. Self-supervised learning is a form of training that allows you to mask or partially obfuscate certain parts of the data and have your model learn in the process. In the case here, we're feeding in some text, and the model is using what it's been pre-trained on to fill in that blank. So for "the blank jumped over the moon," the model infers that it's looking for the word cow. This is a really important paradigm, because it allows us to scale these methods without all of that annotated data, which again is very expensive.

And then lastly, reinforcement learning. This is more of an agent-environment type of dynamic. I might have an environment and an agent that operates within it. I generate rewards for that agent based on some goal I've set for it, and it has an action space.
So you might tell it: in a video game scenario, get the high score on a particular game. It has a certain number of actions: it can move the controller in certain ways, it can press these buttons, and it has a reward function, so it knows how it can generate rewards. What it does is explore all of those different action sequences in order to maximize that reward. This is applied today in things like robotics, and we actually see it being applied to replace A/B testing on websites. There are some really interesting applications for RL these days.

So, clicking down a level: the AI workflow is interesting. If you've done this hands-on, you'll know what I'm talking about, and again, this is fairly abstracted, so bear with me a little if you're very experienced in the space. When we think about the AI workflow, at an abstracted level, we typically start with the problem first: let's define it clearly, let's frame the problem. What are we trying to achieve here? Second, there's a data prep aspect. This is where you might have raw data, or data you need to annotate, as we talked about. Once I have that data, I can start to prototype. I can take a small subset of it and try different approaches: I might have a decision tree, a convolutional network, or a big transformer, and I might wanna just train and see what I think will work best. From there, I might take the full corpus of data and train a larger model that will hopefully perform even better; that's the goal. Then it's really about evaluating how well we're doing on the desired task. And once we're happy with the model, we deploy it into production to serve predictions at scale.
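As a rough illustration, that six-step workflow can be compressed into a few functions. Everything here is a stand-in I've invented for the sketch (a trivial threshold "model" and toy numbers); real pipelines involve frameworks, annotation tooling, and far more machinery. But the shape of prepare, prototype/train, hold out a test set, evaluate before deploying, is the same.

```python
# A compressed, toy sketch of the workflow just described. Each step is a
# stand-in for illustration; real systems are vastly more involved.

def prepare_data(raw):
    """Data prep: clean and annotate. Here we just attach toy labels."""
    return [(x, 1 if x > 0.5 else 0) for x in raw]

def train_model(dataset):
    """'Training' a trivial threshold model: pick a cutoff that fits."""
    positives = [x for x, y in dataset if y == 1]
    threshold = min(positives)
    return lambda x: 1 if x >= threshold else 0

def evaluate(model, dataset):
    """Fraction of held-out examples the model gets right."""
    correct = sum(1 for x, y in dataset if model(x) == y)
    return correct / len(dataset)

raw = [0.1, 0.2, 0.6, 0.8, 0.9, 0.4]
data = prepare_data(raw)
train, test = data[:4], data[4:]   # hold out a test set
model = train_model(train)         # prototype, then train
accuracy = evaluate(model, test)   # evaluate before deploying
print(accuracy)
```

Only once the evaluation step clears the bar you set when framing the problem would you move on to the deploy step, which is exactly the ordering argued for above.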
So I'm gonna walk through each one of these phases really quickly. As I mentioned, probably the most important part of the AI workflow is being able to frame the problem, and it's actually pretty interesting how many times I see people come at it from the wrong direction. It's like: well, I have this model, I don't know what to do with it, but I've trained it on this huge amount of data; can it go and solve my problem? Well, typically not. It's better to start with the problem and work backwards. Once I've defined that problem well, including what the input is, what the output looks like, and what success looks like, it's a much better way to figure out how to approach it.

So once I've framed the problem in a way that I and my team can understand, we can determine what data is needed. That might mean collecting it from another party and licensing it; it could be an open data set; it could be something we need to generate. We might need annotations, which again take time and money. But once we're there, we can start to look at how to remove bias from the data, and at things like class imbalance. Maybe it's got a whole bunch of images of cats and only a few dogs; well, then it's probably gonna think everything I give it is a cat, and that's a problem. So we need to think about our data and remove the imbalances and the biases and other issues.

From there, as I mentioned, you're gonna prototype. You take a subset of that data and you might build a ton of really small models. Typically, ML engineers or data scientists will use something like a Jupyter notebook, where you draft something up really quickly just to get a proof of concept.
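A concrete way to see the class-imbalance check mentioned above: count the examples per class and derive inverse-frequency weights so the rare class isn't drowned out during training. The labels below are made up for illustration, and inverse-frequency reweighting is just one common remedy (resampling is another); the point is simply that this is a cheap check worth running before any training happens.

```python
# Quick sketch of a class-balance check during data prep.
# The labels are invented: 90 cats vs. 10 dogs, a badly imbalanced toy set.
from collections import Counter

labels = ["cat"] * 90 + ["dog"] * 10

counts = Counter(labels)
total = sum(counts.values())

# Inverse-frequency weights: rare classes get proportionally more weight,
# so a naive model can't win by always predicting "cat".
weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}

print(counts)   # Counter({'cat': 90, 'dog': 10})
print(weights)  # dog examples end up weighing 9x more than cat examples
```

If the weights come out wildly lopsided like this, that's the signal to rebalance the data or reweight the loss before trusting any prototype results.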
And this ultimately gives us something we can have higher confidence in once we get to the training phase. The reason we wanna do this prototyping before we train is that training a large model these days is quite expensive. We, for example, train on hundreds of GPUs for potentially months on end, and of course we checkpoint constantly, because those are quite expensive models to train, and if something goes wrong (and no training run I know of has ever been perfectly deterministic), we wanna be able to stop and restart and do all those things.

Then we wanna evaluate. This is one of the areas where a product manager is really important: what does success look like? Is our model, as evaluated, solving our problem? Do we have clear metrics, and are we hitting those metrics? Are we achieving the outcomes for the users we're targeting? And then, once we're actually ready, and there's a whole bunch of things to do, like having monitoring and governance procedures in place to ensure the model doesn't drift into a place where it's doing harm instead of good, we deploy at scale.

So that's a very, very high-level view of the workflow. I highly recommend getting your hands dirty with machine learning; there are a ton of notebooks out there and a ton of frameworks. When you take it all in, that's kind of the deep dive into how the sausage is made, so to speak. But what it actually enables is really exciting. At Meta, for example, visual search uses these big computer vision models. It can use object detection and instance segmentation, where I'm looking at the fine-grained detail of the image and able to classify and segment different objects within it.
So this gentleman here is wearing some rings, or he's got a jacket on, or he's got a phone. I might wanna break that image down into different components and then classify each component with my classifier.

Or things like natural language processing. Language translation is one of the most important ways for people to communicate. If you've seen our recent announcements, we talked about our translation efforts. This is key to how we bring the over 3 billion users on our platform together. You can translate, for example, in Messenger or on the Facebook blue app, and this is really how we break down those global barriers and bring people together from different countries.

Then there are areas like audio and speech. In this case, this is a Reels video, and we'll automatically caption what's happening in it. It uses essentially ASR, automatic speech recognition, and from there speech-to-text generates the text that overlays the video, essentially in real time, which is pretty incredible when you think about the technology that underpins it.

And lastly, recommender systems. This is essentially how you personalize just about anything, whether it's your feed on Facebook, which ads are shown to you, or what you see when you're flipping through videos on Reels; this generates those recommendations. We actually recently open sourced the engine behind this, which is called TorchRec. So again, we're completely open source on a lot of our technology, and it's there for everyone to build on and use in their own applications.

So, switching gears a little over into open science, again something that's near and dear to my heart.
So let's start with the definition of open science. This is the Wikipedia definition; you can't get better than that, right? Or can you? As Wikipedia defines it, open science is the movement to make scientific research and its dissemination accessible to all levels of society, amateur or professional. Open science is transparent and accessible knowledge that is shared and developed through collaborative networks. One of the cool things about AI, and this is honestly why I got into it almost a decade ago, is that it was so open and so collaborative. Things are permissively licensed, so essentially anyone can build on them. What I found really inspiring was that anyone in the world can frankly have an impact. You could be in a high school in India and work on an open source project. You could be independent, not even working for a company yet, and contribute to some of the most impactful open source projects, ones used in production at big companies, enabling startups, used in academia. It's really inspiring how open science and open source have brought this entire field forward.

So how do we think about it? We think about open sourcing our software. We think about data sets, because data sets, again, are the lifeblood of machine learning. The models themselves are expensive to train, so you want to open source as many as you can, because they can provide a baseline for others to use and build on. We transparently publish our papers, either by putting them up on arXiv or through other publications, say Science or Nature. And lastly, we do things like leaderboards and challenges. We want to provide some targets for the community to aim for, and sometimes we support these financially; sometimes there's a really great scientific breakthrough that can be achieved if people do well on these challenges.
And this is a really great way to build community. If you sum it all up, the way we think of this is really: democratize the key technologies for the benefit of people. With the things I work on today in some of these scientific areas, there's nothing we're holding back; we're open sourcing as much as we can in a way that allows others to use it and build on it, and it's really exciting.

So when I boil this down to the top three reasons open science is important: number one, transparency. I alluded to this already. Transparency is really important, especially when you're building models that can impact people's lives. Things like explainability: why the heck did that model make that prediction? Down to the very low level, down to the neuron level, why is this model making these predictions? Why is there bias in this particular model, and so on? It's important to be transparent at all stages, from the data to the models themselves. And you're starting to see a trend, which I'll talk about in a couple of slides, of publishing things like data cards and model cards and even method cards; that's a really important trend in the space.

Second, open source and open science enable progress for everyone. If you've ever tried to reproduce anything in machine learning, it is incredibly difficult. So the more you can open your work up, the more others can build on top of it. In top labs, like, say, FAIR, where I am, we would in the past hire interns, and they would come and spend months just trying to reproduce a top paper. Only then could they say: okay, we reproduced it, now I can start to think about how to build on top of it, how to improve it and build something even better. This is how research builds on research builds on research, and breakthroughs actually start to happen.
And then lastly, there are some problems where no one company, no one institution, can drive progress alone. Things like climate change. These days, more and more data sets are being opened up and more challenges are happening. It's one of those things where no single entity, not us, not Google, not Microsoft, not any one NGO, can solve these problems. So coming together in an open way, sharing data, sharing methods, sharing IP, collaborating: it's really important for the progress, frankly, of our planet. You can see on the right here: I think the industry labs are important, and I would say that, certainly, because I work for one of them. Of course academia is important too; they're pushing research in a number of different interesting areas. But what's near and dear to me is really the open source foundations. None of this would be possible without, frankly, things like arXiv, where people put their papers up, or frameworks like PyTorch, or even the Python programming language, which is really the foundation of how a lot of machine learning is built today.

Just a couple of examples of things I care about when it comes to open science. PyTorch is certainly one. After leading product on the project for over four years: we have over 2,200 contributors; partnerships with Google, Amazon, Microsoft, OpenAI, and lots of others; academia using it for research but also to teach; and a whole bunch of users using it in research, in production, or in both, in a lot of cases.

Another area, which is a little unsung, is education, AI in education. This is something we just recently launched, which I'm a core part of and really proud of, called the AI Learning Alliance. It's a collaboration with a number of universities to create an open learning platform that's free to all and really tries to bring more diversity into the field of AI.
This is master's-level content, and we have new courses we're building; you can go to the portal here, and you can see it in action on the right. You can see some of our partners: Georgia State, Georgia Tech, Alabama A&M, ASU, a number of universities we're working with, and there are more on the way. Our collective goal is really to bring more diversity and inclusion into the field of AI and grow the overall pie for everyone. It's super exciting; this is one of my favorite projects I work on.

So, switching gears again over into really the premise of this talk: how AI relates to roles. Remember the workflow we talked about previously, where you started with the problem and went all the way to production. I wanna highlight the areas where PMs are involved, and, spoiler, they're just about everywhere. PMs are really important, which is good: if you're a PM, you're involved in a lot of things. Even starting with the problem, the PM is incredibly important and usually leads the framing of it. This is typical product sense: who am I aiming at as a user? What are the user problems I'm trying to solve? What are the different possible solutions and approaches I might take? What is the North Star? These are the kinds of questions you typically ask yourself as a PM, classic product sense.

In AI, data prep is also really important, and how we source our data, our overall data strategy, becomes incredibly important. This might be a partnership; again, this might be strategically important. There might be data we source or create ourselves, and so on. And then in prototyping and training, depending on the type of PM you are and how hands-on you are, you might actually be involved as well.
But certainly evaluation, at stage five, is incredibly important for the PM to be involved in, since we defined success back in the problem statement. We wanna ensure the model we're training is actually achieving the success we want, based on how we're evaluating it, the metrics we've agreed on, and so on. And lastly, through the productionization phase, this is where the team really gets together and asks: is this something we can deploy in production? Does it have acceptable levels of bias, because you're probably never gonna reduce bias to zero? Does it meet the overall product requirements before we ship it? Is it robust? And all the other questions we ask ourselves about a production environment.

So let's get concrete for PMs. As a PM, what are the three high-level questions you're gonna wanna ask yourself? First off, when building AI-first products: is this a people-first or a technology-first problem we're starting with? I alluded to this earlier in the talk, and with AI you can actually take both approaches. Many of these innovations and breakthroughs end up being platforms that enable other types of applications that maybe we don't see right away. So yes, I can absolutely start with: I have users, I have a problem, here are possible solutions, and I'm gonna prioritize, just like typical product sense. Or, and this is what I've seen the really good PMs in the technology space do, I can see a breakthrough and start to think about how it, as a platform, could enable new applications or new experiences that maybe we haven't even thought of yet. That gets really exciting. So really, both approaches are valid.
The second question you might wanna ask yourself is: will you make or will you buy? What I'm alluding to here is that these days there are a lot of models and artifacts available kind of off the shelf, and you can take a model and prototype something within even a few days or a week, just to get an idea: is it gonna fit your application? Once you get that proof of concept done, you can then maybe train a bespoke model, gather more data, do everything as normal. But you might consider taking something off the shelf, because, frankly, making something from scratch can take on the order of months or even much longer.

And then lastly, your data strategy. This is incredibly important: data becomes a first-class citizen in everything we do as AI-first PMs. Do we have sufficient data? Do we have the volume and quality? Do we acquire it through, again, a license? Do we generate it ourselves? Do we have to hire annotators? Luckily, we have a platform for annotations; we can hire annotators, we have annotators on staff, those types of things. As a PM, you need to be thinking about this, and it stems again from your problem statement and what data you need to solve the problem you're trying to solve.

And then, as I mentioned, there are your system cards and model cards and method cards, and I'm missing the data card there. When you back out to everyone, I could talk about data scientists, about marketing, about UX research, but really, when everyone is involved, and this is your cross-functional team as you're building products, there's a handful of things you should also take into account more broadly. Number one: AI, as I mentioned, takes time to develop.
So, collecting data takes time, and annotating it takes time. If you're running at large scale and training large models, those can take weeks, they can take months; it can take a long time to even get to a proof of concept. And even then, that's really where the work begins, because you might need to optimize the model itself, and you need to ensure it's not gonna do any harm. So you might have some business logic layers on top of it to avoid outcomes that could be detrimental to your users.

Second, you're gonna wanna recognize the limits of AI. AI can do a lot, right? You can train models to do some amazing things these days, but it's not magic; it has limitations. Understanding the state of the art in these areas is really important for you as the PM, so you can say: okay, the state of the art is this, and what we're trying to achieve is that. If what you're after is really beyond the capability of the state of the art, then maybe AI isn't the right approach, at least yet. Really understanding the limitations is incredibly important for setting expectations with your team, your organization, and so on.

And then lastly, there are a lot of unintended consequences from machine learning, and it can do real harm to people. So it's really important to think through those things from a holistic product perspective and try to mitigate and reduce as much of that harm as possible. We don't ever train models, throw them over the wall, and hope for the best. There's quite a process and governance that we go through, and we try to be as thoughtful as we can, because outcomes can vary deeply depending on your geography, on who you are, and on a number of other factors.
And then when you roll everything up, some of the overall considerations we think about are: governance and accountability. Is there oversight in how we're deploying our models? Are our models treating people fairly and equally? This is where we build tools for fairness and inclusion, to actually measure fairness, say, as we're training models or in production. Are our models private and secure? Can we run, say, on device and keep data locally on the device? That's really important for privacy. Is our model operating in a way that's safe and acting as expected? That's where robustness, and tools around robustness, are important. And then, of course, things like transparency and control: are the models explainable? Do they preserve human agency, and so on?

So I want to finish up with a few slides to give you an idea of where to dive deep, learn a little more, and stay in tune with what's going on, and then I'll close with a few call-to-action items. First of all, as I mentioned, this is a community, and one of the things I absolutely love about it is that everyone is approachable. You can reach out to anyone; you can ping people on Twitter. It's really a cool community. Yann LeCun is all over social media these days and responds, and Andrew Ng is one of the most approachable and well-spoken professors I ever had when I was learning machine learning. There's just this incredible community of leaders, and these are great folks to follow, because they're the ones really blazing the trail for this space.

Also, if anyone hasn't been to paperswithcode.com, it's a fantastic site.
So if you ever want to understand the state of the art in just about any task you can think of, this is the site for you. Starting at the top with the leaderboards: if I want to understand the state of the art in language modeling, even on a particular dataset, I can see it very clearly, along with the trend lines, which makes it incredibly actionable. Over time you can see the metrics improving. I can click on a dot, get the paper, get a pointer to the dataset, get the code to train that model, and I might even be able to get the trained model itself. That's incredibly impactful from a reproducibility perspective, or even as a starting point for your research. Say I want to build a language model and I don't even know what the state of the art is: boom, now I know, and I have a great starting point. Second is datasets. I think there are almost 5,000 datasets on the site now, and this is really impactful because not only can I browse datasets by the task I'm trying to achieve, but each dataset also lists its terms and conditions, like licensing. Am I able to train on that dataset and use it for a commercial application? Maybe, maybe not, but something like this helps me understand that very quickly, and gives a summary of the dataset, like how many images it contains, using the CIFAR dataset as an example. And then lastly, methods. This is, I think, one of the coolest things about the site: if you dive deep into these algorithms, you'll find that a lot of the layers are modular. I can take a convolution layer, I can take a rectified linear unit (ReLU), which is a nonlinear function, and I can compose them.
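That modularity is easiest to see in code. Here is a deliberately simplified, pure-Python sketch of the two layers just mentioned, a 1-D convolution and a ReLU, composed the way a framework composes them. In practice you would of course use PyTorch's `nn.Conv1d` and `nn.ReLU` rather than writing these by hand, but the building-block idea is identical.

```python
def relu(xs):
    # Rectified linear unit: clamp negative values to zero, elementwise.
    return [max(0.0, x) for x in xs]

def conv1d(xs, kernel):
    # "Valid" 1-D convolution (really cross-correlation, the deep
    # learning convention): slide the kernel over the input.
    k = len(kernel)
    return [sum(xs[i + j] * kernel[j] for j in range(k))
            for i in range(len(xs) - k + 1)]

# Compose the two modular layers, exactly as a framework would.
signal = [1.0, -2.0, 3.0, -4.0, 5.0]
edge_kernel = [1.0, -1.0]            # simple difference filter
features = relu(conv1d(signal, edge_kernel))
print(features)  # -> [3.0, 0.0, 7.0, 0.0]
```

Swap the kernel, stack more layers, add learned weights, and you are on the road to a real network; that interchangeability is what the "methods" pages on the site let you explore.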
And it's really interesting to dive deep into those components, understand how they're composed, what they're doing, and how popular they are. So if I'm trying to create a bespoke model from scratch, it's really cool to understand the state of the art and then use that as a foundation, even at the lowest level. And then of course, learning. I spent a lot of time early in my machine learning career on Coursera courses, and I built multiple courses on Udacity for PyTorch when I was on that project. These days you can do just about anything on any of these platforms: there are traditional ML courses, AI in product management, RL courses, medical imaging. The sky's the limit, and really it's only limited by how much time you have. There are also degree options, which I think is really cool. For example, we work with Georgia Tech on the OMSCS program, the Online Master of Science in Computer Science. We teach a deep learning course as part of that curriculum, and of course we use PyTorch for it, but you can get a full master's degree. I've had actual colleagues at Meta do this and get the full master's, and Columbia also has a master's in machine learning. I think there are a few others now since I created these slides. So yes, these are legit degrees, which is really cool, and you can learn a lot and grow your network in the process. So, last slide, I just wanted to share a few parting call-to-action items for folks who want to get involved. Number one, if you're a product manager building an AI-first product, I think one of the coolest things you can do is provide attribution to what makes your product possible.
If you've ever seen Andrej Karpathy at Tesla give a talk about how they built, for example, Autopilot, he will actually call out a paper and its authors and really give attribution to the underlying research, not necessarily their own research, because they're building on others' research, and give attribution to those people. I think that is what makes this community really amazing. And if you're building a product and there are components you feel you can open source, consider doing that, because it will help the community more broadly. Second, consider spending time on challenges that you think can have an impact. As I mentioned, climate change is really interesting these days, and there are lots of data-science-type problems there. Go to Kaggle, or go to EvalAI, which is very popular right now. These challenges can have real impact, and this is where the power of a community can really come together. You can also win money, and you can earn those cool Kaggle qualifications, Grandmaster and so on, if you're really good. If you're a researcher: open source your code, open source your model, open source your dataset, and use the reproducibility checklist. If you search for that, you'll find it; my colleague Joelle Pineau, who splits time over at McGill, she and some folks are really passionate about reproducibility, and it's important for others to be able to build on your work in a seamless way. And then lastly, just find something you're passionate about and contribute to it, whether it's an open source project or a climate-related challenge. You don't have to be a hardcore coder to contribute to open source. You can build tutorials, you can help write better docs, you can build community and run meetups. There are all kinds of ways to contribute.
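One small, concrete piece of the reproducibility story is seeding your random number generators so others can re-run your experiment and get identical numbers. A minimal sketch with just the Python standard library follows; `run_experiment` is a made-up stand-in for a real training run, and in a real project you would also seed NumPy and PyTorch.

```python
import random

def run_experiment(seed):
    # Seed a local RNG so the "experiment" is fully deterministic.
    rng = random.Random(seed)
    # Stand-in for training: draw some "weights" and score them.
    weights = [rng.random() for _ in range(5)]
    return sum(weights)

# Same seed -> identical result on every run, which is what lets
# someone else reproduce the number you reported.
print(run_experiment(42) == run_experiment(42))  # -> True
```

Reporting the seeds you used (along with code, data, and environment) is exactly the kind of item the reproducibility checklist asks for.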
So I definitely encourage folks to get involved. This is the foundation of how this community has been built and the culture of how it operates, and it's a really passionate community. That's it. Thank you so much for attending; I really enjoyed giving this talk. And feel free to reach out to me. I've been a PM for a while now, and if I can impart any pearls of wisdom, I'd be more than glad to. Thank you again for attending, and please take care.