hosting Soumith from Meta to discuss PyTorch the project and the PyTorch Foundation. So let's kick it off with a basic introduction, and I'll start with myself. I run LF AI & Data at the Linux Foundation and, as of about a week ago, the PyTorch Foundation. I focus on supporting open source projects in the AI and data domain and enabling innovation and development in these projects. And with us is Soumith Chintala. Soumith, I'll pass the mic to you to introduce yourself.

Thanks, Ibrahim. Hello, everyone. I'm Soumith. I've been working within the PyTorch community for six-plus years, and we're pretty excited to make it part of an independent foundation. I've personally been in AI for the last 12 to 15 years. That's it, maybe we should just get started.

Okay, thank you. So two weeks ago, Meta and the Linux Foundation announced the transition of PyTorch, the open source project, to the Linux Foundation under the umbrella of the newly established PyTorch Foundation. I thought we could structure this conversation in three parts. The first is about the PyTorch project itself. The second is about transitioning the project to the Linux Foundation under the PyTorch Foundation umbrella. And the third is about what's next now that the project sits in a neutral umbrella foundation.

So let's kick off the first part of the discussion. There's a lot of interest in AI and a lot of interest in PyTorch as the largest AI framework by interest and development activity. Maybe just give us an introduction, Soumith, on the history of the PyTorch project and your involvement with it. Meta has done incredible work in terms of growing the PyTorch community, so maybe you can give us the history that led us to where we are today with the PyTorch Foundation.

Sure, sounds good. It all started back in 2011 or so with a project called Torch, a predecessor to PyTorch that was written in Lua. It was called Torch 7, and it still exists. I started using Torch 7 and eventually started maintaining it, and there were a bunch of people from the Torch community who were also AI scientists at various places, and we were responsible for developing and improving Torch. Eventually, Torch started getting a little bit outdated. Science is like this: new tools in a few years become old tools, because the ideas being expressed are different than they used to be. So we were thinking about how to get Torch into a new era. We ripped out the back end of Torch and put a new front end on it, written in Python, inspired by Chainer and Torch Autograd and various other packages at the time. We called it PyTorch and released it in 2017. It was essentially a community-driven project; as I mentioned, a bunch of us from the Torch community came together to build it.

Over the first few years, one of the great things that happened to PyTorch was that Meta invested a lot of resources and funding for people into the project itself. Meta also started noticing that the project wasn't really formally organized, and started putting PyTorch's ownership structure into the more formal, healthy shape you would want for a sustainable open source project.
As we grew PyTorch, in parallel Meta was cleaning up these things, and the plan was that once we had the house in order we would move the project to an independent foundation. That's how we ended up with PyTorch spinning off into the Linux Foundation as an independent foundation. The project continues to be driven by a large set of people across many companies and universities, plus independent contributors. We have about 2,500 contributors and 300,000-plus downloads a day just on the PyPI channel, and a lot more in aggregate. So that's roughly the summary of how it started, where we stand, and how it came about.

Thank you. So one of the very interesting aspects of the project is the community itself. I've been at the Linux Foundation for a long time, and we do support our projects in building communities, and that takes a lot of effort. It's really distinguishing that PyTorch has 2,400 to 2,500 active contributors, is a dependency for tens of thousands of projects on GitHub, and has been deployed and used by thousands of organizations. When you look back at the journey of the past six or seven years, what would be your top two or three pieces of advice on what companies can do to start a project and grow it, following the success and lessons learned from PyTorch? Basically, what helped you build such an awesome community around the project?

I think, first, there is no one-size-fits-all. Open source, if you recognize it, is not fundamentally driven by commercial incentives. People aren't saying, oh, I want to make money, so I'm going to do open source. Open source is done almost as a passion; it feels like the right thing to do because of some intrinsic motivation. So I can't give a recipe for how companies should do open source, because it's not in the incentives of a company to actually do open source unless it's strategically relevant for them. There are companies that are built around open source for security reasons or various other factors, but generally that's not true. So let me start with that. If your company is doing open source for whatever reason, because some people within the company are passionate about it or because it's important to the company, let's take that as a given. Beyond that, engaging with the open source community in general just needs a change in mindset.

If you're a company with physical offices, it is very real that you're going to be able to communicate with much higher effective bandwidth with your colleagues than with your open source contributors who are not within your company. That generally plays a big factor in creating, or not creating, a tiered world of contributions, where some people just naturally have bandwidth available person to person and get priority over many things, and other people get deprioritized. So having that remote-first mentality, being very comfortable with long-form textual communication, and understanding that a bunch of contributors are not within the same tooling and stack as the contributors inside your company, that's a very important aspect. Setting the culture early to counter this effectively will help you scale your open source efforts greatly, because contributors who are not at your company will feel like they are really part of the project and not just drive-by contributors.
The other thing is that open source generally doesn't come for free. One of the common things people do is develop something internal to their company and then just throw it over the wall. They say, oh cool, this is now open source, this is my gift to the world, out of altruism, right? The truth is that the needs of the product you're building within your company are likely to be very different from the needs of people outside who are using it. So I always say, think about an overhead of 30 to 40 percent if you really want to passionately grow the project as open source. If you have 10 people, you should probably think of it as each of them spending 30 to 40 percent of their time on open source. And I don't mean you assign four people to do open source and six people to work on your internal product. I mean every person individually allocates a significant portion of their time to understanding how the product is used outside of your company in open source, and then crafting and building towards that as well. So those are the two main things I would flag as things people can learn from.

Yeah, thank you, and I fully agree. Just as an example, when I used to be an engineering manager for open source at Samsung Research, my advice to my team members was: even if you're sitting in the same office and you want to talk to the engineer sitting next door, don't just do that; use the common open source project channels so that everyone is aware, and to build that sense of community. So I certainly understand and support the ideas you shared with us, Soumith.

In terms of the project's growth: at the same time that you were building the project at Meta as an open source project, you were also using it internally. Have you gone through times where you had different challenges in terms of scaling, in terms of technical governance, in terms of relationships with partners who were also using the project? What are some of the challenges you experienced as the open source project was becoming extremely popular, both in academia and in the corporate and enterprise world?

Yeah, there were a few challenges, but the first I'll try to cover is scaling. The project was started by about 18 people, of which two were working full-time, and everyone else was working after they had tucked their kids in bed at night; it was basically a part-time passion project. The first thing that happened was that PyTorch became more popular than Torch ever was, so we were generally overwhelmed by the number of GitHub issues and PRs and things like that. One of the things we tried to do was be heroic about it. We said, oh, we're going to work harder and better and more efficiently, we're going to get the GitHub issue count to zero, and we're going to implement all the features people ever wanted, and this is going to be amazing. The truth is that was just not possible. I was reading 400 to 500 GitHub notifications every day, and we were replying to everyone and working 14, 15, 16 hours a day. At some point we realized we were being idiots about this; it's just not possible. So we actually started prioritizing things. That's just something you have to come to terms with if you're scaling.
It doesn't matter: if your project is scaling exponentially and your team size is scaling linearly, you're just going to lose by definition. So we started prioritizing and triaging things, recognizing that we would only be able to work on some things and trying to work on the most impactful ones. And 2,500 contributors is obviously a big number, but the number of sustained and dedicated contributors is much smaller, probably closer to 400 or 500. Getting these people productively doing things that matter was a big process. Edward Yang, another PyTorch core maintainer, has written about how we operationalized and scaled PyTorch.

As for technical governance and friction, honestly, over the years PyTorch and the community of contributors and maintainers have had such high trust in each other that we did not really face this problem. We were always on the same page. Meta has the biggest team, but there's NVIDIA, Microsoft, and various people from various places, and we were always on the same page with each other, even as Meta started cleaning things up. For example, the PyTorch website just said something like "Copyright PyTorch Contributors," and eventually Meta changed it to say "Copyright Meta open source." People didn't say, oh, what the hell is going on? We were always on the same page; we knew we were all working towards the same goal, so there was no misunderstanding there. I'm fairly blessed that we were like that and didn't have partner disagreements and various things like that.

I just wanted to cover one other part. As PyTorch scaled, it genuinely became an important problem to think about the money, where you get the money for resources. PyTorch's continuous integration, just the unit tests that run on every PR, the budget for that alone is seven figures, and in some years it was even closer to eight figures. So as the lead maintainer of the project, I had to think of all kinds of creative ways of funding PyTorch, and I'm really blessed that Meta literally just supported most of these requests. But you also generally have to think, when you have a project, about keeping it balanced. You can't have a single source of funding and a single point of failure; you have to try to balance it out. So over the years we tried to encourage many other companies to come in and join: NVIDIA, Microsoft, AMD, Amazon, Hugging Face, Lightning. They've all contributed over the years and have sustained teams around PyTorch. So that's something you should also think about when you're growing your open source project.

Yeah, absolutely. And this is actually a great segue into the second part, where we talk about the PyTorch Foundation. So Meta invested a lot; they created the project and grew it into the fantastic project and big community it is today, and in the process faced different challenges. And today, or a couple of weeks ago, the project started its transition to the PyTorch Foundation under the Linux Foundation.
So, in your own words, and communicating to the people watching us today, and I see some questions as well, and to the people who will listen to this later: how would you describe the PyTorch Foundation to the listener out there who's interested in PyTorch, or maybe an existing contributor?

Sure, yeah. I think there's a lot of confusion as to what the PyTorch Foundation is and what it entails. I see a couple of questions in the Q&A as well asking what the Foundation means and whether its employees will work on internal Meta tasks. So let me try to clarify quickly. The PyTorch Foundation holds all of the assets of PyTorch, and by assets I mean things like the logo and the GitHub organization; it owns the copyright, the brand, and so on. The PyTorch Foundation at the moment does not have employees who contribute to PyTorch; there are various companies who fund teams to work on PyTorch, and that will continue. The Foundation makes sure the PyTorch brand is not in one centralized company's control, and that decisions about the brand are made jointly among all the board members.

Then there's the technical governance of PyTorch. You might think, okay, cool, this all sounds good, so can these five partners decide who will be the next maintainer or what features should be implemented? Can these companies come in and say, implement this, implement that? That's not true either. The PyTorch Foundation holds the business governance of PyTorch. Then there's the technical governance, and by technical governance I mean contributing to PyTorch, maintaining PyTorch, having write access to the PyTorch code base, figuring out what features to build next, what to prioritize. All of the technical governance is completely separate and is given to individuals. It is not given to companies based on how much they invested in the PyTorch Foundation or anything like that. So as an individual you're given maintainer status, and a lot of these individuals are at Microsoft and Meta, but that's a natural consequence of the fact that these individuals were contributing to PyTorch, were looking for full-time opportunities, and these companies offered them the opportunity to work on PyTorch full-time. Even if they leave any of these companies, they will still have write access and can continue working on PyTorch if they feel like it. I think that's an important point to make here. And it's very similar to how Linux as a project is held within the Linux Foundation, in how it's run overall and in its technical governance.

Yes, excellent. There was a lot to unpack, so let me summarize. The PyTorch Foundation is an entity that holds the project's assets, and these assets, as Soumith mentioned, include the website, the GitHub org, any social media accounts, the trademark of the project, et cetera. The Foundation also acts as a funding body for the project: when the project wants to spend money on cloud resources, on infrastructure, on events, and so on, it is the Foundation that provides the funding. And again, as Soumith mentioned, there is a clear separation of governance. There is the technical governance, which is how the project's development happens.
And that is completely separate and independent from how the Foundation operates. A member of the Foundation cannot go and say, hey, can you please accept this pull request, or anything like that. The members of the Foundation are there to provide funding and support for the project, and the technical development happens completely independently, following the project's technical governance. This is the model we apply to all of our projects and all of our umbrella foundations within the Linux Foundation. For instance, in LF AI & Data we have exactly the same model: we have the foundation governance, and every project we host has its own separate technical governance. It works extremely well; we're able to gather and raise funds for the project's needs, and it's a successful model to work with.

So, Soumith, now that we've identified what the PyTorch Foundation is and what it does, a very interesting question that I'm pretty sure people are thinking about is: what impact does transitioning the project into the foundation have on development activities? I know the answer, but I'd like you to highlight it for our audience today.

So, yeah, transitioning the project into the foundation doesn't really change developer activity at all. Everything remains roughly the same. In some sense we've done a technical debt cleanup of the organization. We had basically been doing all of this without really formalizing it, without writing it down anywhere; for example, our technical governance was never codified in the project's bylaws and so on. Now we roughly have a structure, a very clear understanding, and a lot more transparency in how we take decisions, and for every critical decision we make we will actually post publicly how we made the decision, the notes and so on. So nothing really changes directionally; we're just going to operate in a more structured and transparent way.

Yeah, thank you. I actually get that question a lot when I interact with organizations, especially when we talk about hosting a project with the foundation: how is that going to impact the workflow of our developers or their activities? My answer is: no impact. The only change you might see is that you have to be more consistent in applying open governance, versus doing it ad hoc.

There's another question I'd like to address: does it mean that foundation employees won't work on internal Meta tasks, or will those be two separate jobs for them? Today, I'm the one who is considered a foundation employee; I act as the executive director, or the manager, of the Foundation. I do not work for Meta. I work for the Linux Foundation, my paycheck comes from the Linux Foundation, and I am a resource in support of the Foundation's activities. Basically, I work to support the project. I'm responsible for bringing in funds and supporting the project's requests when it comes to marketing activities, any activities related to the trademark, any activities that require support; I'm responsible for helping the project execute and fulfill them. So to address the question: none of the Foundation's employees will work for Meta or for any other Foundation member. We work for the Linux Foundation in support of the PyTorch project.
And the last question I see here asks whether the PyTorch Foundation structure will be public. It is a nonprofit organization, it is a public organization, and all the maintainers and members of the project, the roadmap, everything is and already was public, so there's no change there. All of that information is available from the PyTorch GitHub org, and if you go to pytorch.org, which is the main entry point to the project, you can find all of that information on the website as well.

Yeah, I just want to add that the maintainer list is available in the PyTorch documentation on the page called Persons of Interest, and there is a page called Governance, or Technical Governance, that describes the structure of the technical governance as well. It's also in the PyTorch documentation.

Exactly. And one of the impressive things I found about the project is the abundance of documentation; obviously a lot of work has been put into this. So, Soumith, if you were to give advice to people who are interested in getting involved in the project: it is a massive project, with a lot of pieces and different building blocks, and it might be a little intimidating to start. What advice would you give them on how to get involved and how to get started?

Yeah, I think the first step is to use PyTorch. If you're interested in PyTorch itself, then use it and build something cool with it, build a project on top of PyTorch. Right now, for example, there are 150,000 projects that build on top of PyTorch and have registered as such on GitHub, but out of those, the really valuable ones are maybe hundreds. You can genuinely help the PyTorch community by building something valuable, interesting, and cool within the PyTorch community. That would be your first gateway into the community, where you feel like you're a good community member and other people appreciate what you're doing.

For the core PyTorch project itself, most people wouldn't need to or want to contribute, but if you are inclined to, we have a CONTRIBUTING.md page in the PyTorch repository, and that's a good thing to read. We also have issues that are marked as good first issues; if you search our issues for the tag called "good first issue," you can look at those as a way to enter the core PyTorch project more gently. We also have a lot of existing peripheral projects within the ecosystem that are always looking for contributions: there's TorchVision, TorchAudio, TorchText, Hugging Face, Lightning; they're all within the PyTorch ecosystem and always looking for contributors. So it really depends on what you want to do; they're all very open and community-driven, and you should come join in on any of them.

Thank you. I think the idea of having issues for first-time contributors is just fantastic. So beyond just using PyTorch, the next step is: now that you've used the project and have a good idea of how it works, here are some entry-point issues you can start working on if you're interested in contributing.
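As a concrete illustration, and not something from the talk itself, here is a minimal sketch of how you could list those entry-point issues programmatically with the public GitHub search API (it assumes the third-party `requests` package is installed):

```python
# List open PyTorch issues labeled "good first issue" via the GitHub search API.
import requests

resp = requests.get(
    "https://api.github.com/search/issues",
    params={"q": 'repo:pytorch/pytorch is:issue is:open label:"good first issue"'},
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
resp.raise_for_status()
for issue in resp.json().get("items", [])[:10]:
    print(f'#{issue["number"]}: {issue["title"]}')
```

The same filter works directly in the GitHub issue search box with the label:"good first issue" qualifier, which is probably the easier route for a first-time contributor.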
There's a question from Alexander Zubian, and I think it fits the progression of the talk today. It's about how the roadmap of the project is defined and how prioritizing features happens, if you're able to give us some highlights on this.

Yeah, so PyTorch is run as a product. One thing I would tell you distinctly: I talked a lot about how we're going to make decisions fairly transparently and try to outline what we're doing, but what we are not aiming to do, and these are easily confused with each other, is design by committee. We're not trying to say, oh, what features do we do next, let's have a committee of people decide that. It's very much a hierarchical product development cycle, where each maintainer has opinions on what to prioritize, those get rolled up, and the core maintainers give guidance and so on. We generally operate fairly independently here. Feature development and what gets prioritized is stuff we generally make public as we're developing it: we have a developer forum called dev-discuss.pytorch.org, and there we routinely post our thinking, our roadmap, and updates, and we're going to increase that. We've now also started a monthly maintainers meeting for which we transparently post the meeting notes and how we're thinking.

That's the process side of decision-making, but to give you a more concrete answer: PyTorch has always been run very pragmatically. We take a holistic view of all of our users, all of our ecosystem, and all the feedback we get through all these channels, from various kinds of users and from other developers within the Python data science ecosystem, and then we figure out how best to take the product forward. So we're a very pragmatic project: we're not trying to promote a particular technical direction or philosophy or idea; we respond to the feedback loop we have with a wide variety of users, and that's how we prioritize features.

Thank you. I was going to ask you about the relationship between PyTorch as a project and academia in general, because when you look at adoption, the project has been embraced by academia: it's being used in universities as a teaching tool, and there are amazing numbers in terms of publications citing PyTorch. So what are the characteristics that put PyTorch so far ahead of other AI frameworks specifically in academia? That's one, and following on from that, there are a few questions about how you see research happening in PyTorch after the move to the foundation; they're correlated.

Yeah, the academia thing is very interesting. I'll tell you why I think we became the standard in academia, and it might be comically simple: we were basically the first deep learning framework to ship a binary that you could literally just pip install and it started working, without installing any other system packages. All the other packages at the time were like, oh, you install this version of CUDA on your system, you install this version of that software on your system, and then you pip install what we have; or some of them were like, then you build from source what we have.
So it was a mix of packaging troubles, and we spent a lot of time honing the packaging story. In retrospect we spent a lot of time on that because we knew, from our previous time with Torch, what general pain points people had: people had trouble even getting going. That's the first step, and it's a simple thing; it's not very interesting to work on for someone who just wants to do feature development. And students and academics are especially susceptible to this: if you're an ML student, you generally don't fully know build systems, building from source, C++, CUDA, and all of that. So we thought hard about bringing the barrier of entry for users much lower than it ever was, and it's a combination of packaging and solid documentation.

The third factor, I believe strongly, was backward compatibility. You can actually take code from PyTorch 0.3 or 0.4 and it still works in the latest PyTorch release, and people really care about that stability. Generally, the way I can explain it is: when you're in a company, especially a large company, you don't care too much about backward compatibility because you generally work in a monorepo. If you're making a change, you can make the change for every usage of that API within the entire code base of the company, so you don't really want to be held back by years and years of backward compatibility. We went fairly counter to that at Meta and said, no, we just want to be backward compatible for as long as we can, because that's something a lot of people outside of Meta, in our open source community, care about. So I think these two or three things, which aren't even related to PyTorch the product in terms of features, made the barrier of entry and the perception of stability a lot better for the academic market you're talking about, and that, I think, is a really good reason why it became popular. There's also the ease of use and all of that, and we were good at ease of use, but I don't think we were the only package that was easy to use at that time, so I don't think that was the differentiating factor.

Actually, I listened to your podcast from a couple of weeks ago with Mark Miller, and what I found really interesting in relation to this topic is: when you were at university in New York working on Torch and then transitioning to PyTorch, maybe it was in your subconscious that, as you were working on PyTorch, you were thinking, hey, I want to lower the barrier to entry for all the students who were in a similar position to yours a few years back. So maybe the exposure to Torch and the transition to PyTorch as a graduate student helped give you the inspiration to make the project more accessible to academia.

It wasn't even like that; it was much more simplistic. I'm definitely not a visionary like that. My main incentive was to answer fewer questions on our forums. I was just tired of answering all these install-issue questions; maybe we should just package better so we don't get those questions. That was how it ran.
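To make the packaging and backward-compatibility points concrete, here is a minimal sketch, my own illustration rather than code from the talk, of the experience being described: a single install command with no separate CUDA or system packages to set up first, followed by the kind of basic tensor and autograd code that has stayed stable across PyTorch releases.

```python
# Install is a single command, no system CUDA setup required first:
#   pip install torch
import torch

# Basic tensor + autograd usage that has looked essentially the same for years.
x = torch.randn(3, 3, requires_grad=True)   # random tensor tracked by autograd
y = (x * x).sum()                           # a simple differentiable computation
y.backward()                                # populate x.grad with dy/dx = 2*x
print(x.grad)

# The CPU-only wheel works out of the box; GPU use depends on the build and hardware.
print(torch.cuda.is_available())
```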
So it's the typical lazy computer scientist. I think one of the interesting questions we have is this: when you look at the existing landscape of open source AI frameworks, at any point in time, even today, there are between 12 and 15 of them, and we track them in our open source AI and data landscape. When you look at so many different projects trying to provide solutions within that deep learning and machine learning space, how do you see that space changing over the next two, three, four years as tools become more advanced and gain more adoption? What's your view on how that space will look a few years from today?

I've given a 45-minute talk about this, so I'll try to summarize quickly. I don't think it's an obvious answer; it's a distribution of directions we could go in, because it really depends on how the AI landscape itself changes. The summary of my answer is: if we don't need as much of a general AI framework, because we've settled on a smaller set of methods, then we're going to start seeing a lot more specialized and narrower packages that implement those methods as a vertical engine. For example, scikit-learn and XGBoost today are really good at implementing one or two methods in a very vertically integrated way: you can just use that method, throw your data at it, and do a little bit of tuning, but you can't really customize the method from scratch. If neural networks become a lot more specialized and consolidated, then we're going to see more of those packages take over rather than general-purpose packages like PyTorch or TensorFlow. We could also see a completely new paradigm show up, where we say, oh, dense tensors, we don't need them anymore, because the new ML methods use sparse tensors, or don't even use tensors at all, they use some kind of trees. Then we'd have to reset our expectations of what the ideal package is, and a bunch of new packages might pop up and try to win over the market.

There's also a third part that I talked about: ML right now is collaborative on the source code but not collaborative on the model. You can't develop a model in a distributed fashion where you say, oh, I made this incremental change to this model, here's the diff, and someone looks at the diff and approves it. We don't have that for an actively developed model; we only have that for the code and structure of the model, or generally just the source code of the training. If we're able to get further along in how to actively collaborate on and co-develop models, I think we're going to open up something big; the GitHub of ML is going to be very different from GitHub for code, in my opinion. It depends on whether we make progress on understanding how to do collaborative modeling, and there's a lot of effort going into getting there, but we're not there yet. So these are all things that can happen that will change the landscape; in some of them PyTorch survives and in some of them it doesn't. I think it's TBD.

Yeah, thank you.
And I think that relates to looking at the ecosystem and seeing that a lot of organizations see a lot of value in the models, the data, and the apps built on top, and they tend to open source the frameworks, the libraries, and the tools beneath those to build communities around them and an ecosystem in support of what they build on top. So it actually correlates pretty well. We have a question that extends your answer just now, Soumith. It may take us a little onto the technical side, but not too deep. The question is: what would be the best-fit use case for PyTorch? There are a lot of frameworks and tools that can be used in multiple ways and multiple directions, but in the case of PyTorch, what would be the most ideal use case, the kind of use case PyTorch was designed and born to solve?

Yeah, so PyTorch itself you can think of, first, as a base framework for doing state-of-the-art AI research. PyTorch is best used when you need a lot of control while building out state-of-the-art neural networks and other differentiable techniques. But there's also PyTorch the ecosystem: on top of PyTorch there are various products and things that are built to make doing something specific easy. Take Hugging Face, for example; they built a series of APIs on top of PyTorch that makes it easy to do state-of-the-art natural language processing in one or two lines of code, similar to the way scikit-learn wraps established methods. So for core PyTorch, the best use case is if you're doing state-of-the-art AI and need a lot of control. Otherwise, if you're someone who's starting out and doesn't know too much about AI or PyTorch, you're better off starting with something built on top of PyTorch that's aimed at being more beginner-friendly. I would recommend Hugging Face, for example, but there are a few others, like skorch, which is a scikit-learn-style wrapper around PyTorch, or, depending on the domain, things like Kornia. It really depends on what you're looking to do; there's probably something within the PyTorch ecosystem, built on top of PyTorch, that's much more catered to someone who's starting out.
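As a small sketch of that "one or two lines of code" idea, not taken from the talk, here is what using Hugging Face's pipeline API on top of PyTorch can look like (it assumes `pip install transformers` and downloads a default pretrained model on first run):

```python
from transformers import pipeline

# Builds a PyTorch-backed sentiment classifier with a default pretrained model.
classifier = pipeline("sentiment-analysis")
print(classifier("PyTorch makes state-of-the-art NLP feel approachable."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```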
Yeah, thank you. And as part of the ongoing efforts, are there any activities or plans to grow the project into new areas, such as investing in support for new kinds of devices? This is a question coming from the Q&A as well.

Yeah, so I'm trying to find the question. Is this from Stefano Fabri?

It's from an anonymous attendee, and I think it has to do with the fact that there are silicon-focused sub-organizations in the foundation.

Yeah. So mainly, the foundation and the stakeholders are very interested in growing PyTorch and the ecosystem. The foundation was such a natural transition that I don't expect us to do anything more or less than we were already doing, but what I do expect PyTorch to continue doing is to increase its coverage of hardware backends and to introduce various state-of-the-art compiler technology. We're going to be doing a bunch of that, working very closely with the stakeholders who are best at each of those things. We want to work closely with AMD, NVIDIA, Intel, Google for TPUs, and a bunch of the other hardware startups, like Cerebras, Graphcore, Habana, SambaNova. We want to work with everyone on defining what their entry point is for creating a PyTorch backend, so that if a user comes in and says, oh, I'm going to use PyTorch and I have this accelerator, we can say, yeah, it's ready. We want to continue doing that. We also want PyTorch to have first-class, smooth integration with all the cloud provider tooling: if you launch a PyTorch distributed job on a thousand GPUs on AWS, is that smooth, is that well oiled? We work closely with AWS, GCP, and Azure on aspects like these. All of these things will continue, and we have many other dimensions we're working on as well; those are just the cloud providers and hardware vendors. We also expect to do a lot of new product development to catch up with the field where we're not doing as well. So it's a very broad answer, and I didn't really give an opinion, but that's roughly how we're thinking.

Yeah, and I think pointing people to the GitHub org and to the roadmap information on GitHub gives them a starting point to know what's happening and what's coming as new features in the project. We have a question from Suraf about the availability of training for PyTorch, and I'll take a stab at it. About two weeks ago, when we announced the transition of the project to the PyTorch Foundation, the Linux Foundation and the PyTorch Foundation announced a new course that is available for free from the Linux Foundation's catalog of courses: PyTorch for AI Decision Makers. That's a free course, and we are actually in the process of creating new content and new courses to be made available via our LFX platform at the Linux Foundation for anyone interested in learning about PyTorch. So this is definitely a priority, to make that knowledge and training available to as many people as possible. And of course, Soumith, I think this is a priority for you too and for all the participating companies in the PyTorch Foundation.

I'll take a quick stab at answering some of the other questions. Will there be better support for non-NVIDIA hardware via OpenCL? OpenCL itself, probably no. There's a GitHub issue; you can search the PyTorch issues for OpenCL and it will pop up. But OpenCL, in my fairly opinionated answer, is something that all the hardware companies embraced, but embraced poorly. None of them support OpenCL well. If you actually go look at the code path, just the driver latency, how long it takes to launch an OpenCL kernel, something as fundamental as that is just not very good, and none of the companies are committed to fixing that over time. And we know this; we talk to the companies, and we've gone down the OpenCL path pretty seriously at some point. So not OpenCL. The reason these companies aren't supporting OpenCL as much is that they all have their own software stacks that they're optimizing; it's a key differentiator for them. They're trying to create value via their own closed-source, or in some cases proprietary, software stacks, and some of them are open source but still proprietary.
But we're trying to integrate with those, because that's what the hardware vendors want to work on with us; that's a very pragmatic thing. The reason I bring this up is that there are a lot of conspiracy theories in the GitHub issues about why we don't work with OpenCL, claiming we want to favor hardware vendor X or Y. That's not true. We tried very hard to get OpenCL support to work, and the hardware vendors privately are just not really enthusiastic about fixing all the issues that are there; publicly, I'm sure they'll put a brave face on it.

Will there be increased coordination with other Linux Foundation projects such as ONNX or CHAOSS? Well, we already work pretty closely with ONNX; we have maintainers working on the PyTorch-ONNX integration and so on. I don't think it's going to be a function of whether these projects are Linux Foundation projects or not; we're not going to make connections just because they're Linux Foundation projects, but we will work with whatever makes sense.

Why is PyTorch the most loved deep learning framework? I actually wouldn't call it the most loved deep learning framework; I think it's just one of the more popular frameworks. I don't want to categorically say it's the most anything; the numbers don't always support it. It's used a lot in certain segments, such as research, and not as much in other segments. So I'm going to answer by saying I don't know if I even agree with the premise. But I did try to cover some of the reasons why PyTorch is loved: the lower barrier of entry around packaging, good documentation, backward compatibility, and the ease of use of the API. We call it eager mode: what you run is what you get, and it's easy to reason about. Those are the things I think of.

Amel had a question, wondering whether it would make more sense for a sponsor or some other company to contribute resources directly to the foundation. The reality, Amel, and I'll try to answer as candidly as possible, is that when a company thinks about making a donation versus when a company thinks about funding something internally for its own needs, you get very different numbers. A company might be open to donating $50,000 a year, but not $5 million a year. But if the same company has to build a team internally, or fund something internally, say having contractors do some of this work, say from Quansight, the budgets are way bigger. So I don't think the donation model works as well. Of course, over time, one of the things we're trying to work on with the foundation is figuring out whether the foundation can be more sustainably resourced, whether the resources it gets can over time support things like grants and funding. We're very far away from that; we want to get there, but it's not easy. The way you pose it is currently not really possible, just from the reality of how companies work, especially publicly traded companies. Hopefully I answered that as candidly as I can, because this gets lost a lot when people think about it without the context. If you go to your CTO or your finance team and ask for $5 million to donate to NumPy, they're just going to laugh you out of the room.
It doesn't matter how big the company is; that's just the reality of the situation.

I think we're about at time, so I'll pick a couple of remaining questions. The license of PyTorch: that's already public, you can check it out, so I'm not going to answer that one. Full-time contributors: as I estimated already, we have about 400 to 500 across the world, out of which around 200 or so are at Meta, and there are teams ranging from a couple of people to a few dozen at various other companies that add up to the rest; so maybe closer to 400. I don't expect the forecast to be much different; I think we'll roughly grow linearly, so maybe 10 to 13 percent growth over time. It's important to qualify that this is about full-time contributors to the core PyTorch project, where I don't expect more than linear growth; the PyTorch ecosystem, on the other hand, is expanding exponentially and will continue to. Ecosystem contributors, building their own projects and building value in the community, are just as important, and that is expanding like crazy. I just want to make that differentiation.

Yeah, so I think we should end here because we're out of time. Thank you very much to everyone who attended this live, and thank you very much, Soumith, and the Meta, PyTorch, and Linux Foundation teams for putting this together. I believe this is recorded and will be posted online at some point. And I'll pass it back to the event organizer at the Linux Foundation. Thank you.

Thank you both so much for your time today, and thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars. Have a wonderful day.