Good morning, everyone, and welcome to this LFAI mini-summit. Thank you for joining us today. We have a few presentations with a number of speakers touching on several areas related to LFAI. This presentation was scheduled to be given by my colleague, Jacqueline Serafin, who is unfortunately unable to attend. So I am Ibrahim Haddad, the Executive Director of LFAI, and I will be walking you through the LFAI presentation. This will take about 15 to 20 minutes, and I will cover some basics on LFAI, our progress so far, how to get involved, and various updates across the work we have been doing.

As you may be aware, LFAI was established in 2018, so we are almost two and a half years old as an umbrella foundation under the Linux Foundation. The Linux Foundation is our parent foundation, and under it there are a number of umbrella foundations: there is LFAI, which focuses on open source AI; there is CNCF, which focuses on open source cloud native; and there are Hyperledger, LF Edge, LF Networking, and so on. We all share the same goal of advancing open source technical projects and innovation, each in our specific domain. From that perspective, in LFAI we focus on increasing collaboration and advancing innovation and development in open source projects in the AI space, covering machine learning, deep learning, NLP, notebooks, data, models, and many other subcategories, and I will touch on this later in a following slide.

This is one of my favorite slides, one I use in almost all of my presentations about LFAI. It illustrates the fast-growing ecosystem of open source AI. This is the LFAI landscape, which covers the top-tier projects organized into different categories, and it is actually interactive: if you visit https://landscape.lfai.foundation, you can click on each of these projects and get a lot of information about them. The reason I cover this landscape in all of my presentations is that it gives us a lot of indications about the different categories in the open source AI space. You can see we have nine or ten major categories, subcategories inside each of them, and then the projects that are considered top tier within each subcategory. It is a very thriving ecosystem: we have around 250 projects in the landscape right now, with 30,000-plus developers actively contributing to these projects and hundreds of millions of lines of code. So the open source AI space is extremely healthy and growing, and on a weekly basis we are adding projects to the landscape. Please visit it and have a look.

Having looked at the landscape, one of the challenges we noticed in the open source AI ecosystem in general relates to fragmentation. You can see a lot of projects with a lot of ongoing efforts, somewhat replicating each other, and there is a lack of integration across many of these projects. A second layer of challenges relates to governance. Many of these projects may not have a governance model, or have one that is primarily focused on the organization that created or backs the project.
Some of the challenges relate to projects that were originally proprietary internal efforts, created for a specific need in a certain product and then open sourced to benefit from the network effect that the open source ecosystem offers. Other challenges appear as a project grows: it becomes really hard for a number of companies to figure out who is going to manage the trademark, who is going to own the website, who is going to provide the IT services, who is going to support the marketing efforts. Putting all of these challenges together poses a kind of glass ceiling for project investment. If you are a company looking to invest in and adopt an open source project, you would ideally prefer a project with open governance, where all the IP assets are in a safe haven, and a project that welcomes people and allows them to grow with it, versus participating in a project with a lot of question marks over whether your investment of resources will lead to a seat at the table for technical decisions. All of these challenges are what led to the creation of LFAI, with the goal of harmonizing that space and providing a number of supporting elements for the general open source AI ecosystem.

So I have these slides on the motivation for harmonization. Our efforts go in different directions. One is to provide integration and interoperability across the different projects, and we have a presentation coming up later during this mini-summit that will cover this in more detail. As a membership organization, we of course want to provide greater efficiencies for our members: we support a number of projects used by our members, resulting in a lot of efficiencies across marketing, legal support, IT, infrastructure, and many other areas. At the same time, we aim to unify guidance for our end users in terms of interoperability, integration efforts, standards, and what the future holds for AI, data, and analytics, as these technologies are being adopted by virtually every single industry out there. And from a hosting perspective, hosting projects under a single umbrella allows much tighter collaboration across these projects and communities, as they use the same services and as we bring them together across various committees to collaborate and provide integration points and APIs toward each other.

From a structural perspective, LFAI follows the template structure of any other umbrella foundation. On the left-hand side of the slide you have the governing body, called the governing board, which is basically the funding governance: this is where companies become members of LFAI and provide funding for the organization. Under that governing board we have a number of committees: the outreach committee, focused on communication, PR, marketing, event support, and so on; the legal committee; a site committee; and a budget committee. The technical coordination happens in, and is driven by, the Technical Advisory Council, the TAC, and later today we have a presentation from Jim Spohrer, who is the chair of the TAC. The TAC, in my opinion, is extremely important in any foundation effort, and LFAI is an example of that.
The TAC is where all the core technical efforts are driven. Under it we have the ML Workflow and Interoperability effort and the Trusted AI effort, and both will be covered in presentations coming up right after this one. The TAC, the Technical Advisory Council, is the council that represents our members and is responsible for coordination across all the projects; for inviting, accepting, or voting in projects to become incubated projects in the foundation; and for promoting a project from incubation to graduation. It is an extremely important body that drives LFAI forward.

On the right-hand side we have the hosted projects. Today we have 16 hosted projects. Three of them, at the top level, are graduated projects — ONNX, Angel, and Acumos — and we have a number of projects in incubation. Not all of them are on the slide, because some have not been announced yet; if you follow our LFAI blog, available from lfai.foundation/blog, you will find updates and announcements about incoming projects. So this is the basic structure, and there is a complete separation between the funding governance — the governance of the foundation — and the governance of the projects. Each of the projects on the slide has its own governance and decides how to operate and manage itself, completely separately from the other projects and, of course, from the management and governance of the foundation itself. Projects are completely independent from that perspective.

In terms of details on the governance, the charter is available online; this is just a pointer slide. You can visit lfai.foundation, download the charter, and review it at your own pace. In terms of dedicated staff, Jacqueline Serafin is the lead project manager, I am the executive director, and we have a number of staff who support us with creative work, IT, legal support, and many other services in support of the foundation and all the efforts we undertake.

As I mentioned, we have a number of efforts ongoing; these are the major areas. One is providing a neutral environment for our projects: we are vendor neutral, we are not-for-profit, and we provide open and fair governance, not just for the foundation but also for the projects. In terms of harmonization and interoperability, we have an effort to ensure interoperability and create integration points across the projects we host, and also between our hosted projects and other projects used by our members. We have a major effort on trusted AI; I will not say much on this topic because we have a dedicated presentation on it. We recently welcomed three new projects from IBM, in the trusted and responsible AI space, to be hosted under LFAI, and we will have a speaker, Animesh Singh, who will cover this area in detail during this mini-summit. We have an effort focused on data, models, and marketplaces — I combined these three under one bucket just to make the slide easier. And of course we provide funding, and a number of efforts in marketing and awareness in general, to support the projects and the different events.
Unfortunately this year we moved to a virtual model, but in 2018 and 2019 we were holding events and summits face to face, and of course we provide marketing services, project management services, event services, and all of that. Our goal is really to support our incubation projects, drive them to graduation, have them widely adopted, and help them grow their user base and contributor base.

You can see the timeline of the foundation on this slide. It is a short timeline, a little less than two and a half years, and almost every single month there are major events happening. Even for June, right now, you can see a couple of entries, but we actually welcomed three new projects and held another virtual event; unfortunately there was not enough time to update the slide for this event. This slide is also available from LFAI if you would like to download it and use it in your own presentations.

In terms of membership, we started with 10 members, the core members that launched LFAI. Today we are 24 or 25 members, and we have three tiers of membership: the premier tier, which is a membership level that provides a board seat; the general level; and what we call the associate member level, which is a free membership for universities and not-for-profit organizations. So if you work at a university on R&D focused on open source AI, if you work at a government lab, or if you work at a not-for-profit R&D organization, you are more than welcome to join us as an associate member and get the same benefits as the general membership with zero fees.

I mentioned the projects briefly earlier, so this slide might look redundant. We have two levels of projects: graduated projects and incubation projects. Most projects come into the foundation as incubation projects. The process is very simple: the project presents to the Technical Advisory Council, and that committee votes the project into incubation. Then we have milestones, we onboard the project to integrate it with the services of the foundation, and we support it. As a project starts meeting our graduation criteria, which are published on our website, it goes back to the TAC, presents and validates that it has met the criteria, and then it graduates. At that point we make an announcement, and the project becomes eligible for additional funding, pending the board.
A lot of the details are available on the website, lfai.foundation. The TAC, as I mentioned, is an open committee; anyone can join and participate in it. You can attend all the bi-weekly calls — we hold conference calls similar to this one every two weeks — and they are open to the public. You are more than welcome to attend: there are really extremely interesting presentations from different companies and speakers covering different areas of open source AI. These calls are typically booked four to six weeks in advance, so if you are interested in presenting to our community about an open source project in the open source AI space, or if you would like to propose a project for incubation, keep in mind the backlog we work with. The link is available for additional details, and please feel free to connect with me directly to discuss further.

On additional resources, I will skip this slide in the interest of time. We have a lot of information available on lfai.foundation about the services we offer our hosted projects, how we help them, and what the project life cycle looks like when a project is hosted in the foundation.

One very interesting slide I would like to show you is who is hosting projects with us. As I mentioned, we have 16 hosted projects in LFAI today, and we are only about two and a half years old. We have companies from Amazon to Microsoft, IBM, and Samsung, and we have giants from China who are focused on AI — Baidu, Tencent, ZTE, and others, and Zilliz. So we really have a great set of companies who look at LFAI as an engine to help support and grow their projects, and to provide projects with an open governance that encourages other people to participate and continue in the project, with the aim of growing a community and an ecosystem.
Building ecosystems is extremely challenging; a single company cannot do it alone. That is why companies come to the Linux Foundation — LFAI in this specific case — and work with us to provide different pieces of this ecosystem in different categories, so that we can build and provide end-to-end solutions based on open source projects, all of them foundation-based projects with open governance, with assets owned by the foundation rather than potentially being used in the future against some of the users. Further to that, we are not just working and collaborating with commercial companies: we have, I believe, six or seven non-profit organizations who are members of the foundation, as well as universities, and the projects we host have a large number of contributors from different universities. On this slide you can see the number and caliber of the universities participating in our projects. So if you are at a university — whether you are a professor, a graduate student, or a student — and looking to get involved with LFAI, we have a whole membership category specifically designated for such participants, and we certainly welcome your contributions to the foundation, to the various committees, and to the technical projects we host.

Coming to an end, there are a number of channels through which you can connect with LFAI. We are active on social media via Twitter and LinkedIn. Many of our mailing lists are open to the public — you do not have to be a member of LFAI to participate — so you can participate in the Technical Advisory Council and all of its meetings, look up our events calendar and mailing lists, and join the various mailing lists you see on the slide. We also have a blog where you can follow our news: new projects coming in, projects being promoted to graduation, and so on. On the last slide you can see some pointers on how to reach us via email and visit us on GitHub. So, almost exactly on time at 20 minutes, I thank you for participating in this virtual LFAI mini-summit, and I would like to pass this on to the next speaker, Jim Spohrer, who will be presenting on the Technical Advisory Council of LFAI. Thank you very much.

Great, thank you, Ibrahim, and thank you everyone for joining. It is really a pleasure to present to you today and give you an update on the Linux Foundation AI Technical Advisory Council. I am Jim Spohrer from IBM. This is an elected position, and I was recently elected to be the chairperson; Ofer Hermoni previously had been the Technical Advisory Council chair, and it is a great pleasure to be in the chair role. As Ibrahim said, the TAC calls happen bi-weekly — every other week we have a TAC call — and they are open to anyone with an interest in open source data, AI, and analytics. I would just like to mention that in my day job I am the director of the Cognitive OpenTech group at IBM, part of our developer ecosystem community. I am based in California at the IBM Almaden Research Center, and we also have our Center for Open-Source Data and AI based at Watson West in San Francisco. I will make sure this portion of the slides is up on my SlideShare; please do feel free to reach out and contact me with any questions, and I will be happy to follow up. There is a lot of interest in data and AI these days in enterprises, as Ibrahim said; in fact, I cannot think of a single enterprise that is not interested.
All of the advancements taking place make it a great time to get involved, and I think joining Linux Foundation AI TAC calls is really a fantastic way to start building your network and getting engaged in this community. This slide is not only Ibrahim's favorite slide that Linux Foundation AI has created, it is also my favorite, and I have to say it is also very popular with a number of our IBM executives and fellows. I just want to say everyone is welcome on these TAC calls, and I imagine if you take a look at this Linux Foundation AI landscape there is probably some project in there that you or your organization is familiar with, so there are a lot of possible touch points. What we try to do on the TAC calls is invite people who are engaged in specific projects to present about their projects; people with specific questions or use cases — all people — are invited to these TAC calls, and I urge you and your colleagues to take a look. We of course all agree to a code of conduct, and I have links — again, I will share these slides on my SlideShare — that go out to our code of conduct and to this landscape.

So the next question is: who is welcome to present at these bi-weekly TAC calls? Certainly people with an interest in the LFAI landscape and the LFAI community, but really we are looking for presentations about anything related to open source data and AI projects, and the landscape is huge, over 250 projects. Some examples of the TAC calls we have had over the years: April 20th, 2018 was the founding members discussing all the processes and documents — there are multiple hyperlinks out there now for all of these. A huge amount of work went into Linux Foundation AI, making sure it had all the appropriate processes for onboarding projects and all the documentation, everything from the code of conduct to procedures for helping projects move from the incubation level to the graduated level; part of that graduation process is building a community of others around the project, and I think the Linux Foundation in general does a fantastic job and Linux Foundation AI is doing a great job as well. December 6th, 2018, Uber representatives presented Horovod, and Horovod is one of the incubating projects in Linux Foundation AI. January 3rd, 2019: Acumos and Angel project updates — these are graduated projects. On April 25th, 2019, about a year ago, the ML Workflow committee was established, which is one of the major working groups within LFAI. A few months later in 2019 the Trusted AI committee was established, and Animesh Singh, my colleague from IBM who is one of the co-chairs of the Trusted AI committee, will be presenting next. On April 23rd, 2020, Red Hat presented and gave us an introduction to Open Data Hub, and as you can see there are a lot of other talks. On June 18th, 2020, the Montreal AI Ethics Institute presented about their organization; that presentation happened relatively recently, and there is a whole array of services — webinars, courses, all kinds of activities — that the Montreal AI Ethics Institute offers, which our LFAI members are now also starting to participate in. So as you can see, this is a network of networks of organizations and projects, all around this theme of open source data and AI.

Next I would just like to say a little bit about who the current voting members of the TAC, the Technical Advisory Council, are. I have listed the organizations and the names here, and again I will post this to my SlideShare.
There are hyperlinks here that you may want to click and take a look at. The LFAI charter describes the Technical Advisory Council, whose purpose is to facilitate communication and collaboration among the technical projects and to build the community. Representatives of all of the premier members and of the graduated projects get a vote, and this is an important point I would like to mention: the TAC calls are not only for presentations and growing the community, there is also work that gets done on a number of these calls — votes on which projects to admit to Linux Foundation AI, typically in incubation mode as Ibrahim said, votes on when a project is ready to graduate, and other activities that require voting. All of the premier members are able to assign a voting representative, as are all of the graduated projects, and you can see the names here.

The benefits of having a project in Linux Foundation AI — Ibrahim also went over this, but I think it is important enough to emphasize and repeat — are really this multi-vendor open governance, with assets owned by a nonprofit foundation. There are a lot of open source AI and data projects out there that are primarily controlled by a single vendor, and that does introduce risk if other organizations want to build a product on top of them: they are still single-vendor controlled. When a project comes into Linux Foundation AI and is under true multi-vendor open governance, that takes out a lot of the risk — the competitive concerns and risks that an organization might have. So again, we welcome you to these calls, we welcome your organization to become a member of Linux Foundation AI, and if you are a developer working on a project, we would welcome learning more about it and possibly having it join Linux Foundation AI to help build the community around it.

Moving along, this is the call-to-action slide I have prepared, and I will go through each item briefly. I talk to a lot of people every day about open source data and AI, and as Ibrahim mentioned, I typically start with the Linux Foundation AI landscape: I find out which category of open source data and AI they are interested in, then we drill into the specific projects in the landscape and find out what their organization, or they as an independent developer, are doing. As Ibrahim also mentioned, this landscape is growing; it is not static, and new projects are being added. With over 250 projects, I think it is an enormous effort to keep the top projects included in the landscape. If you go to the landscape, each project has a card where you can get additional information, so I really urge you — if you are a developer, if you are an enterprise developer — and you have not seen the Linux Foundation AI landscape, please go out to it, study it, and get back to us with suggestions for additional projects if you see something we are missing. It is pretty comprehensive, I have to say. Recently, inside IBM, I showed it to one of our IBM fellows who is working on IBM's AI strategy, and he was just blown away by it; he said this is an awesome, incredible service that Linux Foundation AI has provided to all enterprises, to be able to see this comprehensive list of open source projects. The next thing I would ask you to do is to study the Linux Foundation AI technical projects, use these projects, and contribute to them. Again, these projects are under multi-vendor open governance.
They are welcoming; they have an established code of conduct; and whether people have a lot of experience or not, there are ways we can bring them into these communities and allow them to make contributions. So that would be the second call to action. The third call to action: since we have had these TAC meetings for a couple of years now, you might want to go back and review the recordings of previous TAC meetings to get a sense of what happens in them — maybe look at a meeting where a project was voted into incubation or graduation, or where a new project was proposed, or a new member presented about their organization. Go in, look at these recordings, and after that you can suggest future topics for us to consider. Next, join the bi-weekly TAC calls, Thursdays at 9 a.m. Eastern time. Actually, today we had cancelled the TAC call because of this mini-summit, but because it was on my calendar and somebody might not have gotten the news, I decided to go ahead and start it up. So I started the Zoom, six other people joined, and they said, yes, we knew it was cancelled, but we were just double-checking. We had a fantastic conversation about Acumos; one of the general members just launched a new AI project and we had a conversation about that; and a person from Accenture in India who joined gave us some pointers on some things they are doing. So again, it is a very, very welcoming community; we have a lot of people with a great interest in open source data and AI. Please consider joining these bi-weekly TAC calls, Thursdays at 9 a.m., and introduce yourself in the chat. We also have an LFAI Slack workspace — please feel free to join that. The final call to action: request that your organization join LFAI. If your organization, like many, is expanding what it is doing in the AI and data space, it is a great thing for your strategy, product development, and research teams to consider becoming part of LFAI. And just briefly, the other thing we do on these TAC calls is the legal notices: it is a very, very well run non-profit — in my mind the Linux Foundation is the benchmark organization for proper behavior, with all of the antitrust policies in place — and your enterprise legal organization would find Linux Foundation AI best in class in terms of open governance of projects. So please, again, consider becoming involved. And now I would like to hand off to my colleague Animesh Singh to present about the Trusted AI committee. Animesh, over to you.

Thanks, Jim. I think both Ibrahim and Jim covered very nicely the landscape of the overall LFAI and the role of the TAC in it. One of the things you probably saw on that LFAI landscape is that the projects are grouped into certain areas — it was probably hard to see on a screen — and one of those areas is Trusted AI. This is something all the enterprises and organizations now implementing AI are grappling with. We are in a phase with AI where more and more models are being deployed in production: enterprises now have the skill set, the technologies, and the infrastructure to move more models into production. And as these models move into production, they are also making critical decisions — whether you get admitted to a university, or whether your resume actually gets to the hiring manager.
In some cases they are decisions about what your sentence for an offense should be. They play a major role in life-altering decisions, and because so much responsibility is being entrusted to AI, the other side of the coin is that AI itself should be responsible: if it is making critical decisions, trust and transparency become a critical theme. A lot of this is coming from within organizations who definitely want to do it the right way and want to be on the right side of history, but a lot of it is also being enforced by regulations within each of these industries, where auditors and regulators are now looking at models and trying to understand them — their individual predictions as well as their predictions over a period of time — to make sure what they predict can be trusted and is being done responsibly and ethically.

And when we look at the current time we are in — a global pandemic, and a lot of protests here around inequality — one quick Google search will show you a lot of what is happening right now, and in these times having responsible, trusted, and ethical AI becomes all the more important. Think about hospitals, for example, running at capacity: are patients getting admitted, and are they being treated in a fair manner? A lot of unfortunate job losses are happening, and a lot of people are in the job market and will be reapplying; we have to ensure that resumes are being accepted and sorted by these AI systems in an unbiased and fair manner. A lot of these use cases are coming to the forefront, and we have to ensure that as we go through this we do it the right way and end up on the right side of history.

On that note, just last month our CEO at IBM, Arvind Krishna, sent a letter to Congress opting out of any facial recognition technology business that IBM was involved in. I think that set a good precedent; pretty much after that, a lot of other major companies made their own announcements along similar lines. But beyond the fact that IBM is opting out of that particular business, there was much more detail in the letter, where Arvind invoked a lot of IBM's history and how we have been making conscious efforts to build products that can be trusted and are responsible, ensuring throughout our legacy that what we build can be ethical and trusted. If you follow this landscape, one of the things IBM is definitely known for is security, and we ensure that that is one of the themes in a lot of the other work we do as well.

Another important factor to consider is that the impact is not only immediate, tactical, or limited to the current context. We all have biases — that is part of human nature, and as a society we are all improving — but if these human biases get baked into the AI layers of today, the bottom layers, you are probably looking at the next-generation AI systems of the coming decade being built on those base layers, with the bias baked in, making decisions for the next generation.
And those decisions may not be made in an ethical way. As I mentioned, beyond the tactical impact, the larger impact is that it can set society as a whole back generations. There are a lot of voices coming from the community, and if we do not do this ethically and inclusively, a lot of the gains we have made, for example in civil rights and gender equity, might be wiped out because AI systems are not making predictions or decisions in the right way.

One of the things we at IBM did early on — this work picked up around a couple of years ago, but even before trusted and responsible AI became a major topic, around three years ago — was that IBM Research started investing a lot of effort in creating technologies, publishing research papers, and filing patents on how to ensure the AI systems we build are trusted and responsible. As part of that effort, four principles were identified. We want AI systems to be robust, because — carrying forward the legacy of security we have — a primary theme was that thou shall not be able to fool AI systems. They should not be vulnerable to adversarial attacks: models are getting deployed, a lot of inputs are coming in, and those inputs can be modified in ways imperceptible to a human yet fool AI systems into giving vastly different predictions than they are supposed to. We have seen a lot of cases in the robotics industry and in the financial industry where adversarially generated inputs sent to AI models force the models to make predictions vastly different from what they were supposed to do. So one of the toolkits we launched — with the notion of "can anybody tamper with it?" — is what we call Adversarial Robustness 360, the Adversarial Robustness Toolbox. It focuses on this particular part of the landscape: it allows you to analyze your existing models, both deep learning and machine learning models, across TensorFlow, PyTorch, scikit-learn, and the other frameworks you use, to check whether they are vulnerable to adversarial attacks; and if it finds that they are, it gives you algorithms to defend against those attacks, which you can implement in your AI models. A pretty strong tool on that side. We also got funding from DARPA, the US defense research organization, to co-create this with them as we move forward, so they are investing quite a bit in taking some of the technologies built as part of this project into defense research.
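As a rough illustration of that workflow — a minimal sketch only, using a toy scikit-learn model, and noting that module and parameter names vary across Adversarial Robustness Toolbox releases — the idea is to wrap an existing model, generate perturbed inputs, and compare accuracy before and after:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Toy data and model standing in for a real production model.
x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(x_train, y_train)

# Wrap the trained model so ART can attack and evaluate it.
classifier = SklearnClassifier(model=model)

# Craft adversarial examples with a small perturbation budget (eps).
attack = FastGradientMethod(estimator=classifier, eps=0.2)
x_adv = attack.generate(x=x_test)

# Compare accuracy on clean inputs versus adversarially perturbed inputs.
clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y_test)
print(f"clean accuracy: {clean_acc:.3f}  adversarial accuracy: {adv_acc:.3f}")
```

A large drop between the two numbers is the signal that the model needs one of the toolbox's defenses, such as adversarial training or input preprocessing.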
The second tool — and the topic is very, very topical — is around fairness. Are your models biased? Are your data sets biased? If they are, how can we measure bias, how can we detect it, and once detected, how can we mitigate it? You want to look at this from the whole lifecycle perspective, starting from data processing. You want to be able to point to your data set and ask, even at that stage: is my data set itself biased? In the distribution of the different features and samples, is a particular feature over- or under-represented? A lot of real-world data sets — take a criminal justice system data set, for example — will probably be very biased, because much of the historical data there skews against a particular race, and models that learn from that data will show more inclination to be biased in the same direction. Those are the kinds of things this tool allows you to detect: not only in your data set, but also in what we call in-processing, in the algorithms for the models, and in what we call post-processing, which is once your models are deployed and giving predictions — you should be able to use this tool to collect those predictions and look at them, per transaction or over a period of time, to check whether the model is giving biased predictions. This is what we call AI Fairness 360; the GitHub links and the demo websites are listed on the slide.

The one that is easiest to understand, and probably the most widely used at this point, is explainability. A lot of frameworks like TensorFlow are coming up with their own explainability modules as well; what this tool does is give you explainability from a much wider perspective. One thing you will notice is "360" in all these names, and that is there for a reason: we are not looking only at, for example, "your model gave a prediction — can you explain that prediction?" That is definitely one class of explanations which is very important — you denied my loan, you denied my admission, explain why — but what is also important is generating explanations on your data sets and samples, and generating explanations over the whole life cycle of your model's predictions. A lot of the models you get are black-box models: they have deep learning neural network code that is not easily decipherable, so how can explanations be generated for those black-box models? For example, there are algorithms where a surrogate model looks at the predictions transaction by transaction, learning and creating another model whose job is to generate explanations for your original model. A lot of these tools and algorithms are in AI Explainability 360. There is also an integration with LIME and SHAP, which are very popular open source explainability toolkits; both are integrated and exposed as part of AI Explainability 360, alongside quite a few algorithms that IBM Research has built into it, looking at the whole landscape from data all the way to your deployed models.

Last but not least, there is another project around which we have published quite a few research papers and a website — we do not have the code yet — and that is around lineage and accountability: the notion of creating fact sheets. When you deploy your model, there is a fact sheet that traces back, in considerable detail, what ingredients were used to make this model: what was the data set, how was it changed, how were the features and samples manipulated (if at all), what version of a framework you used for training, what hyperparameters you used, and when you tested, what the testing data distribution was. You can trace back the whole lineage, and that is where fact sheets play a role.
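To make the fairness piece above concrete, here is a minimal sketch of the kind of check AI Fairness 360 supports — a dataset-level bias metric followed by a pre-processing mitigation step; the tiny table and its column names are made up purely for illustration:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# A tiny made-up table: 'sex' is the protected attribute, 'hired' the label.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 0, 1, 1, 1, 1],
    "score": [0.2, 0.6, 0.8, 0.4, 0.7, 0.9, 0.5, 0.3],
    "hired": [0, 0, 1, 0, 1, 1, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Dataset-level bias metrics before any mitigation.
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())

# Pre-processing mitigation: reweigh samples to balance the groups.
rw = Reweighing(privileged_groups=privileged, unprivileged_groups=unprivileged)
dataset_transf = rw.fit_transform(dataset)
metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("disparate impact after reweighing:", metric_transf.disparate_impact())
```

The same library exposes in-processing and post-processing mitigations as well, which is what is meant above by covering the whole lifecycle rather than only the training data.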
Now, one of the things we did in the context of LFAI was joining hands and forming this Trusted AI committee, because we definitely realized — and the Linux Foundation also realizes, and IBM has been very keen on ensuring — that this is an effort LFAI should launch from the ground up, and that beyond the code there are many things — policies, regulations, social science, and so on — that play a role in creating the next generation of AI systems that are unbiased, ethical, responsible, and transparent. That cannot be handled by code alone. You definitely need code, and a lot of the on-the-ground practical work will happen with code — that is where the open source projects play a role — but the second aspect is looking at the general industry use cases: what are the use cases, for example, in the financial industry, the healthcare industry, or the telco industry, and then looking at the technologies backing those use cases.

The other aspect is the principles. If I as an organization want to declare that the AI systems, products, and models I am building and deploying in production can be trusted and are responsible, what are the principles guiding us? As I mentioned at the beginning, there are four principles we defined for ourselves at IBM, and one of the activities this committee drives is coming up with a point of view on inclusive LFAI principles that represent the participating companies — the likes of Tencent, Orange, AT&T, the Montreal AI Ethics Institute, and the Institute for Ethical AI and Machine Learning — a lot of organizations participating and creating a collective, inclusive set of principles that can guide the overall technology and use case landscape as we move forward.

As I mentioned, I am one of the co-chairs, for North America; we have Souad Ouali from Orange representing Europe; and we have Jeff Cao, who works for Tencent, representing Asia. One of the first activities the Trusted AI committee contributed to the LFAI landscape was looking at the projects playing in this space: what are the projects in the explainability space, in the bias and fairness space, and in the adversarial space, and listing them there. For example, if you are using TensorFlow you are probably aware of CleverHans, which plays in the adversarial AI space, so that you can detect whether your models are vulnerable to adversarial attacks. As I mentioned, one focus on our side has been to go very broad, both across the AI life cycle and across the number of frameworks we can support — TensorFlow, PyTorch, scikit-learn, XGBoost — but there are a lot of projects already out there. In the explainability space there are the University of Washington's LIME, and SHAP; they are pretty popular, heavily used, and you get a lot of usefulness out of them when you are looking at contextual explanations for local transactions. So again, this landscape is rich and a lot of worthy projects exist, and we definitely invite you.
If you are building the next generation of AI systems and your organization is at a point where you are either rolling them into production, or have probably already deployed a few of these models in production, you want to ensure that you are using these tools and technologies to make sure that what you are deploying is doing its work in a responsible fashion.

The other thing, in general, is some samples of the activities we have been doing in this work group. When we looked at these toolkits — AI Fairness 360 or the robustness toolkit — one of the things we noticed was that each is a toolkit with a lot of Python libraries, which is very popular with data scientists, who typically interact with Python. But there is a huge landscape of developers who are building platforms — platforms for end users, where these capabilities are consumed inside the platform and offered as a service — and for that you need more than a Python library. A lot of the work we are driving has been around a section we call MLOps, and a couple of things are happening there. We are creating integrations with some of the pipeline technologies that are becoming the defining technologies for handling MLOps. If you are aware of Apache NiFi — a very popular project in the data and AI pipeline space; in fact a project that is part of LFAI uses it — we created, for example, a NiFi processor for AI Fairness 360, so if your life cycle needs bias detection and mitigation capabilities, you can use the AI Fairness 360 NiFi processor. Similarly, a lot of the open source community is using Kubeflow, and Kubeflow Pipelines have become very popular, so another thing we integrated there was Kubeflow Pipelines components for these toolkits; a sketch of the idea follows below. There have also been presentations around AI FactSheets, and around how we are integrating with model serving platforms like KFServing. We have other members, like KPMG, who are doing this in practice in the field with clients and customers — a lot of government agencies and cities as well as financial customers — and they came and gave a presentation; a few more details are in the next slides.
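As a sketch of what the Kubeflow Pipelines integration amounts to — hedged, with hypothetical component names, since the real components live in the projects' own repositories — the pattern is to wrap a fairness check as a pipeline step and gate training on its result:

```python
import kfp
from kfp import dsl
from kfp.components import create_component_from_func


def check_bias(data_path: str) -> str:
    """Hypothetical step: load the data, compute a disparate-impact ratio
    with AI Fairness 360, and return 'pass' or 'fail' (details omitted)."""
    # ... build a BinaryLabelDataset from data_path and compute the metric ...
    return "pass"


def train_model(data_path: str) -> str:
    """Hypothetical training step; returns a path to the trained model."""
    return "/tmp/model"


# Turn the plain Python functions into container-backed pipeline components.
check_bias_op = create_component_from_func(
    check_bias, base_image="python:3.8", packages_to_install=["aif360"]
)
train_op = create_component_from_func(train_model, base_image="python:3.8")


@dsl.pipeline(name="fairness-aware-training",
              description="Run a bias check before training a model.")
def pipeline(data_path: str = "gs://my-bucket/training-data.csv"):
    bias = check_bias_op(data_path)
    # Only proceed to training when the bias check passes.
    with dsl.Condition(bias.output == "pass"):
        train_op(data_path)


if __name__ == "__main__":
    kfp.compiler.Compiler().compile(pipeline, "fairness_pipeline.yaml")
```

The NiFi processor mentioned above plays the same role in a NiFi flow: a processing step that scores records for bias before they move downstream.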
A lot of these things are coming from the principles working group, and there is a lot of effort, as I mentioned, to get the participating companies together — the likes of Orange, Tencent, AT&T, and IBM, along with the rest of the participants such as the Institute for Ethical AI and Machine Learning — and create a set of guiding principles that can steer the next generation of technologies they build, ensuring those principles are addressed in the AI systems they are building.

As I mentioned, some of the activities in the technical working group: one presentation, for example, was around how to use the Apache NiFi AI Fairness 360 processor, and we also had presentations on how you can leverage fairness and robustness in your Kubeflow pipelines if you use them. There have been presentations on work in other parts of IBM making AI Fairness 360 compatible with the scikit-learn community and with R — there is a version of AI Fairness 360 launched for R users, for use within that community — and, very recently, one around AI FactSheets.

Now, one of the things I mentioned is that for a lot of these techniques to work, you cannot look only at individual model predictions. Your model gives a response and you can explain it within the context of that particular response; there are some relatively simple algorithms, which we also enable, to detect issues within that context. But what you really want is the capability, over a period of time, to collect the payload — the model requests and responses. Unless you do that, you will not be able to leverage some of the more advanced algorithms that can then determine, as a whole, that over the course of the last six months and a thousand predictions your model has been biased, has been vulnerable to adversarial attacks, has had outliers, has had drift. If you want to monitor all of those capabilities to ensure your AI system is working in a responsible and trusted manner, you need to collect those logs. So one of the technologies we have been working on is payload logging, so we can collect payload logs when your models are deployed and serving predictions, and for this, right now, we are using a project in Kubeflow called KFServing. The idea is that as your models run in production and requests come in for predictions, we collect those payload logs over a period of time, and we use a standardized eventing format, CloudEvents, to enable that; reach out if you need more details.

Then I talked about the principles working group. The team has come up with a set of seven principles being used to guide this at this point in time: equitability, reproducibility, transparency, governance, privacy, security, and accountability. There is quite a bit of detail behind each, but the goal is that as we go through this and mature, we will publish an LFAI white paper — what the LFAI participating companies think is the set of principles you should follow — and also implement it within LFAI, so it is not only a paper exercise.
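Returning to the payload-logging point above, this is roughly what enabling it looks like on a KFServing InferenceService — a hedged sketch using the Kubernetes Python client; the field names follow the KFServing logger feature, but the exact API version, sink URL, and storage path here are illustrative assumptions:

```python
from kubernetes import client, config

# An InferenceService whose predictor forwards every request/response pair
# as CloudEvents to a logging sink, where bias/drift analysis can run later.
inference_service = {
    "apiVersion": "serving.kubeflow.org/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "credit-risk", "namespace": "models"},
    "spec": {
        "predictor": {
            "logger": {
                "mode": "all",  # log both requests and responses
                "url": "http://payload-sink.models.svc.cluster.local",
            },
            "sklearn": {"storageUri": "gs://my-bucket/models/credit-risk"},
        }
    },
}

config.load_kube_config()
api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="serving.kubeflow.org",
    version="v1beta1",
    namespace="models",
    plural="inferenceservices",
    body=inference_service,
)
```

The sink then accumulates the CloudEvents so that fairness, robustness, and drift checks can run over windows of real traffic rather than single transactions.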
Beyond the white paper, we want to ensure that as projects go through the different phases of the LFAI life cycle — your project gets accepted, it is put into incubation, it graduates — there are processes in place that can also give badges and certifications saying yes, this project is following these principles and can be certified, or some mechanism we integrate into the life cycle. Those are the things being worked on as part of the principles working group.

I talked about KPMG; if somebody is looking at how to do this more practically in the field, they have developed their own framework called AI In Control, where they go through different phases — strategy, design, model, evaluate, deploy, and so on — and they bring a set of toolkits. They bring a lot of the open source projects I talked about, and they also use, for example, Watson OpenScale among the technologies; there is a bag of toolkits that goes into the auditability process. A lot of companies want to run an audit check and a governance check before they actually deploy models in production, so they will leverage a service like this and ensure the models are cleared after that auditability and governance process. Some of the use cases they have been seeing are spread across industries, as I was mentioning — banks, credit card companies. Amsterdam, in the Netherlands, is one of the cities they are working with, to ensure that the complaint system open to the residents of that city is not biased in how it addresses and prioritizes complaints. There are even use cases they are handling for a travel agency, because a lot of the time there is bidding going on where bots get involved and take away capabilities from the individual people who should be doing the bidding or buying, so a lot of these checks are going on. The use cases they are addressing are spread across industries — a brewery company, a European capital city, a travel agency, banks — and new use cases are emerging: one we are working on heavily, for example, is with one of the major retailers in the United States, which is using some of these technologies to ensure fairness in their hiring practices. So there is a lot of work being done in the industry around this.

Now, what sort of presentations are participating member companies giving? Orange, for example, did a presentation on responsible AI: how they are approaching responsible AI within Orange and how they are defining their set of principles and guidelines. One of the things also guiding them is the European Union, which has published a white paper laying out the EU's view of what trusted and ethical AI should be, and there is a lot of chatter around regulation.
Just like GDPR, which laid down ground rules, laws, and guidelines where systems holding sensitive data have to pass compliance, there are ongoing discussions that similar regulations will come around AI — to ensure that if you are using AI models, they can be trusted, they are responsible, there is transparency, and you can prove they are behaving that way.

Now, one of the things I talked about is that we have quite a few open source projects in this space, which we leverage both within the Trusted AI committee and inside IBM — IBM leverages quite a few of the advanced algorithms from these projects in some of the products we use with our financial, industry, and banking partners. But just having code in open source is not enough. I think both Ibrahim and Jim stressed this: a lot of projects are in open source but are typically run by a single individual or controlled by a single vendor, and are fairly closed in how they operate — at times delaying or not allowing outside contributions, or in some cases not allowing outside leadership to gain prominence within those communities. They pose a greater risk and lower the opportunity for collaboration and innovation. That is where IBM has diligently followed the model that for the projects we work on in open source — whether our own projects or projects coming from other major organizations — we want to ensure, as much as possible, that they are under an open governance model. That means they are in a foundation, in a neutral place, where a neutral organization holds the copyrights and associated trademarks. That reduces the risk of project abandonment, fosters collaboration, eliminates single-vendor control and the risk associated with it, and gives a very real sense of ownership to all the collaborating members.

Since this is a principle we at IBM follow and want other projects to follow, it makes all the more sense that we do it ourselves. So, as part of that, our VP of Open Technology, Todd Moore, announced just this week at Open Source Summit North America that we are moving all these projects into LFAI under an open governance, neutral model. Part of this is to ensure all the benefits we talked about that being in a neutral place brings, and given the current, unprecedented times we are in and the heightened need to ensure AI systems are built in a trusted, responsible, and reliable manner, these projects should be leveraged by the community, owned by the community, and collaboratively developed by the community. That is the announcement we made: we will be moving these projects into an open governance model under LFAI. The LFAI TAC voted a couple of weeks ago for these projects to be moved in, so we are working through the processes to ensure they get onboarded into the LFAI community.

With that I come to the end of my presentation. I do want to walk you back through a few key things.
First, we have projects which are now part of LFAI that are in this space and address it from multiple perspectives: from security, vulnerability, and adversarial attacks, to model explainability, to projects that look at bias and ethics problems in your models, detecting and mitigating them and giving you a lot of tools and technologies for that. Second, beyond the code — as I mentioned, we cannot solve this by code alone — there is a lot more that needs to happen alongside the code, and that is where the LFAI Trusted AI committee largely plays: we definitely do a lot of technical and hands-on use cases, but we also make sure the larger view is taken in, from regulatory and social science perspectives, coming together as a collective view of the participating companies with a set of defined principles. And the last thing: if you are interested in this research area, it is very active — a lot of papers are being published — and IBM Research has done an excellent job of putting up a website where they publish the latest set of papers they put out in this space, so if you are interested I would encourage you to look at that. With that I will pass on to Jim to talk about the next topic, which is the ML Workflow and Interop committee. Over to you, Jim.

Great, thank you, Animesh, and I just want to thank Animesh for going deep into the Trusted AI committee. As I mentioned, there is also another committee, the ML Workflow and Interop committee. Howard, from Huawei, is the chair of this particular Linux Foundation AI committee; unfortunately he suffered bandwidth issues this morning and was not able to join. I have to empathize, because in these pandemic times, if our internet goes down it creates chaos — I remember two days when my family and I had no internet, and it was tough going for a while. Howard runs a monthly committee meeting and you are welcome to join it. I would just mention again that the TAC calls are Thursdays at 9 a.m. Eastern time, every other week; on the alternate weeks, Animesh runs the Trusted AI committee call, I think one hour later, at 10 a.m. Eastern. Howard runs his calls once a month for the ML Workflow committee, and what this committee does — as you recall, in the landscape there is a huge number of open source projects — is focus on ensuring that they interoperate and work together. So I invite you to join this committee as well if you have an interest.

I will go through this pretty quickly to leave some time for Q&A — I want to try to leave at least 15 minutes — so I will try to get through these slides that Howard prepared for his ML Workflow and Interop committee in about five minutes. The ML Workflow and Interop committee is addressing all these interop issues. The slide here talks about a recent development: the Pyro project, which became an incubation-level project in Linux Foundation AI, and Julia and MindSpore, where there is work to get these working together. I will move on quickly because I want to cover a lot of ground. This is the scope of the ML Workflow and Interop committee. Howard likes to use the terms northbound interoperability and southbound interoperability; northbound is when AI-native programming frameworks are adopted for an application in different areas.
This slide shows the scope of the ML Workflow and Interop committee. Howard likes to use the terms northbound interoperability and southbound interoperability: northbound is when the AI-native programming frameworks are adopted for applications in different areas, so you can think of the northbound as the application level riding on top of many of the open source projects, while southbound interoperability is when the AI-native programming frameworks are used with various compute and storage backends, so that is the infrastructure level. Howard is very interested in ML workflows and interoperability both northbound, at the use-case and application level, and southbound, at the infrastructure level. We look at the various frameworks for interoperability. ONNX is a graduated project of Linux Foundation AI and has a semantic graph representation: TensorFlow, PyTorch, scikit-learn, and lots of other ML and DL frameworks can be converted into ONNX, and once a model is in ONNX it can be optimized for different hardware and run on different platforms, so many different tools, MathWorks for example, use ONNX for interoperability (a small sketch of this export-and-run flow follows at the end of this passage). ONNX, as a graduated project, also has community meetings open to the public; at the last one I think there were 200 people registered, representing 100 different organizations. Another aspect of the problem scope is building interoperability so that various types of deployment can be reproduced on different pipelines. The Kubeflow project is not part of Linux Foundation AI, but it is on our Linux Foundation AI landscape, and many members of Linux Foundation AI, including Animesh Singh, are committers on the Kubeflow project, so we have a lot of connections with projects that are on the LFAI landscape. I do encourage you, if you are listening and participating in the call today, to put questions in the chat and we will try to get to them.

Moving right along, the next thing we wanted to briefly touch on beyond the scope is a diagram that shows a little bit more about what is meant by the northbound application areas: you can see computer vision as an application area, natural language processing, reinforcement learning, and lots of other application areas on top of the AI-native programming frameworks, and then down below, in the southbound, you can see the interop on various types of hardware, clouds, edge devices, and things of that nature. The slide here talks a little bit about how the committee works. We have a standard set of questions we ask, and we try to identify the gaps that keep projects from interoperating. We sometimes invite projects to these monthly calls and ask which projects they are able to interoperate with well and why, and which projects they are not able to interoperate with well and why. Then we go through an exercise for each project: getting cross-community discussions going, looking at what the gaps are, trying to figure out how to create an interop specification, and then developing joint proofs of concept, both where we see opportunities to connect projects and where we see difficulties in connecting them. I should briefly mention that the previous chairperson was the one who helped launch this ML Workflow committee and chaired it before Howard. The ML Workflow and Interop committee update slide shows some of the key developments and key stats; again, I want to leave a little time for Q&A, or just discussion among Ibrahim, Animesh, and me to reinforce any messages the three of us might want to reinforce, so I will skip through these pretty quickly. There are lots of ways to get involved, and please reach out to Howard or any of us if you would like to participate. We have been going through the various projects, like Adlik, Seldon, Kubeflow, and ONNX.
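To make the ONNX conversion path described above a bit more concrete, here is a minimal sketch of exporting a tiny PyTorch model to ONNX and running it with ONNX Runtime. The toy model, file name, and tensor shapes are illustrative assumptions, and exact export options vary by framework version; this is a sketch of the flow, not a reference implementation.

```python
# Minimal, illustrative sketch: export a tiny PyTorch model to ONNX,
# then run it with ONNX Runtime. Names and shapes are made up for the example.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# A toy model standing in for any PyTorch network.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Export the model to the ONNX graph representation.
dummy_input = torch.randn(1, 4)
torch.onnx.export(
    model, dummy_input, "toy_model.onnx",
    input_names=["input"], output_names=["output"],
)

# Any ONNX-capable runtime or hardware-specific optimizer can now consume the file;
# here we run it with ONNX Runtime on CPU.
session = ort.InferenceSession("toy_model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": np.random.randn(1, 4).astype(np.float32)})
print("output shape:", outputs[0].shape)
```

The same exported file could equally be handed to another ONNX-capable runtime or a hardware-specific optimizer, which is the interoperability point the committee cares about.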
For each project we talk through the same five types of questions. This slide shows some of the outcomes from the Adlik discussion; again, I am not going to go through these in detail. We do publish all of the minutes from the community calls, so all of this is available online, and a lot of it is in the Slack channels we have. With that, I think we can open it up for discussion among Ibrahim, Animesh, and myself. If anyone has questions they would like to submit, we will try to answer them, and if not, I have some questions for Ibrahim and Animesh and we can have a good discussion from those. So, great, I will ask Ibrahim a question while we get started: Ibrahim, could you just go over again why organizations find it useful to be part of Linux Foundation AI?

Thank you, Jim; I was not expecting the questions to come from you, so that is good. There are really a number of motivations for companies to become part of the LFAI. There are companies that join to support the foundation because they believe in the efforts we are doing to build and enable the open source AI ecosystem, and they would like to provide funding to enable those efforts, whether that is legal work, supporting infrastructure for projects, marketing for projects, bringing communities together, and so on. There are also companies that join to help their own projects by bringing them to the foundation and becoming members, basically to have a seat at the table and help grow their project within the foundation; although it is not a requirement, we would always love to see a company hosting a project become a member as well. And there is a third type of organization that joins from a pure R&D perspective: these are a lot of the associate members, which are universities, research centers, and other non-profits whose interests overlap with a lot of the efforts we do, and they see it best to become a member so that they can participate more actively in the different committees and efforts we have, and of course be on the mailing lists, have free access to the events, and so on. So every company is different, but basically the common theme is participation and collaboration under a neutral foundation.

Great, thanks, Ibrahim. I do want to remind people that there is a Slack channel, I think it is called #2-track-ai-ml-dl, for track 2 on AI, machine learning, and deep learning, and that Slack channel is open for questions and discussions as well; I will just remind people of that. A question for Animesh while we wait for questions from the audience: Animesh, you had a slide in there about open governance. I know part of the reason IBM is interested in this space is that all our products are based on open source; I think 90% of the capabilities in our products are based on open source, and I know IBM considers it important not just to use open source but to give back. Could you say a little bit more about what you see as the value of open governance, briefly?
Definitely; I think it works at many levels. If you look at the really successful open source projects, they grow because there are volunteers and there is a community that forms organically; obviously there are organizational investments that drive things strategically, but in any open source project that has become hugely popular, like Kubernetes, the organizational interests are, over time, surpassed by the many volunteers who come in and form this huge community to contribute and grow it. Now, from the contributors' perspective, and we have seen this again and again, a lot of open source contributors and developers step back and stop contributing to certain projects because the governance is not right: the amount of effort they are investing is evident to everyone around them, to the other people contributing to that project, and they should have a leadership position or more rights as committers or maintainers, but they are not getting that, because there are vested organizational interests that want to control the project in a particular way, where the voting structure and the ownership are skewed. So you tend to lose a lot of those organic developers, very good developers, and the community starts getting stifled; people get frustrated, they leave the project, and in some cases they even leave their organizations, because if the organizational mandate is to keep contributing, they are not seeing the personal gain coming through. So I think that is one of the key reasons: if you want your project to be truly successful and organically driven by developers who come in excited, while also ensuring that contributors are recognized for the effort they put in, it is very critical to have the right governance, and one thing that makes this possible is having the project under open governance, with neutral IP, in a foundation. We have seen the huge success of projects in Apache or in the Linux Foundation; anybody who follows the CNCF landscape has seen the huge proliferation of the community once Kubernetes moved into CNCF, which made Kubernetes what it is today. So that is why this is so important. But beyond the individual developer perspective, there is the organizational one: if I, as an organization, am going to invest and create a roadmap for my products based on these open source projects, I need to ensure there are no legal concerns around IP or trademarks, that nobody can suddenly change the license of the project unilaterally, and that nobody can change the name, the logos, or the trademarks. These are the things that people who are investing, productizing, and deploying in customer enterprises want to be robust and sound, so again, a project being under an open governance model in a foundation really, really helps.

Great, thanks. Okay, so I think that brings us to the formal end. I don't know, Ibrahim or Animesh, if you have any last words or anything else? No, thanks... sorry, go ahead, Ibrahim. Thank you. For the last word, I would like to invite the people attending this mini-summit to visit the LFAI.Foundation website; we would love people to visit just because they are interested and would like to learn more,
and, going a little further than that, to consider two things. Number one, joining us as a member: as you mentioned earlier, Jim, it is really hard to find a company that is not interested in AI, and specifically open source AI, and we are working as a vehicle to accelerate innovation in this space. Number two, I would recommend they visit us and read about hosting a project with us, and about the success stories of projects hosted with us, growing their communities, graduating, and so on; all of that information is available on the website. I am also available for one-on-one discussions; people can reach me at my email address, my first name at linuxfoundation.org, and we can book time to discuss any specific questions or inquiries about membership or projects. Thank you, all of you. Jim? No, I think we are at the end, so thank you again, everyone, for participating. I think we will call it a wrap, and thanks to the support staff as well for helping us put on the show; much appreciated.