Good morning, good afternoon, and good evening to everyone on the call. This is Natarajan Subramanian; I go by Nat. I work at Tech Mahindra as head of enterprise architecture, digital, and AI, and I am also an LF AI governing board member and the Acumos project TSC chair. Today we are going to talk about LF AI, what LF AI is, and then about the Acumos AI project.

Let me go to the next slide. That was just a basic introduction of me; now, the LF AI Foundation. It was founded roughly two and a half years ago, in March 2018, by its member organizations, predominantly focused on the AI and machine learning space and on collaboration to create a harmonized AI/ML ecosystem for the open source community. Originally it was called the LF Deep Learning Foundation; around May of last year the name was changed to the LF AI Foundation, and that is where we are today. There are many different projects under the LF AI Foundation umbrella.

As a governance overview: it is a single funding effort supported under the Linux Foundation. Every project has its own technical steering committee, which manages the project and operates under its own governance. The structure defines graduated projects and incubation projects, similar to other foundations under the Linux Foundation umbrella.

So what is the LF AI mission? The mission is to build and support an open source AI community and to drive innovation and collaboration within it. There are many commercial entities developing AI and ML solutions, but this is one area where we can communicate and work as an open source community, give back to that community, and have a shared collaborative space. That is one of the main mottos of the LF AI mission.

At a high level, here is how LF AI is structured and governed. There is a governing board: every Premier member gets a voting seat, and the General members get one governing board seat for every ten General memberships. Right now we have ten-plus General members, so one General member sits on the governing board. In addition to that, each project has its own Technical Steering Committee, and for LF AI overall there is a Technical Advisory Council and an Outreach Committee, which focuses on marketing efforts and on promoting LF AI and its projects.
In addition to that, there are several subcommittees, which we will look at in a bit. As for hosted projects, there are more than ten at this moment, and three of them are graduated projects under the LF AI umbrella. Acumos is one of them; in fact, the foundation was originally formed around Acumos, which was the first graduated project. Another is ONNX, in which Microsoft, Facebook, and many other organizations are involved. The third is Angel, a project that came out of Tencent and a few other companies. Each of these three graduated projects has its own unique value in the AI/ML space. Beyond those, there are many incubation projects under the umbrella, such as Horovod, Adlik, and sparklyr, and the list keeps growing. In fact, quite recently one big-name organization joined LF AI and moved an internal project under the LF AI umbrella; I am holding that thought until LF AI makes the public announcement. The point is that LF AI is constantly growing, both in membership and in projects.

Next, the key members of the organization. There are nine Premier members at this moment: AT&T, Baidu, Ericsson, Huawei, Nokia, Tech Mahindra (which I am representing here), ZTE, Tencent, and Zilliz. There are many other members in the General category, including IBM, Red Hat, Orange, XenonStack, inwinSTACK, and Gemini Open Cloud. In addition to that, there is a membership category called associate membership, for non-profit organizations and academic institutions. Many trusted-AI and ethics institutions are part of it, and universities such as NJIT and NYU have also become members; they are actively involved in code contributions and in many of the project and technical discussions. That is another avenue for taking LF AI to a broader reach.

Now, as I mentioned, there are sub-level working committees that are very narrowly focused on the technical aspects of the projects under the LF AI umbrella. One of the key subcommittees is the ML Workflow committee, which now covers not only ML workflow but interoperability as well. What this committee does is lay out the ML workflow in three categories. When you talk about any ML effort, you need to focus on data governance, and you need to manage model creation and operation. Then, once you have the data products and the model, the question becomes: how are we going to utilize them?
That is where the rollout of the serving pipeline comes into play. Those are the three top-level streams, and each one is aligned with many smaller pockets of work. Beyond laying out and streamlining that workflow, this committee looks across all the LF AI umbrella projects to see where each one fits into these categories. That is the first step toward interoperability. For example, Acumos has certain serving-pipeline needs; rather than reinventing the wheel on the Acumos side, we can check whether any other project in the LF AI umbrella has a serving pipeline we can leverage. That is one small example. The next step is: what if a project is not part of the LF AI umbrella but is still an open source project in the general AI/ML space? We can engage with it, see whether we can work together, and potentially bring it in. If you look at the left pane of the slide, there are many participants who are not LF AI members but who actively participate in these discussions and help take this to the next step. That is what we are working on there.

Similarly, there is another key committee, the Trusted AI committee. In the AI/ML space there is always a concern, in every community and in the public mindset, about whether we can trust artificial intelligence and machine learning algorithms. LF AI does not want to take the approach of ratifying or certifying whether an algorithm passes particular criteria. However, we certainly want to be involved at the level of general principles: guidance and working principles. That is why this committee was formed. It looks at products, toolsets, and projects that can help with bias detection, robustness, vulnerability checks, fairness, and so on, as a working group. It evolves and defines policies and guidelines, shares them, and communicates around them. In this area, a couple of projects are also about to join LF AI pretty soon. That is the Trusted AI committee; if you look at it, many organizations are involved in defining the principles, and there is a working committee that works on specific use cases and tools.

I know I am going slide by slide. I have set aside time at the end of the presentation for questions and answers, so I will keep the flow going. If you have any questions, please post them in the Q&A with the speaker; I will look into them at the end of the session and answer whatever I can. If not, I can certainly follow up with you offline, or you can reach out to me offline.

The next slide gives an overview of the LF AI landscape.
I know it is certainly an eye chart, but if you look at it, there are more than 200 projects in the open source community space predominantly serving AI and ML, spanning many different categories. This landscape is one of LF AI's efforts: the Technical Advisory Council and the internal LF AI staff team worked together to put it across. It does not show only LF AI projects; it covers projects across the open source AI/ML space. The bigger icons, like Angel, Milvus, and Acumos, are LF AI hosted projects; the other icons are open source projects not yet under the LF AI umbrella. It gives good guidance, and you can see how many contributions and commits are happening across these projects.

So that was an overview of LF AI and the landscape blueprint. Now I want to take the next step and get very specific about the Acumos project and how it was conceptually initiated. Back in 2017, Tech Mahindra and AT&T were looking at how the AI space was growing phenomenally in this new digital age. Whether a company is in high-tech manufacturing, energy and utilities, banking and financial services, insurance, or healthcare, every company is talking about artificial intelligence and machine learning. Based on the McKinsey report from 2017, companies invested somewhere between 26 and 40 billion dollars in artificial intelligence. I know that is a three- or four-year-old chart, but my guess is that in 2020 the figure is much higher; everybody is looking at it and seeing value in it.

We were looking at how we could bring commonality and have a common-use artificial intelligence platform. That is how the idea was originally conceived, and AT&T and Tech Mahindra started working collaboratively on an artificial intelligence platform where models can be onboarded and then made available as microservices, so that they can be deployed and served for whatever need arises. When a model is onboarded, it is already pre-trained, so it comes with an assumed predictive capability. That is how we started. And although Tech Mahindra and AT&T are telecom companies, this platform is in no way fenced in for telecom use only.
It can be adopted by every industry, in every business domain, across the globe. Whether it is the security space, healthcare and life sciences, financials, utilities and energy generation, you name it: every company has an opportunity to leverage this platform. That is what this slide highlights.

Going further, this part is very specific to Acumos AI, and we will go over the high-level platform architecture diagram in the next few slides. Before that, I just want to give a high-level view of how many commits we have and how many people have contributed so far: 7,000-plus commits and 100-plus contributors. As I said, this is an open source project, licensed under Apache 2.0, so it can be adopted and utilized by any organization or individual that wants it.

Next, a few highlights of how the past years went. After AT&T and Tech Mahindra started the collaboration in 2017, the seed code was launched in 2018, along with the formation of the LF Deep Learning Foundation in March 2018. Since then there have been four releases. At the end of November 2018 the first release, Athena, was launched. In 2019 there were two major releases, Boreas and Clio; we follow a naming scheme based on Greek mythology, roughly in alphabetical order. Then, just about a month ago, we launched Demeter, and we are currently working on Demeter point release 1 and point release 2. For 2021, the next major release will be code-named Elpis, with subsequent point releases for updates.

From Athena through Clio and Demeter, our modus operandi was to ship a major release every six months, with one point release after each. As we slowly mature, we evaluated that and decided that instead of releasing every six months, we can go with one major release a year and two point releases, roughly one per quarter. That way we have breathing room to accommodate bug fixes and defects raised by the community, plus any small enhancements, while the major releases carry the new features. That is the cadence, and that is the achievement of the last two years since we launched the Acumos project along with LF AI.

Next we are going to look at the four major quadrants. Let me see whether I can switch the view... okay, I believe everybody now sees the full quadrant. The first quadrant is about how we create and onboard a model.
The Acumos platform currently supports many different toolkits for onboarding, such as scikit-learn, TensorFlow, H2O, RCloud, and C++. A pre-trained model can be onboarded and converted into a microservice for specific deployment needs, or the model can be onboarded simply as an artifact. There is also an option to onboard a pre-dockerized model, if you do not want to convert it using the microservice-generation engine on the Acumos platform. Those are the options for bringing in and onboarding a model.

In addition, you have an option to train the model. Not within the platform, for now; we are slowly working on enabling that feature. But you can take the pre-dockerized or microservice model out, train it, bring it back, and onboard it again. That is, roughly, the fourth onboarding option.

Once the models are available, you go to the publishing step: publishing the model to the marketplace, like an app store. When you publish to the marketplace, the user community can come and see which models are available and what each one does, and every model gets a searchable, domain-specific catalog entry. That is a feature of the marketplace publication. You can also share instead of publishing: if you do not want to publish a model to a public place, you can share it with a small group of people, work with your peers to evaluate it, and once everything is finalized, take it public. That is also possible.

Lastly, there is execution, where you generate, execute, and deploy the model. That is the fourth quadrant, which looks at the target environment. Since the model is a Docker image, you can deploy it wherever you can run a Docker engine, or you can deploy through Kubernetes, onto your own private or hybrid cloud, or onto public clouds like Azure or AWS. You can deploy through the platform, or you can download the microservice, take it inside your own perimeter, and deploy it any way you want. So the Acumos platform is trying to provide the full lifecycle for an AI/ML algorithm. To make that first quadrant concrete, there is a small onboarding sketch below.
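As an illustration of the first quadrant, here is a minimal sketch of onboarding a scikit-learn model with the acumos Python client. The API shown (Model, AcumosSession, push/dump) reflects my understanding of that client, and the URL is a placeholder for your own instance; treat this as a sketch, not a definitive recipe.

```python
# Minimal onboarding sketch using the acumos Python client (pip install acumos).
# The push_api URL below is a placeholder for your own Acumos instance.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

from acumos.modeling import Model
from acumos.session import AcumosSession

# Train a small classifier locally; Acumos onboards it pre-trained.
iris = load_iris()
clf = RandomForestClassifier(n_estimators=10)
clf.fit(iris.data, iris.target)

def classify(sepal_length: float, sepal_width: float,
             petal_length: float, petal_width: float) -> int:
    """Predict the iris species index for one flower."""
    return int(clf.predict([[sepal_length, sepal_width,
                             petal_length, petal_width]])[0])

# Wrap the function; the type hints drive the generated microservice interface.
model = Model(classify=classify)

session = AcumosSession(push_api="https://acumos.example.com/onboarding-app/v2/models")
# Either push directly to the platform (credentials required):
# session.push(model, 'iris-classifier')
# ...or dump an onboarding bundle to disk for CLI/web onboarding:
session.dump(model, 'iris-classifier', '.')
```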
Now let us go over how we evolved across the four releases. Athena predominantly focused on enhancing model onboarding and model deployment; it was model-centric. Then, in the Boreas release, the focus shifted: the model is onboarded pre-trained, but how can we allow it to be trained further, and how do we want to support that? That is the area Boreas enhanced. With those two elements covered, there was still one important element we had missed, which is of course the data. Without data alignment, the model cannot deliver any value proposition to the end-user community. So the project team focused on that next: the NiFi data pipeline was integrated, and we also added the ability to create and do modeling in a Jupyter notebook. Those are the things that were taken care of in the Clio release.

Then, for Demeter, which just launched in June, the team looked at taking the platform up a level with cloud enablement. The whole platform can now be deployed in a Kubernetes-based environment, whether that is your on-prem solution or a cloud-native environment such as Azure or AWS running Kubernetes. In addition, model deployment through Kubernetes is supported, because most organizations are aligning around Kubernetes. Those are the features that came in the Demeter release.

The next slide talks at a high level about the platform user flow. You have the toolkits, the data, and the machine learning engine, with the Acumos repository and microservice generation behind them. The analyst predominantly focuses on the data and the machine learning algorithm, and works with the modeler to create the model. Once that is done, at the end of the day it goes to the end-user community. The end user is not necessarily an individual; it could be a business organization or a department that needs to run that algorithm to get a necessary result. So the modeler is involved, the analyst is involved, and then the user community is involved. That is the high-level flow, and the sketch below shows roughly what the end-user side of it looks like.
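Once a model microservice is deployed via Docker or Kubernetes, the end-user community consumes it over the network. The endpoint path and JSON payload below are purely illustrative placeholders (the real Acumos model runner defines its own interface, historically protobuf-based), but the shape of the interaction is roughly this:

```python
# Illustrative only: the endpoint path and payload shape are hypothetical,
# not the actual Acumos model-runner contract.
import requests

# Placeholder URL for a deployed 'classify' microservice.
MODEL_URL = "http://models.example.com:8061/model/methods/classify"

payload = {
    "sepal_length": 5.1,
    "sepal_width": 3.5,
    "petal_length": 1.4,
    "petal_width": 0.2,
}

resp = requests.post(MODEL_URL, json=payload, timeout=10)
resp.raise_for_status()
print("Predicted class:", resp.json())
```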
Moving on to the highlights of the Demeter open source release: platform CI/CD, model deployment enhancements, the model workbench with the predictor and ML Workbench, and, one of the other key features, licensing. Let me talk about licensing at a high level. Through federation, company A's Acumos platform can federate with company B's Acumos platform, and the two can start exchanging their models. They can also go through the license manager for tracking, with whatever licensing terms they want to work under. In that way it helps the two organizations, company A and company B, monetize their machine learning models. That is what the licensing capability is about: the License Usage Manager, with "right to use" (RTU) as the terminology for the terms a model is licensed under. Some key statistics are on the right-hand side of the slide: how many lines of code were developed in the Demeter release, how many subprojects (or models, if we want to call them that), how many user stories and tasks, and how many epics were addressed. Those are the high-level numbers for this one.

The next slide is a technical eye chart: a very granular view of the platform architecture. We saw the four-quadrant view, which is high level, but this one is granular, across the four biggest buckets of the Acumos platform: onboarding, Design Studio, validation, and the portal/marketplace. You can also see the gateways and interfaces: E1, E2, and E6, which cover training and model deployment, and E5, which is the federation interface. In the bottom right you will see the LUM, the License Usage Manager, which sits outside the platform and manages license usage. The toolkit support libraries are on the left panel. Federation is another element here: it covers how models are exchanged between company A and company B, and it is not limited to two parties. You can have multiple federations with multiple organizations, and you can also have a private catalog between company A and company B, because you probably do not want to share the A and B catalogs with company C. Those are the things you can do here, and the conceptual sketch below shows the idea.
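To make the federation-plus-licensing idea concrete, here is a purely conceptual sketch. None of this is the actual Acumos E5 federation or License Usage Manager API; it only illustrates how a private catalog and a right-to-use record might gate a model exchange between two companies.

```python
# Conceptual sketch only: NOT the Acumos E5 federation or LUM API.
from dataclasses import dataclass, field

@dataclass
class RightToUse:
    model_id: str
    licensee: str
    max_executions: int
    used: int = 0

    def consume(self) -> bool:
        """Record one execution if the license still has headroom."""
        if self.used < self.max_executions:
            self.used += 1
            return True
        return False

@dataclass
class FederatedCatalog:
    owner: str
    # Private catalog: only named peers can see the shared models.
    shared_with: dict = field(default_factory=dict)  # peer -> set of model ids

    def share(self, model_id: str, peer: str) -> RightToUse:
        """Share a model with one peer and issue an RTU for tracking."""
        self.shared_with.setdefault(peer, set()).add(model_id)
        return RightToUse(model_id, peer, max_executions=1000)

catalog_a = FederatedCatalog(owner="company-a")
rtu = catalog_a.share("fraud-detector-v2", peer="company-b")
assert rtu.consume()  # company B runs the model once; usage is tracked
print(rtu)
```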
Next, cloud migration: how we adopted it and how we worked on it is laid out across layers, starting from the infrastructure layer and then the application services layer. As I said, the platform was made Kubernetes-enabled with Helm charts, and the Helm charts are bucketized into two groups, the core elements and the dependent elements, plus the application services. Every service is bifurcated that way, which allows the community to take it and utilize it. And if you already have some of the pieces in place, for example an existing NiFi pipeline, a Jupyter notebook environment, or a Nexus repository, you do not have to create a new one; you can leverage and utilize that existing footprint. The same applies to the database. That is what is laid out here.

Moving to the next one, this talks at a high level about how Acumos can help any organization. I know there are many solutions available commercially as enterprise offerings, from AWS, Azure, IBM, Google, and many other organizations. Acumos does not limit you from utilizing them; it has room to work alongside them, because native APIs and external APIs are exposed here that allow you to integrate with, say, AWS or Azure ML. We are also constantly evaluating, as we progress through upcoming releases, how the Acumos platform can work with those commercially available, enterprise-grade products. That is what this slide is talking about.

And then this one is what I touched on a little earlier: federated training. It is not only that you share your model between company A and company B; for example, company A may want to sell an algorithm to company B with the necessary licensing. But company A's algorithm may not have the information, or the privilege, to access the data that company B holds. Model federation allows company A to move the model to company B, where the model can be trained, retrained, or continuously learn in company B's environment, get refined, and then come back to company A for any modifications, fine-tuning, or updates. It is one mechanism for an iterative process, and this federation supports it. The sketch below shows the idea in miniature.
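Here is a small conceptual sketch of that federated-training loop: the model travels to company B, learns on data that never leaves B, and the updated parameters come back to company A. The tiny linear model and the averaging step are illustrative assumptions of mine, not Acumos internals.

```python
# Conceptual federated-training round: company B's data never leaves B.
def local_update(weight, data, lr=0.1):
    """One pass of gradient descent on a 1-D linear model y = w * x."""
    w = weight
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

# Company A's current model parameter.
w_a = 0.0

# Company B trains on its private data (this stays on B's side).
private_data_b = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # roughly y = 2x
w_b = local_update(w_a, private_data_b)

# The refined parameter returns to company A for fine-tuning or merging.
w_a = (w_a + w_b) / 2  # illustrative merge; real schemes vary
print(f"Updated weight at company A: {w_a:.3f}")
```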
I know I have covered enough high-level features of the Acumos platform, so I want to go over some use cases. We are coming very close to the timeline, and I want to leave a good amount of time for Q&A, so let me go to the sample use cases where we currently leverage the platform. One, of course, is virtual machine lifecycle management: that machine learning model tracks and monitors all the virtual machines, benchmarks their performance, predicts failures, and raises alerts for routine maintenance. That was one of the great use cases; it was trained and deployed on the Acumos platform, rolled out to a user segment, and monitored. Another is image classification, and the last one is sentiment analysis. Those are the three high-level use cases.

In addition to that, we have very specific use cases with other open source community projects, on the ONAP and O-RAN RIC side, in the managed orchestration area. The Acumos platform generated the microservices and was able to federate with those projects and deliver the microservices to them, because the ONAP community itself does not have the capability to create a microservice; instead, it leverages the Acumos platform to generate the microservice and take it to the next level. That is how it worked with ONAP: the Acumos platform created the microservice, passed it on through federation to ONAP's own Acumos instance, and from there the team took the microservice, added the specific pieces needed for DCAE so that the model could run on the DCAE platform, designed it, and was able to run it successfully. These are classic examples of how the Acumos platform can collaborate with other open source projects. We also have another example with the O-RAN RIC, an edge-level deployment, where we were able to deploy the microservice onto the RIC through the RIC integration.

Next, one of the features of the Design Studio in Acumos, which we call AcuCompose, lets you chain microservice models. In this use case, a few microservices are chained together: a fraud solution, an image model, and a sentiment model. All of these models are chained, with the model runner coupling them, to give one bigger composite microservice, one integrated solution. The next slide gives a classic example: it does face detection, offers face blurring, and also classifies which region people are from, what kind of apparel they wear, and whether their mood is sad or happy. It is just a tiny example, but it gives the community the opportunity to chain models and serve the purpose, rather than building one biggest algorithm and then worrying about its performance. The toy sketch after this shows the chaining idea.
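Conceptually, what the Design Studio composition does is pipe one model's output into the next. This toy sketch is my own illustration of that idea, not the Acumos composition engine or its actual wiring; the real platform chains deployed microservices rather than in-process functions.

```python
# Toy illustration of model chaining, in the spirit of Design Studio
# composition; the real platform wires microservices together instead.
from typing import Any, Callable

def chain(*models: Callable[[Any], Any]) -> Callable[[Any], Any]:
    """Compose models left-to-right into one composite 'microservice'."""
    def composite(payload: Any) -> Any:
        for model in models:
            payload = model(payload)
        return payload
    return composite

# Stand-ins for face detection, face blurring, and mood tagging.
def detect_faces(img: str) -> dict:
    return {"image": img, "faces": [(10, 10, 32, 32)]}

def blur_faces(d: dict) -> dict:
    return {**d, "image": d["image"] + " (faces blurred)"}

def tag_mood(d: dict) -> dict:
    return {**d, "mood": "happy"}

pipeline = chain(detect_faces, blur_faces, tag_mood)
print(pipeline("frame_001.png"))
```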
And with that, we have pretty much come to a close. These are the main resources for LF AI and Acumos. The first links are the LF AI Foundation site and the LF AI Foundation wiki, where you can see the different committees, their meetings and action items, meeting notes, and all the presentations. The three mentioned below them are very specific to the Acumos project: acumos.org, which covers the whole project; the Acumos wiki, where the project meetings, technical discussions, architecture diagrams, and community discussions live; and the documentation site, which carries all the release documents and everything else. That is pretty much it, so let me go to the next screen. We are close to the 40-minute mark, and I believe we have 10-plus minutes left, so let me go over any questions.

Let me read out a question from one of the gentlemen, Mr. Kreiser: "How do you solve problems such as GDPR compliance and privacy laws with the federated model used here?" Acumos itself does not have any compliance mechanism embedded in the system. Our expectation at this moment is that when company A or company B brings in a model or algorithm, they make sure they follow all the data compliance requirements; that responsibility comes into play on their side. That said, if down the line there is an open source tool that can cross-reference this, we could potentially take it, do an API-level interface, and allow it to scan or screen through. I hope that answers your question.

Next one: "I see the Acumos platform only supports Ubuntu. Is CentOS or any RHEL distribution supported?" From personal experience, I can give an update on this. Yes, the platform is developed on Ubuntu; however, our Tech Mahindra team has deployed it on CentOS and it worked fine, and it has also been tried on OpenShift, which is based on RHEL, and that was supported too. There is no official support, but I do not see a problem running it, because it is all based on the Linux kernel.

"How can a business access this service?" As I said, this is a completely open source project. You can go to acumos.org or the wiki (the links are mentioned in the presentation), download the whole platform from the repos, and deploy it in your organization as open source.

And can it be white-labeled? Yes, because it is open source, you can take it and brand it. This is from another gentleman, Ron B. For example, my organization, Tech Mahindra, took Acumos and branded it
as our own enterprise offering, delivered as an open source product. However, I am not 100 percent sure whether you have to be an LF AI member to do that; we can double-check with the LF AI team, and if you would like, check with me later and I will see whether I can connect you with them. That is pretty much it for the questions. If anyone has anything else, any open discussion, I am open to it. I believe we are pretty close to time, just five minutes early, so I am happy to take any other questions, and I will stay on this call for the remaining five minutes until the session wraps up.

Sure. The meeting will be ending in a couple of minutes. Before that, I would like to thank the team and all the participants for attending today. You can always chat with me on Slack, through the Open Source Summit / ELC conference Slack, and I am available for any of your questions, including the narrower ones. There are also many other AI/ML track topics throughout the summit, so please do attend those and get more information. Okay, I think we can close the session. Thank you. Thank you all.