Hello and welcome, everyone, to the 2021 Linux Foundation Open Source Summit happening in Seattle. Good morning, good afternoon, or good evening depending on where you are watching from, and thanks for joining us to learn how to build an AI marketplace on top of Kubernetes. I'm Animesh, and I lead the Watson Data and Open Source Platform efforts. With me I have my colleague Christian. Christian, why don't you introduce yourself? Hello, my name is Christian. I work with Animesh; we both work for the Center for Open Source Data and AI Technologies, and we have been working on several projects together. This latest project, the Machine Learning eXchange, is what we're going to present today. Okay, thanks. So let's get into it. As Christian mentioned, we work for a group in IBM called CODAIT, so let's start with that. The topic of our session today is the Machine Learning eXchange, and if you're hearing this name for the first time, that's because today is the first time we are bringing it up in a public conference. CODAIT stands for Center for Open Source Data and AI Technologies. It's a group in IBM focused, as the name suggests, on open source data and AI, and it spans many technologies, including some very popular open source projects like Spark, TensorFlow, PyTorch, and Kubeflow. We work in these ecosystems and we're responsible both for what we need to make these projects work in Watson and for enhancing these projects based on the requests that come in from the community. The beautiful picture you see there is of the IBM Silicon Valley Lab. It's part of Silicon Valley, though it's well away from the hustle and bustle, nestled between green mountains. We have our own cricket field, as you can see, and a lot of hiking trails nearby; it's right in the middle of nature. So if you want to work in Silicon Valley and not be bothered by all the hustle, bustle, and traffic, this is the place to be. With that, let's get started with the topic. In machine learning and AI, at a very fundamental level, what we are doing is using data to build models which then automate decisions. That holds throughout the AI life cycle: we use data, then build models, which help automate decisions. That is a high-level view of the AI life cycle, but if you look at its pillars, datasets and models clearly stand out. And when you zoom in, you see that these steps are not as simple as they appear at first. Each of the horizontal streams is a huge field in itself. Take data preparation, which includes data cleansing and data ingestion and goes all the way to transformation, feature engineering, and data splitting; there are multiple products and multiple companies specializing just in that space. Then comes the machine learning and AI space, which is all about how to create that initial model.
Then comes launching distributed training, hyperparameter optimization, neural architecture search, and validating your model. That whole model-creation phase, where you are running distributed training and finding the right hyperparameters, is again a field where a lot of products and companies specialize. And last but not least is the area where you have models deployed in production. Models, unlike applications, are living, breathing entities. You deploy an application, you send in the same input six months later, and as long as you haven't changed the application version, you're guaranteed the same output. Not so with models: same model, same input, but six months down the line it can give you vastly different outputs, because the data around it has changed. Hence this process is very iterative. You need to do it again and again, you need to roll out canary versions; sometimes your dataset is changing, sometimes there are anomalies, so you need these automation pipelines. So as we talk about the three pillars: we saw that datasets and models are pillars, but pipelines have become very, very important for automating the whole data and machine learning life cycle. That's why we sincerely believe machine learning and data pipelines are the third pillar of this structure: datasets, models, and pipelines. Now, how do we speed up this AI life cycle? With the number of steps that need to be performed, the process today remains bifurcated among various teams: parts of the team work on the data part of the life cycle, parts create models, parts do feature engineering and create new sets of features. Because of this bifurcation and these long horizontal life-cycle processes, there is a lot of duplication and redundancy. A lot of times you have similar sets of features being created, similar versions of datasets, very similar models and pipelines, and especially similar pipeline tasks. There is no reuse, there is no sharing, and everyone is working in their own silo. So it's becoming very clear that there is a strong need for a central datasets, models, and pipelines catalog, one that can be shared and used across organizational boundaries, and not only organizational boundaries but also across different parts of the data and AI life cycle. The teams creating features in the data part of the life cycle should be able to share those artifacts with the machine learning part of the life cycle, and similarly all these different models and engineered datasets should be shared. The other part is that you also want strong governance, traceability, and lineage. And why is that important?
When you go on the internet and search, you will find all kinds of datasets; you'll probably find thousands of models if you Google for them. What is missing is an audit check, a quality check, a proper licensing mechanism. Can you take a dataset and use it right out of the box without worrying about licenses? Have all the sources been identified? Is the proper lineage tracked as part of the dataset metadata? You want a central catalog that can tell you: if you are picking up something from here, you can be sure it has the proper license and the proper traceability and lineage, and you can see how the dataset or the model has traveled. That is also a very central piece of this catalog we are talking about. So with that, we are announcing the Machine Learning eXchange. As I mentioned, if you're hearing this name for the first time, that's correct, because this is the first time we are announcing it. We are joining hands with LF AI & Data: we are announcing this project and moving it into open source, and not only into open source but into open governance. That means the license, the trademark, everything lives in a central place owned by LF AI & Data, and we will be working with the larger community and the partners in LF AI & Data to jointly evolve it. We firmly believe that to create this ecosystem of datasets, models, and pipelines, we need to do it in a totally neutral way, where the foundation has the ownership rights, there is a neutral license, and we work as community members on enhancing it. So Machine Learning eXchange is the project we are jointly announcing with LF AI & Data, and this is how it looks. I talked about pipelines, datasets, and models; notebooks, of course, are the language of data science, how you actually write data science code. I think we don't need to debate that anymore; almost everyone is using notebooks to create the initial models and data science code. Notebooks are also available as part of the Machine Learning eXchange, and you can go and look at all these different artifacts. Let's go through them one by one, starting with a bit of the architecture and some of the capabilities. So what does Machine Learning eXchange actually provide? There is a read-only hosted version: you can hit a website, go there, and browse the assets. But there is also a version you can pull from GitHub and deploy at your own end, which allows you to upload, register, and execute as well. You can upload your own assets, register them, and then launch them, including AI pipelines, models, datasets, and notebooks. When you register a model, for example, it generates an automated sample pipeline behind the scenes, for instance to deploy your model on top of a Kubernetes cluster or on KFServing, which is an embedded engine.
The pipeline engine here is powered by Kubeflow Pipelines, a very popular open source project for machine learning and data pipelines; we are using Kubeflow Pipelines on Tekton under the covers. Serving, when you actually deploy your models, is powered by another very popular open source project called KFServing, and we'll talk about KFServing and pipelines in more detail later. There are other projects that form the basis of it, but at a very high level there is a UX, there is an API server, and then backing all the assets is a relational DB for the metadata plus an object store for the assets themselves; the metadata is shared between the object store and the relational DB. So let's talk about how pipelines work. This is the pipelines tab, as you can see. You can search for pipelines and select one, and after you've had a look, if this is the pipeline you want to run, you can launch it. And if you don't see a pipeline you like, you can go ahead and register your own in the executable version, if you're standing up this platform at your own end. When launching the pipeline, you can select the parameters and input your own values, and the pipeline is launched behind the scenes, using the Kubeflow Pipelines on Tekton engine, giving you logs streamed in real time, metadata, visualizations, and lineage. As your pipeline runs, you get a lineage view of all the artifacts produced as part of it. Pipelines are made of pipeline components, so we also provide a way for you to register your own components so that you can share and reuse them across teams. The one we show here is a very simple component that just echoes, but think of things you do again and again, like creating a Kubernetes secret or downloading a dataset, tasks that need to be plugged into multiple pipelines. This is the place to plug them in and test them out without having to create a full pipeline: you can register your component, launch it, verify it works, and then reuse it in different pipelines. Models. As we discussed, models are a very important piece, the end piece of the puzzle: when you go through this big data life cycle, at the end you're producing models. There are a lot of pre-registered models that come as part of Machine Learning eXchange, around object detection, text sentiment classification, et cetera, so you can pick and select from them, or you can register your own models. Once you select a model, you can look at the model description, and based on the model metadata YAML we also generate some automated code for deploying the model. You can deploy your model on a Kubernetes cluster, or on the embedded KFServing engine, which comes as part of Machine Learning eXchange.
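To make the pipelines and components discussion concrete, here is a minimal sketch of what a pipeline definition can look like in the Kubeflow Pipelines Python DSL (kfp SDK v1 style), the same DSL mentioned later in the talk. The container image and the echo logic are placeholders for illustration, not actual MLX code.

```python
import kfp
from kfp import dsl


def echo_op(text):
    # A trivial reusable component, similar in spirit to MLX's echo sample.
    return dsl.ContainerOp(
        name="echo",
        image="library/bash:4.4.23",
        command=["sh", "-c"],
        arguments=['echo "%s"' % text],
    )


@dsl.pipeline(name="echo-pipeline", description="Two chained echo steps")
def echo_pipeline(message: str = "hello MLX"):
    step1 = echo_op(message)
    step2 = echo_op("all done")
    step2.after(step1)  # force sequential execution instead of parallel


if __name__ == "__main__":
    # Compile into the archive format the pipeline engine consumes;
    # the compiled file is what gets registered and launched.
    kfp.compiler.Compiler().compile(echo_pipeline, "echo_pipeline.tar.gz")
```

Registering a component in MLX, as described above, lets a small building block like `echo_op` be tested on its own and then reused across many such pipeline definitions.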
So we can select a model, give it a name, and launch it, and it will be deployed on the embedded model-serving platform underneath Machine Learning eXchange. To deploy it, again, it uses the Kubeflow Pipelines engine, which essentially takes your model bits, downloads them, and pushes them either to Kubernetes directly or to the embedded KFServing engine that comes with Machine Learning eXchange. Now, datasets are one of the most important pillars of this life cycle, as we discussed. You can browse the datasets and look at the details and metadata. In this case we are looking at the JFK weather dataset, which covers climate characteristics around JFK airport: temperatures, coordinates, humidity, et cetera. You can select this dataset and launch it. What does launching mean in the context of a dataset? The default functionality out of the box is that it downloads the dataset onto your Kubernetes or OpenShift cluster, wherever you are running Machine Learning eXchange, and creates a PVC, a persistent volume claim, which can then be used by the rest of the life cycle. For example, if you need to launch distributed training later on, the dataset is already downloaded and available on your Kubernetes or OpenShift cluster through that persistent volume claim. As we go through this, Christian is going to come in and show some of these assets as well. Notebooks work through a similar mechanism: you can launch a notebook and it will be executed as a batch process. Behind the scenes we use a project called Elyra, and Elyra's notebook component, to take your notebook, treat it as a single batch step, and launch it using Kubeflow Pipelines under the covers. That's how notebooks work, and you'll see it later as we go through the steps. This part shows you how you can launch a notebook. Remember the dataset that was downloaded onto a PVC: here you can launch a notebook that loads that dataset from the PVC and runs analysis on top of it. So this is the notebook pulling the JFK weather dataset from the persistent volume claim and running analysis on it, and you can see the logs streamed in real time. Okay, let's talk a bit about the catalog, the actual content. So far we showed you the framework, how it works and what its capabilities are; this part goes into the catalog and the content behind it. As I mentioned, some default samples come pre-populated: models around object detection, an image caption generator, a resolution enhancer, a weather forecaster; datasets like CodeNet, the Finance Proposition Bank, and the JFK weather dataset; pipelines and pipeline components we have created, for example under the Trusted AI umbrella, which let you check fairness on your models or run adversarial robustness checks; and pre-integrated sample notebooks that come as part of it.
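As a hedged illustration of how a later pipeline step could consume the PVC created by a dataset launch, here is a sketch using the same kfp v1 DSL; the claim name jfk-weather-pvc, the mount path, and the file layout are hypothetical placeholders.

```python
from kfp import dsl


@dsl.pipeline(name="use-dataset-pvc",
              description="Mount a dataset PVC into a pipeline step")
def use_dataset_pipeline():
    # Reference an already-existing persistent volume claim by name
    # (hypothetical; in MLX the claim would have been created at launch time).
    data_volume = dsl.PipelineVolume(pvc="jfk-weather-pvc")

    dsl.ContainerOp(
        name="inspect-dataset",
        image="library/bash:4.4.23",
        command=["sh", "-c"],
        arguments=["ls -lh /mnt/data && head -n 5 /mnt/data/*.csv"],
        # Mount the dataset volume where this step expects to find the files.
        pvolumes={"/mnt/data": data_volume},
    )
```

The same pattern would let a distributed-training step or a notebook step read the dataset without downloading it again.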
Now, obviously you're not limited to these: when you're running your own instance, you can register your own assets and use them with Machine Learning eXchange. Though it comes with some pre-populated assets, the goal is for you to bring your own assets and register them. Now, let's talk about datasets, one of the most important of the three pillars we discussed: datasets, models, and pipelines. The machine learning revolution is fueled by data. Look at something like ImageNet: in 2009 there were 3 million images with approximately 5,000 classes; just three years later, in 2012, that had grown to a massive 14 million images with 22,000 classes. That essentially fueled the revolution in image classification and image recognition, with models coming out that started surpassing human-level capability on some of these narrow tasks. If you look at the graph here, at some point around 2015 models actually surpassed human performance in terms of ImageNet classification error. The point is that fueling the field with data of that quantity is what got us to that level of accuracy and to these sophisticated models. The same holds for the general progress of AI, even from IBM's perspective: in 2011 we had Jeopardy!, which was largely about structured data, question answering, or, in the case of Jeopardy!, answers and questions. But by 2019 we had Project Debater, which could work on unstructured data and debate with professional debaters using logical, rational arguments. To get that kind of advancement, you need a huge corpus of data. We have seen the power of AI applied to human language: if you look at speech recognition performance starting from the first DNNs back in 2012, then somewhere between 2016 and 2017 it surpassed human capability. Voice and conversation back and forth, which we provide, for example, in a product called Watson Assistant, and document understanding all started to surpass human-level capability with the explosion of data, creating a huge market. Similarly, code is the language of machines, and AI will help us master code as well. What we need are AI and machine learning models that bring the same revolution that happened with ImageNet and with speech to code. Essentially, AI for code needs its own ImageNet for breakthroughs: we need code language translation, we need to find duplicate or similar code, we need to find areas where code can be improved for performance or memory, and, the holy grail, we want to give a description and have the machine generate the code. And to get to all these sophisticated models, you need a huge corpus of data.
So with that, IBM back in May announced Project CodeNet, a very high quality code dataset for algorithmic innovation. It has around 14 million code samples, with around half a billion lines of code, across 4,000 coding problems and 55 programming languages. It is a huge effort and breakthrough, and it gives you the raw material to create sophisticated models on a diverse class of problems. CodeNet is actually the largest open source dataset available for AI for code, and it is polyglot, spanning many languages, as mentioned here. And by virtue of that, the content we have in Machine Learning eXchange is very high quality. CodeNet is an example: it is available through Machine Learning eXchange, you can download it and use it, and later we are going to see a quick demo involving CodeNet as well. You can see the description, look at the metadata, go through the lineage and traceability, and make sure the license is correct; all that information is available to you in this single place. There are associated notebooks and models we provide around CodeNet, and we will be expanding this ecosystem. All this was to highlight that the content we are putting in Machine Learning eXchange is of high quality, and we are going to keep working hard to make sure it stays that way. Now, the integrated technologies; I won't spend too much time on these. This is Datashim, which is used behind the scenes to download the dataset and create a PVC. It's a project built on the Kubernetes custom resource architecture, and it's part of LF AI & Data. KFServing is the engine your models are deployed on when you choose to deploy a model; you can deploy your models natively on Kubernetes directly, but if you need a more sophisticated deployment technology, this comes pre-built and pre-integrated with Machine Learning eXchange. And pipelines, which are at the heart of it: everything we do with any asset in Machine Learning eXchange is executed through pipelines, and that engine is powered by Kubeflow Pipelines on Tekton. I'll spare you some of the architectural details, but as you can see, this is a Kubernetes-native pipeline engine built using the Kubernetes custom resource architecture. If you're aware of the Kubeflow Pipelines ecosystem, it's very popular with data scientists because it provides a Python DSL for programming pipelines in Python. We use it heavily within Machine Learning eXchange to power all the actions on every single asset: whenever we deploy a model, download a dataset to create a PVC, or launch a notebook, everything is triggered under the covers using this pipeline engine.
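To sketch what Datashim does under the covers: you create a Dataset custom resource pointing at object storage, and the Datashim operator materializes a PVC of the same name that workloads can mount. A minimal example with the official Kubernetes Python client follows; the bucket, endpoint, and credentials are invented placeholders, and the API group and version are my reading of the Datashim docs, so treat them as assumptions.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

dataset = {
    "apiVersion": "com.ie.ibm.hpsys/v1alpha1",
    "kind": "Dataset",
    "metadata": {"name": "jfk-weather"},
    "spec": {
        "local": {
            "type": "COS",  # S3-compatible object storage
            "accessKeyID": "<access-key>",
            "secretAccessKey": "<secret-key>",
            "endpoint": "https://s3.example.com",
            "bucket": "jfk-weather-data",
        }
    },
}

# Creating the custom resource; Datashim's operator then provisions a PVC
# named "jfk-weather" that pipeline steps and notebooks can mount.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="com.ie.ibm.hpsys",
    version="v1alpha1",
    namespace="default",
    plural="datasets",
    body=dataset,
)
```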
Since we are on the topic of pipelines: we have an enterprise product called Watson Studio Pipelines, which is built on that same engine, where you get a lot of pre-built nodes that can run more sophisticated Watson capabilities like Watson AutoAI and Data Refinery flows to handle your ETL needs. You can run web service and online deployments, batch deployments, and it gives you a very solid drag-and-drop interface for composing different components into pipelines. So definitely check it out if you're interested in pipelines and the pipelines ecosystem. Okay, with that, I will pass it on to Christian to take you through some of the capabilities of Machine Learning eXchange. Over to you, Christian. Thank you, Animesh, that was great. Let me share my screen. Can you see my screen, Animesh? Yes, I can. Fantastic. So what we see here is the user interface of the Machine Learning eXchange, like you've seen in the animated GIFs Animesh showed in his presentation. On the left-hand side is our navigation, where you can navigate through the individual asset types: as you've been told, we have datasets, models, pipelines, components, and notebooks. For the demo I will start with the models, because the models are really at the heart of anything in machine learning. And we have the benefit of a sister project here at the IBM Center for Open Source Data and AI Technologies called the Model Asset eXchange, and we're lucky enough to feature those models here in MLX. I will show one of these models. They have been containerized, they have been pre-trained, and they can be served out of the box. If you click on a model in MLX, you'll see a description that tells you what this model is about, what framework it was trained on, and what the license is, and you can navigate to the website where there's more information, in this case the Model Asset eXchange website. Each of our assets is based on ML metadata: when you upload a new asset, that metadata tells us what to present in the catalog and also what can be done with that particular asset. For this model, you can see it can be served either on Kubernetes natively or with KServe, and I can also show the images that the MAX team has built for this model. If you're interested, we also generate the pipeline code; that's the code that will be used to launch the pipeline that, in this case, serves the model. So let me go to the launch dialog. It will ask me a few questions; in this case I want to serve this model on Kubernetes. I can give it a run name or just stick with the default. And now you see the Kubeflow Pipelines graph user interface, where you can see the execution graph of that pipeline. This particular pipeline starts with a configuration step where the model configuration is generated, and once that step is done, the model is served. If we are short on patience, we can go to a previous run that I did just before this demo to save some time; the pipeline takes about a minute or two to complete. Once it's complete, we see the model has been deployed, and in the input/output parameters you can see that the model has been served on an IP address from the cluster, which is accessible, and you can go to that model.
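Once a MAX-style model is served, it exposes a small REST API, documented by the Swagger UI described next. Here is a hedged client-side sketch of calling it; the host, port, and the /model/predict path follow the usual MAX convention as I understand it, and the cluster address is a placeholder.

```python
import requests

# Placeholder address; substitute the service IP and port reported
# in the pipeline's input/output parameters.
MODEL_URL = "http://<cluster-ip>:5000/model/predict"

# Send an image to the image caption generator as a multipart upload.
with open("profile.jpg", "rb") as f:
    response = requests.post(MODEL_URL, files={"image": f})
response.raise_for_status()

# MAX models typically return JSON along the lines of:
# {"status": "ok", "predictions": [{"caption": "a man in a suit and tie ..."}]}
print(response.json())
```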
And the model has a Swagger-generated UI that lets you get some metadata on the model and also run inferencing on pictures. This particular model is an image caption generator, so let's try it out on an actual image; I took the liberty of getting your profile picture, Animesh, and we'll see what this model does with it. Let's run it. Will it say it's a Hollywood movie star? Almost: it says it's a man in a suit and tie with a smile. Okay, at least it finds me smiling, so that's good. All right. So this was one example of what you can do with models that have been pre-trained and containerized. As the second part of our demo, we will show one of our pipelines. A very, very important aspect of machine learning, of course, is not only that your models are accurate, but also that they are fair. So we have this Trusted AI pipeline, and it makes use of two other related projects that we've been helping out on: IBM Research has put out two projects called the Adversarial Robustness Toolbox and AI Fairness 360, and both of those capabilities are part of this pipeline. On the right side here you can see the pipeline compiled into a format that is meant more for machine readability than for humans. We also have the details for this pipeline, and we are able to launch it. In this case we have preset most of the parameters, and you can run with the defaults. Once you hit submit, the three steps of the pipeline are executed, starting with training. This was my previous run: it shows the model being trained on a sample dataset of 20,000 faces of all kinds of ethnicities, genders, and age groups. After the model has been trained, the pipeline performs a fairness check and an adversarial robustness evaluation, and it shows the metrics generated in each evaluation step. Here you can see that the model accuracy on the test data is about 86%, which is pretty good. On the adversarial samples we fed to the model, it's not so great, partly because we only did about five epochs of training, so that particular model is not very robust against adversarial attacks. I think the metrics highlight very clearly that if you generate adversarial inputs and send them in, the model accuracy drops heavily, so it's definitely vulnerable to adversarial attacks, and that would necessitate the model being retrained and made more robust; the Adversarial Robustness Toolbox has tools for that as well. Now on the fairness side, we see a similar output with similar metrics. This is a gender classification model, and the classification accuracy here is about 86%. One interesting metric here is the disparate impact, which is 0.91, and if that is, I think, between roughly 0.8 and 1.25, it means the model is not particularly biased either way, which means this is a fair gender classification model; there's no bias either way in this case, if I understand it correctly. Yes, and if you use this toolkit, in addition to this there are over 70 metrics on which you can evaluate the fairness of your models and find out whether they are biased or not.
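For reference, here is a minimal sketch of how the disparate impact metric mentioned above can be computed with AI Fairness 360; the tiny dataframe and the choice of "gender" as the protected attribute are invented for illustration, not taken from the demo pipeline.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: one protected attribute and a binary outcome (1 = favorable).
df = pd.DataFrame({
    "gender": [0, 0, 0, 1, 1, 1],  # 0 = unprivileged group, 1 = privileged
    "label":  [1, 0, 1, 1, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["gender"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"gender": 0}],
    privileged_groups=[{"gender": 1}],
)

# Disparate impact = P(favorable | unprivileged) / P(favorable | privileged).
# Values between roughly 0.8 and 1.25 are commonly read as "not biased".
print(metric.disparate_impact())
```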
And if a model is found to be biased, or found to be vulnerable to adversarial attacks, both the Adversarial Robustness Toolbox and AI Fairness 360 provide mitigation algorithms; there are ten-plus algorithms in AI Fairness 360, for example, to mitigate bias. Yeah. Thanks. As the last part of our demo, we can showcase Project CodeNet and the notebooks we associated with it. From Project CodeNet, which is extremely large, as mentioned, we have two subsets: one subset for a masked language model and one for a language classifier. That language classifier classifies a code snippet; the output is whether or not that code snippet is written in Python, Java, C, C#, and so on. You see here some more description of the CodeNet dataset, similar to what Animesh showed earlier in the presentation, and links to all of those datasets; it shows the license and the size of the dataset. As with all of our assets, we have a YAML file that describes the asset, so that MLX knows what to do with it and what to display. When you launch the dataset, we use Datashim under the covers: we download the dataset and mount it to a persistent volume, which can then be used later by other asset types, for example the notebook I will show in just a moment. So when we hit submit, there is a two-step pipeline: one of the pipeline steps takes the metadata we use in MLX and converts it to the metadata we need for Datashim, and once that dataset metadata has been generated, Datashim is used to mount it. Earlier I ran this, and after the metadata was generated, the persistent volume was created, and you can see it has an identifier. You can either copy that and use it, or I can go to the related assets of the dataset. The notebook we have for this one is called the language classification notebook. Like all of our asset types, this notebook can be previewed here, with the Jupyter notebook viewer that we integrated. This is a read-only view, but you can see all of the code cells in the notebook and some outputs. At the end, the notebook takes a bunch of code snippets and runs them through classification with that particular model. Let's launch this notebook, plug in the dataset we created earlier, mounted to a local folder, and submit; this starts our pipeline. Once that pipeline run is picked up, a pod is created for it on the Kubernetes cluster, and once the associated image has been pulled and the run has been kicked off, logs should be streaming in here. That will take a while, and because it takes a while, I have the output of a notebook run that I generated just before this demo. The pipeline generates an HTML output: once the notebook has run, all of the cells and their outputs can be seen in that rendered notebook. At the very end of the notebook, you can typically see some of the metrics, the training and validation accuracy. For this test, there were 100 code samples, with 10 samples for each of the languages here, and you can see that the test was fairly successful.
For each of the languages, all 10 samples were classified correctly. The notebook run that went into this training, however, actually had a slightly lower classification accuracy, which is interesting. If you go to the very bottom of that notebook, before we ran this, you could see that the previously chosen code samples had only nine correctly identified C samples and nine correctly identified C# samples. That was my demo, Animesh; I hand the screen back to you. Thanks, thanks a lot, Christian, for walking us through the capabilities of Machine Learning eXchange. We hope you all enjoyed learning about Machine Learning eXchange and the capabilities it has to offer. Definitely reach out to us if you have more questions. If you want to try it out, there is a read-only version hosted at ml-exchange.org, and if you want to give it a full try, go to github.com/machine-learning-exchange; that's the GitHub organization. But if you have to remember just one link, it's ml-exchange.org; hit that and it will take you everywhere else. I believe you have a slide for that, Christian; can you share your last slide? Okay, let's bring it up. Cool, here it is. So these are the links you need to remember. Remember, ml-exchange.org is a read-only version, so a lot of the capabilities you saw in Christian's demonstration are not available there; you can definitely browse the assets and go through the descriptions, but if you need to do more, go to the GitHub repository and get your own version running. Better still, get this project going on your laptop using Docker, or, if you have a Kubernetes cluster, that's best, because that gives you the full set of capabilities and also allows you to register and use your own assets; we strongly encourage that. There is one more thing I want to highlight before we go: there are some awesome talks happening from the IBM CODAIT team throughout Open Source Summit North America, talks which go deeper into, for example, the Kubeflow Pipelines on Tekton project, how to build a feature store using Feast and KFServing, and how to defend against adversarial model attacks using Kubeflow and the Adversarial Robustness Toolbox, which we just talked about. So please join our other talks, give things a try, and provide us feedback; we'll be glad to see you as a contributor and a user of Machine Learning eXchange. Thank you.