Thanks. Hello, everyone — good morning, good afternoon, good evening, depending on where you are in the world. Thanks for joining today's CNCF live webinar, "Toward Hybrid Cloud Serverless Transparency with the Lithops Framework." I'm Christy Tan and I'll be moderating today's webinar. We'd like to welcome our presenter, Gil Vernik, a cloud and data expert with IBM. A few housekeeping items before we get started. As an attendee you are not able to talk during the webinar, but there is a Q&A box in the platform where you can submit your questions; you can also submit them through the chat. Please drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would violate that code of conduct — basically, please be respectful of your fellow participants and presenters. Please also note that the recording and the slides for this webinar will be posted later today to the CNCF online programs page at community.cncf.io under online programs. With that, I'll hand it over to Gil to kick off today's presentation. Take it away. Thank you very much. My name is Gil Vernik and I work at IBM Research. I'm active in open source and contribute code, and my recent focus is hybrid clouds, big data execution engines, and serverless computing. Part of the work I present today was developed as part of the CloudButton project, a very interesting project with many participants, as you can see in the picture. All the outcomes of the project — the documentation and the things we developed there — are open, and you're welcome to see what we built and which technologies we use. And obviously, everything I present today is open source: the videos are available, and you can try the code after the talk or any time you want.
A special thanks to Josep Sampé and Pedro García-López from Universitat Rovira i Virgili (URV), who contribute a great deal to the project, in particular to its open source side. So the title, you know, has a lot of fancy words: we have hybrid clouds, we have serverless transparency, and this mysterious Lithops framework. But this is what I will cover today — what the problem is and what solution we propose — and hopefully by the end everything in the title will be clear. Now, I know many of you obviously understand the serverless paradigm, but the term is not always well defined, so just to be on the same page: it obviously doesn't mean there are no servers. There are servers somewhere, but as users we don't need to worry about them, because with the serverless paradigm we just deliver our code to the serverless platform, to the serverless provider, and the backend engine takes care of provisioning and executing it. That basically means we don't need to think about servers anymore — hence the name "serverless." Now, if you look at the landscape today — this is from the CNCF web page — there are a lot of serverless platforms, some hosted and some based on open source. It's very nice: you have many options to experiment with and use serverless computing today, whether you install it on premises with open source or go to a cloud provider who offers it, and I assume this will only keep growing. Now, serverless user experience is the key, right?
Because the more we focus on the business logic as users — and the less on how to deploy, how to execute, how to manage the execution — the better our serverless experience is. Users have different options for delivering their software packages to the serverless backend for execution. They might provide their code via various APIs, upload zip files with their code and dependencies, or wrap their code into Docker images, upload those to the serverless provider, and execute there. So different options are available. But here is what I want to start with: there are some challenges, and some of them are less obvious and may require a bit more work to make them function. First of all, Docker images. You have your code, your software, your dependencies, so obviously you take the dependencies and packages and pack them into a Docker image. But then you also have your own code, and sometimes you don't want to pack your code into the Docker image, because it might contain sensitive information, closed-source code, or other details you don't want to expose. Then the question arises: how do you take this Docker image with the dependencies your code requires, but inject your code at runtime so it isn't exposed? Different solutions exist, of course. But if you want to take this Docker image and move it across different cloud providers — maybe run it in your own Kubernetes, then on some public cloud provider's Kubernetes API — it's not really clear where to keep this Docker image with your code. You might use a private Docker registry. So there are different options here, and it's not well defined.
I mean, as a user you still need to figure out how to move this Docker image with your code across different cloud providers if you want to burst your workloads. So this is one of the challenges you need to address. Now, there is also a gap between business logic and boilerplate code. Suppose you want to run a machine learning algorithm on the colors extracted from images. You write a single function and you test that it works: the function takes a single image and extracts the colors from that image. Now you want to run this function on millions of images, located in different storage systems — maybe some in cloud object storage, maybe some in your local object storage, maybe some in Ceph. You want to extract all the colors and inject them into a machine learning framework for further processing. But here come a couple of challenges. First of all, where do you run your code? You don't want to download all those images and process them in your own Kubernetes cluster; maybe you want to deploy your code and run it as close as possible to the data, perhaps next to object storage in some public cloud. So where do you run your code — local, cloud, hybrid? How do you collect results, and how do you move as little data as possible? And then you also need to write a lot of boilerplate code — by boilerplate I mean additional code not really needed by your business logic, because all you focus on is extracting colors and running machine learning. You need much more code here: how to access object storage, how to list, how to download, how to read those data objects. So there is a lot of additional code, and there's a gap you need to fill between your business logic and making it happen.
And with these millions of images there is also a lot of complexity, right? How do you partition those millions of images, how do you list them, how much memory do you need to process them — maybe one image needs more memory, another image needs less — where do you deploy your code, where do you run it? There are a lot of things to address when you want to scale your code massively, and this is another challenge. Now, another challenge is the APIs. Today you have the Apache OpenWhisk API, then you have the Kubernetes API, and then you have the Knative API. That's three already, but there are more, and you need different CLI tools, you need to learn the semantics of each serverless backend and how to use it. And every cloud vendor has its own API if you want to go to a public cloud. So there are a lot of different APIs, and you, as a user who just wrote some code and now wants to run it against Apache OpenWhisk, or maybe over Kubernetes, need to implement many things and know all those APIs, and perhaps adapt your code. So that adds another challenge: you basically need to be familiar with many APIs. Now, another challenge is the containerized model. It's not just a matter of taking your code and dependencies, putting them into a Docker image, and deploying it over OpenShift or Kubernetes. You also need to take care to minimize the impact on your application, because you don't want to rewrite it. Maybe you already have an application, and now you want to scale some parts of it, and you don't want to rewrite everything from scratch.
So it brings the challenge of getting your application to execute in containers, and then scaling that code: how do you decide the right parallelism, how many containers you need, how you process datasets, how you partition input datasets, how you generate outputs, whether you need some cache. You need to address all those aspects when you want to take your application and run it over Kubernetes at massive scale. And again, the key point is that you don't want to rewrite your application — you want to scale what you already have, not turn it into something else. So those are the challenges, and from here I'll take you to the solution — how we address those challenges — and I will show you demos and examples, of course. The framework we're talking about, the Lithops framework, is an open source framework under the Apache license. It's a Python framework designed to scale code and applications massively against any serverless backend, and the serverless platform can run on hybrid cloud — private or public, your private Kubernetes, or a public cloud provider. You can see here the backends we have: some from public cloud providers — IBM Cloud, Amazon, Azure, Google Cloud, Alibaba Cloud — and here Red Hat and Kubernetes. This list is growing; we'll be adding more. It is mainly maintained by us at IBM and by Universitat Rovira i Virgili in Tarragona. The goal of this project is to open serverless to more use cases and to make the move to serverless easy, so you can take your application to all kinds of serverless backends without really needing to learn new APIs or new techniques. So, the general idea — and we will see a demo in a minute.
So the general idea is this: you have your function and you have some input dataset. You import Lithops, you use the Lithops function executor, and you tell it: please run my function against this input data. From that moment Lithops serializes the function, deploys it to the serverless backend, and executes it there. It also knows how to partition the dataset if needed, and then you just get your results. That's the user experience on the left, and all the complexity — how you move to the cloud, how you run there — is handled by Lithops. Now, it's designed to scale Python applications, as I said, but it's not limited to Python: you can basically run any native code as well. We have demonstrated all kinds of use cases with GROMACS, ProtoMol, Dlib, GDAL, FFmpeg, and so on, because anything you can launch from Python you can scale the same way. Lithops exposes two APIs: one is the multiprocessing API of Python, the other is the futures API of Python. This is what users see: you have your function, you use the Lithops function executor, and you say, please run my function hello with the input parameter "World", and you get your result. From that moment Lithops takes the function and the input data — which here is one word — deploys everything against the serverless backend configured in the Lithops configuration, runs it there, and brings your results back. And you also have the multiprocessing API of Python, which is also popular. So now I'll show you a demo I recorded earlier. There will be four demos in my presentation, and this is the first one. I will show it in VLC because it looks a little better. I have a notebook on my laptop, and I want to run a series of Monte Carlo calculations to estimate the number pi.
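The futures-style call described above can be sketched roughly as follows. This is a minimal sketch, assuming `pip install lithops` and a configured backend; the `hello` function and the deferred import are my own illustration, with API names as documented by Lithops:

```python
def hello(name):
    # Plain Python business logic; Lithops serializes and ships this function.
    return f"Hello {name}!"

def run_on_serverless():
    # Deferred import so the business logic above stays usable without Lithops.
    import lithops  # requires `pip install lithops` and a configured backend

    fexec = lithops.FunctionExecutor()  # backend and storage come from the config
    fexec.call_async(hello, "World")    # single asynchronous invocation
    return fexec.get_result()           # blocks until the result comes back

if __name__ == "__main__":
    print(run_on_serverless())
```

The same `hello` runs unchanged locally or in the cloud; only the executor configuration decides where.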
So it's a regular notebook — I just load my dependencies, and this is the business logic code I wrote. Let me stop it for a second. I use a well-known technique to estimate the number pi; this is the code I have, and I want to run a very large number of calculations. First I want to take my code and run it on my laptop. For a local run I use Lithops with the localhost executor, which now runs it here over 100 threads on my machine. And it will take time. But even here you see how easily I can take the code and scale it on my machine; in this example my machine is my laptop, but I could also run it over some VM with much more resources. Here it will obviously be slow, because it's 100 threads on a laptop, so it takes time. I'll fast-forward a little bit for you — you'll see it completes at some point. Good. It's slow. Now I want to take the same code, exactly the same code, in exactly the same notebook, and deploy it against a serverless platform that exposes the Apache OpenWhisk API. I could have OpenWhisk installed somewhere in my organization; in this example I use IBM Cloud Functions, which exposes the Apache OpenWhisk API — so it's the same API — and I want to deploy and run this code there. The only change I need in my code is to set the backend here to IBM Cloud Functions. Now, there is a hidden configuration file that you don't see, containing the API keys to access my account, but that's it — from the code's point of view there is no other change. And again, as I said, it uses the Lithops function executor to deploy my code, and from that moment the code is serialized and deployed as 100 parallel invocations in IBM Cloud Functions — 100 tasks that start running right now, and they progress and finish much faster. And that's it.
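The Monte Carlo demo boils down to an embarrassingly parallel estimator. Here is a sketch of the per-task logic and the combine step in plain Python — my own illustration of the technique, with Lithops' role shown only in the comments:

```python
import random

def estimate_pi(n_samples):
    # One parallel task: count random points inside the unit quarter-circle.
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return inside

def combine(counts, n_samples):
    # Combine the hit counts from all tasks into a single pi estimate.
    total_points = len(counts) * n_samples
    return 4.0 * sum(counts) / total_points

# With Lithops this becomes, roughly:
#   fexec.map(estimate_pi, [n_samples] * 100)   # 100 parallel invocations
#   pi = combine(fexec.get_result(), n_samples)
```

Because each task is independent and returns only an integer, the same function scales from 100 laptop threads to 100 cloud invocations without change.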
Now I get my results back on my laptop from the execution in Cloud Functions, and I have my estimate of pi. Next I want to take the same code and run it against a Kubernetes API — again, it could be Kubernetes in my own organization; in this example I use IBM Code Engine, which exposes a Kubernetes API with job descriptors and all that. And again, the only change I have to make is to set the backend to Code Engine, and then my code executes there. That's how it works. So this is the first video: it runs, it sets up whatever it needs there, and I get my results back. This is how Lithops works. Now let's get back to my presentation — and this is the link to the video on YouTube that you can watch later. This is the user experience I showed you before. The user doesn't see anything else, and there are no hidden tricks: all of this is done in the backend by the code we developed in the Lithops framework. The user doesn't see the complexity of taking their code and deploying it against the OpenWhisk API, deploying it against the Kubernetes API, or maybe running it locally. And these are the backends and the storage systems we support — you see we have a long list of platforms, both hosted and open source, and as I said, this list is growing. Now, more on Lithops. It's a truly serverless framework: it scales from zero to many. I don't need to keep any cluster in my cloud, I don't need to keep anything ready in Kubernetes or OpenShift. I just take Lithops, submit the job, and it starts deploying. There is absolutely nothing waiting for me in those backends. And it's lightweight: it can deploy with basically a single Lithops API call, and I can use any compute backend. It's also data driven.
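The backend switch in the demo is driven entirely by the Lithops configuration file rather than by code changes. A rough sketch of such a config follows — the section and field names are illustrative assumptions on my part, so check the Lithops documentation for the exact keys each backend expects:

```yaml
lithops:
    backend: code_engine     # or: localhost, ibm_cf, ...
    storage: ibm_cos

ibm:
    iam_api_key: <YOUR_API_KEY>

code_engine:
    namespace: <PROJECT_NAMESPACE>
    region: us-south

ibm_cos:
    storage_bucket: <BUCKET_NAME>
    region: us-south
```

Changing `backend` from `localhost` to `code_engine` is the single-line switch shown in the video; the notebook code itself stays untouched.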
The framework is data driven in the sense that it has many components from big data processing tools: algorithms that know how to partition large datasets and how to process them, how to chunk them — how to chunk CSV files without breaking in the middle of a line, how to chunk other files — and how to process data from object storage. This is very important, because it enables you to process actual big data. In the Monte Carlo example you saw pure compute — there was no data in object storage — but we will see later how Lithops can also process massive datasets in object storage. So these are the data-driven flows with Lithops. When you deploy your Docker image, the Lithops framework is deployed as well, inside that container. This Lithops runtime, deployed to the serverless backend together with the user's code, is what makes it possible to hide all the complexity from the user: deciding the right scale, handling access to datasets, partitioning them, maybe using a cache, coordinating parallel invocations, chunking and grouping them, writing results back to some object storage, updating invocations or returning results to the user — all completely transparent, because you just get it from Lithops. So what is this framework good for? Obviously we don't want to just deploy Hello World — we want to do more interesting things. It's good for all kinds of data preprocessing, and data preprocessing is important when you use machine learning, deep learning, and other AI frameworks, because raw datasets usually need to be preprocessed in some way. And preprocessing is very important.
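To make the "chunk CSV files without breaking in the middle of a line" idea concrete, here is a small self-contained sketch of how a partitioner can align byte-range chunks to line boundaries. This is my own illustration of the technique, not Lithops' actual implementation:

```python
def chunk_offsets(data: bytes, chunk_size: int):
    """Split a text/CSV blob into byte ranges of roughly chunk_size bytes,
    extending each range to the next newline so no line is cut in half."""
    offsets, start = [], 0
    n = len(data)
    while start < n:
        end = min(start + chunk_size, n)
        if end < n:
            # Extend the chunk to the end of the current line.
            nl = data.find(b"\n", end)
            end = n if nl == -1 else nl + 1
        offsets.append((start, end))
        start = end
    return offsets
```

Each worker then reads only its `(start, end)` byte range from object storage (e.g. via an HTTP range request), so no single machine ever has to hold the whole file.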
If you manage to preprocess data efficiently, cost-effectively, and fast, you can feed the results into your machine learning frameworks and the rest of that ecosystem, and here Lithops is very good. You can obviously also use it for batch processing, all kinds of Monte Carlo simulations, and compute-driven workloads. We also support MapReduce, but the MapReduce is limited, because we don't yet support shuffle internally in Lithops — so it's many maps and one reduce. It's still good, because it covers many use cases, and this is where we stopped: we didn't want to make it a full MapReduce framework, because we didn't see a benefit in that. It's good for embarrassingly parallel workloads — problems you can chunk into separate tasks, each executing its part of the work; those workloads fit Lithops well. Tasks can also exchange information between themselves, but you don't see that here, and I can show it later. So let's see some demos and use cases — we're about 25 minutes into the talk, and I think this is a good point to start. Let's look at data processing. This is what I said before, but now I will show you a video and a demo of exactly what we're doing. As I said, the majority of machine learning and deep learning frameworks need data preprocessed before you can do something with it. If you want to run deep learning over images, in many cases you need to extract features first — colors, for example, just as one case. If you extract colors from an image, the image might be megabytes, while the colors are just an array consuming kilobytes. Extracting them is easy if you have 10 or 100 images: you can run it sequentially, maybe download them to your laptop and run it there.
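The many-maps-one-reduce model mentioned above looks roughly like this with the Lithops futures API. This is a hedged sketch: the bucket URL is a placeholder, and `obj` stands for the object handle that, per the Lithops docs, the partitioner injects for object storage inputs:

```python
def count_words(obj):
    # Map task: one call per object (or chunk); obj.data_stream streams its bytes.
    return len(obj.data_stream.read().split())

def total(results):
    # Single reduce task: combine all the map outputs.
    return sum(results)

def run():
    import lithops  # requires `pip install lithops` plus configured compute + storage

    fexec = lithops.FunctionExecutor()
    # 'cos://my-bucket/texts/' is a hypothetical bucket/prefix.
    fexec.map_reduce(count_words, 'cos://my-bucket/texts/', total)
    return fexec.get_result()
```

Because there is only one reduce, anything needing a full shuffle (e.g. group-by across keys) is out of scope, exactly as described above.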
But if you have millions of images and want to extract colors, it becomes a challenge: how do you do it, where do you keep those colors, and how do you feed them back into your machine learning framework? There are all kinds of aspects here. Face alignment is another example. It's not about face recognition, but you sometimes need to align faces in images and remove the noise. You see an image here — a picture of a face with some noise in the background; apply face alignment, and you get this cleaned-up image. Again, for one image it's fine, but if you have tens or hundreds of thousands of images, it's the same challenge again. With face alignment in particular, we did some experiments. This is your business logic on the left. We used 1,000 images in object storage, and this is the boilerplate code you need in order to list those images, chunk them, apply your code to them, get the results, and maybe store them back — about 100 lines of boilerplate that you don't really need. You also need to be familiar with the object storage API: if it's the S3 API, you write code that works with S3; if it's OpenStack Swift, you need some other tool to access it. You need to know all this and how to write it. And if there are more than 1,000 images you need to paginate, because a single listing response will not return them all. So it's complicated. With Lithops, you take the same business logic and deploy it with three lines of boilerplate, because you just tell Lithops: this is my code, and this is the dataset in object storage. Then just run it.
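The "three lines of boilerplate" pattern can be sketched as follows. The bucket prefix and the body of `process_image` are placeholders of mine; `obj` is the per-object handle Lithops passes in when the input is an object storage location:

```python
def process_image(obj):
    # Called once per object in the bucket; real code would extract colors here.
    data = obj.data_stream.read()
    return (obj.key, len(data))   # placeholder result: object name and size

def run():
    import lithops  # requires `pip install lithops` and a configured storage backend

    fexec = lithops.FunctionExecutor()
    fexec.map(process_image, 'cos://my-bucket/images/')  # one task per object
    return fexec.get_result()
```

The three boilerplate lines are the import, the executor, and the `map` call; listing, pagination, chunking, and downloads are all handled by the partitioner.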
In the experiments we did, it took about 35 seconds, compared to roughly six minutes when you run those 1,000 images on your laptop. And if it's much more than that, you obviously can't run it on your laptop anymore, so the complexity only grows. The demo I want to show you here is exactly this color identification. The good part of this demo — and this is what I like — is that I didn't write the business logic code. I found a very nice blog post by Karan Bhanot on color identification in images, where he demonstrated how you can take an image, extract its colors, and then retrieve images based on the colors they contain. Now I'll show you how I can take this existing code, without modifying it, and execute it at massive scale against a serverless platform — in this example, a backend based on the Kubernetes API. We have images stored in object storage, and the user wants to retrieve all images containing a specific color: give me, please, all the images from object storage that contain the color blue. The original blog just showed how to do it, without scale. Again, this is a video, and I'm going back to VLC to show you the color extraction. This is the code from the blog — regular Python, again in a Python notebook. You see I have a local image on my laptop, and I can read it and display it. Now I write my business logic code, color identification, to check whether a particular color is present in an image. First I verify that I can extract colors from the image — the one on my laptop — and I see that I managed to extract colors from the image you just saw. Now I want to test that my code can detect a color in a single image.
In this example I check whether the image contains the color blue: if yes, it returns true, otherwise false. I tested it on my laptop and it works fine. Now, I have object storage — here I use IBM Cloud — and this is my bucket and these are the images in it. As I said before, I'm using a Kubernetes API, but I don't have one on my laptop: I'm using IBM Code Engine, which exposes that API. I want to say: give me all the images from this location that contain the color green, using the code we had before in the notebook. Let me get there. Yes. From that moment, Lithops applies its big data partitioner to the dataset in object storage: it inspects which images are there and what sizes they are, deploys your code against those images, and you get your results back. In this example I used 50 images, and you see it took seconds to deploy the code and get all the results back. Let me stop here for a second — this is the power of Lithops. It takes your code and deploys it; you don't need to do anything with the object storage API, you're not even aware of it. In this example there were 50 invocations because we have 50 images; if there were more images, there would be more invocations, and if there were millions of images, Lithops knows how to chunk them, so you would not end up with millions of invocations. Now I rerun my code, and this time I ask for all the images that contain the color blue. You see it completed in 27 seconds — this execution in the cloud, launched from my laptop. And now I draw all the images that contain blue. Again, it's exactly the same: I just change the argument here, the code is deployed and executed, and I get back all the images that contain blue.
Now, obviously, you can take this example further: the colors you get can be injected into a machine learning framework, as I said before, so you have a lot of freedom here. And this is exactly what happens behind the scenes. Lithops inspects the input dataset in object storage and generates a DAG — a directed acyclic graph — of the execution; in this example it maps a single task to process a single image. It serializes the user's code, the execution DAG, and various other metadata, and uploads them all to object storage (other storage can be used as well, of course). Then Lithops generates a ConfigMap and a job definition in Code Engine, based on the provided Docker image — or a default image if none is given — and deploys the workload to that serverless backend, mapping each task to a single node in the execution graph. So each task, when it starts running in the serverless backend, knows exactly which data it needs to process, where to report its results, and what to do. Once tasks complete, they write their statuses and results back to object storage, and Lithops reads all the results back from that shared storage and returns them to you. Now, the user experience here is the key. With Lithops, you have the experience on the right: you see none of this. Without Lithops, you would have a very different experience: writing all kinds of deployment definitions and job descriptors, figuring out how to pack, deploy, run, and scale your application, how to list objects — and you work very hard. Then one day you want to run it against the Apache OpenWhisk API, and you start working hard again, because you need to learn that API too. With Lithops, you never see any of it.
Another interesting example I want to show you is spatial metabolomics. It's a real example from a different project we work with — also an open source project — that uses Lithops. I'm not an expert in this field, but it's very exciting. These researchers know how to detect all kinds of anomalies in cells, and from that they can figure out whether a medical image contains anomalies such as cancer or other issues. That's the metabolomics side; if you translate it into computer science, it's a classical big data problem. Why? Because their process generates a lot of data — a lot. For every pixel in the image they generate various measurements, and then they scan for molecules across those images, so the volume of data explodes from the original medical image, and all of it needs to be processed. The project is called METASPACE, and there is a link to it. It works like this: you start with a medical dataset the user uploads — one gigabyte, up to a terabyte — then the user chooses some molecular databases, and the METASPACE algorithms and all their proprietary logic start to run and analyze this medical dataset. Eventually, at the end of the process, small images are generated showing regions of the huge original image where the anomalies were detected. Now, why is this so interesting from the big data point of view? Because only at runtime, when the workload has already started, do you learn how much compute you actually need.
So if you take this approach and run it over a cluster, you create some cluster of machines and run on it — and only at runtime do you discover whether that cluster was too small or too big. It's not enough to say one gigabyte means so many machines and one terabyte means so many more, because what matters is the data generated during the run. Lithops addresses this big data challenge because it doesn't need to provision anything in advance: if a step generates a request for more invocations, Lithops deploys more invocations; if it needs fewer, you have fewer — and you pay exactly for what you need. Here's the example. I'll show it on YouTube, because it doesn't play here in PowerPoint. I decided to keep this example as is — by which I mean it runs for about 10 minutes. There are no tricks to make it faster, and 10 minutes is already very good, because it processes real datasets and deploys a lot of workloads. Again, the user isn't aware of the complexity of moving to the cloud and running there; you focus only on the business logic. The business logic here is the algorithms from EMBL that do this spatial metabolomics detection, and all the data is stored in public object storage. Now the workloads start to run. You won't see much here — just the different steps that Lithops executes, one by one — and for each step Lithops decides on a different number of resources. And here is the most interesting part: if you jump to the end, you'll see a summary of what actually happened in the background once the job completed. And here I'm going to stop. There were about 16 steps deployed. Oh, yes.
So there were about 16 steps that Lithops used for this workload. This step used 256 invocations, this step used 32 invocations, this step used 315 invocations with two gigabytes of memory, and so on. For every step, Lithops will generate as many invocations, with as much memory, as needed, and then you only pay for what you actually use. Now, if you use this notebook with bigger datasets, you will see thousands of invocations here, and perhaps more memory. So all of this happens dynamically, and I like it very much. As I said, it's an open source project by those guys, and you will see the link to them after the talk. At the end of this process, you will see the images that are generated and printed. And by the way, this notebook on the website is self-contained: you can just go to METASPACE, take this notebook and other notebooks, and run them by yourself. It's very cool. Good. Another example I can show you is Monte Carlo simulations. Now, Monte Carlo simulations are not data-driven the way this previous example is data-driven, where you have datasets in object storage and start from them; Monte Carlo usually has less data. This example shows how you can do stock prediction with Lithops. And again, it will be very similar to what you saw before: you have your business code, your business logic, and now you want to run it. Here you want to run many simulations; with Monte Carlo, these kinds of simulations, you need to run many, many predictions to get meaningful results. Again, it's not really about stock prediction; it's about the way you can run Monte Carlo simulations with Lithops. I never saw someone get rich because he ran this notebook, but it's a very good example of how you can get massive compute.
And again, you can run the simulations with 500 forecasts and five invocations, very small calculations. Now let's run it on my local machine. I took a very small number of simulations here because I just want to demonstrate it to you. So if I run it on my local machine, I use five threads, and you will see that I'm going to get results now. It takes time, even with only five invocations, because it's a bit compute-intensive; with five invocations it takes time to run on your local laptop. And with this very small number of forecasts, you will see the results are very, very unclear when it prints them, because running only 500 forecasts is not enough. But now let me do the trick: I want to run many more invocations, about 150 concurrent invocations. That will be many more simulations, and I can project many more forecasts. In this example I'm using the Kubernetes API in the backend, and now I'm going to take this exact code and deploy it against IBM Cloud Code Engine. It's deployed there; this time it runs many more invocations, and I will get my results very soon. And yeah, as I said, it's the Kubernetes API, so there are 151 invocations in the cloud, and they're compute-intensive. The results are starting to come in, and we will see how much time it takes; I think it takes somewhere between one and two minutes to deploy. If you have enough resources in your account, you can obviously deploy not just 150 invocations but also 1,000 invocations or many more, and you will get more accurate results. Yeah. So we see the simulation completed in 103 seconds. And if you look now at the histogram, you see much more accurate values. And this is where the power comes in.
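The Monte Carlo pattern from this demo can be sketched as follows. This is an illustrative stand-in, not the webinar's notebook: the parameter values and function names are made up, and a thread pool plays the role of the five local threads. The commented-out lines show, under the same assumptions as before, how the real Lithops calls (`FunctionExecutor.map` / `get_result`) would fan the same batches out to 150-plus concurrent serverless invocations instead.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def forecast_batch(args):
    # One invocation's worth of work: run `n_forecasts` geometric-Brownian-
    # motion walks of a stock price and return the final prices.
    # All numeric parameters here are illustrative, not from the notebook.
    seed, n_forecasts = args
    rng = random.Random(seed)             # per-worker seed -> reproducible
    s0, mu, sigma, steps = 100.0, 0.0002, 0.01, 250
    finals = []
    for _ in range(n_forecasts):
        price = s0
        for _ in range(steps):
            price *= 1.0 + mu + sigma * rng.gauss(0.0, 1.0)
        finals.append(price)
    return finals

# Local stand-in with 5 "invocations"; with Lithops this would roughly be
#   fexec = lithops.FunctionExecutor()   # e.g. a Code Engine backend
#   fexec.map(forecast_batch, tasks)
#   batches = fexec.get_result()
# and scaling up means only making `tasks` longer.
tasks = [(seed, 100) for seed in range(5)]
with ThreadPoolExecutor(max_workers=5) as pool:
    batches = list(pool.map(forecast_batch, tasks))
all_finals = [p for batch in batches for p in batch]
print(len(all_finals))  # 500 simulated final prices to histogram
```

Because each batch is independent and seeded separately, the accuracy of the final histogram improves simply by adding more tasks, which is exactly the knob Gil turns when moving from 5 local threads to 151 cloud invocations.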
You can deploy your code at this massive scale. I will stop here. I have another example, but I think I made the point, and I'm going back to the summary. So we saw some examples, and I hope I managed to explain that there are real challenges in how you move from your business logic, from your existing application, to serverless. On the other hand, serverless is a great, attractive platform. Whether you run it in your organization's hybrid setup or in public clouds, or bring them together, it's a perfect platform to run all kinds of workloads. So Lithops is a framework designed to fill this gap: how you get to the cloud, the push-to-the-cloud experience. For demos and use cases, if you go to the Lithops project page, you will see many presentations, research papers that were published, and all kinds of other demos. It's an active project, and you're welcome, of course, to contribute, comment, or raise any issues you want to ask about. It doesn't need to stop after this talk; just go to the project, and you can always find me there. And we have about 15 minutes for questions. So, let's see. Awesome. Thank you so much, Gil, and thank you for all of the tutorials. It was really cool to see that. It doesn't look like we have any questions right now, but I know you just ended your presentation. So folks, if you have a question for Gil while he's here, feel free to submit it through the chat. We'll just give a few seconds here, or you can use the Q&A tab in the platform. Just give folks a few seconds and see if anyone has a question. Okay. Well, I guess that means you solved it all, Gil. Perfect presentation today. Just a quick note, folks: Gil's email is here on the screen, and it'll also be in the slides. So if you do have questions, we encourage you to reach out to him. But yeah, that's going to do it for us today. Thank you all for attending, and we look forward to seeing you at a future CNCF webinar.
Have a great day and stay safe, everyone. Bye. Bye.