Good morning, good afternoon, good evening, and welcome to a special edition of the Data Services Office Hour. I'm Chris Short, host with the most, live streaming for Red Hat, and I'm joined by one of my favorite Red Hatters and an Intel person. Thank you, Ryan and Audrey, for coming on today. Audrey, do you want to tee up what we're talking about today?

Sure. First, a round of introductions; it's early and I forget stuff too. It is early on the West Coast here, 6 a.m., and I need coffee. Today we're looking at how we can tackle AI/ML workloads using Red Hat OpenShift, the platform, with OpenVINO, which is a fabulous Intel product. Ryan and I actually spoke about this in a session earlier this week, and I thought, well, we need to spread the love; we need to tell everybody about OpenVINO and Red Hat OpenShift.

Awesome, sounds great. Let's let Ryan introduce himself.

Okay. I'm Ryan Loney, a product manager at Intel for the OpenVINO toolkit, which is a set of libraries and tools used for optimizing deep learning inference. We've done a lot of work to integrate with Red Hat so that it's easier to deploy, optimize, and quantize these AI models so they can be deployed on OpenShift and on Red Hat Enterprise Linux. We're going to talk a little bit about that today.

Awesome.
Yeah, so one of the things we've found to be pretty prevalent in industry right now, given the need for an open hybrid cloud and AI platform, is that around 69% of enterprises today use a mix of open source and cloud-based software to power their AI initiatives, which is fantastic. That's really the reason Ryan and I are here today: we want to talk about these two products that Red Hat and Intel have created. So cue the intro for Red Hat OpenShift Data Science, and cue the intro for Intel OpenVINO. Since we have so many enterprises using open source and cloud-based software to power their AI initiatives, one thing that's also very interesting is that, despite the large number of vendors, cloud platforms, and architectures to choose from, most technology leaders are using open source tools, which is fabulous. And 78% of the initiatives they've actually created are deployed on hybrid cloud infrastructure, which makes this a fabulous opportunity to get our products out there.

That's awesome. So, the partnership with Intel: how did that even start? How did that get going?
Wow, that's a good one. Ryan, go for it.

For me, it started sometime last year, when my team building OpenVINO began communicating more directly with Red Hat. I remember it must have been January of this year: one of the solutions architects, Kyle Bader, and I were on a call. I'd never met anyone at Red Hat before; I'd only talked to the team inside Intel that worked with Red Hat, and it was sometimes like a game of telephone: "Okay, Red Hat is doing this, they're going to launch a new data science program." Then I actually got to talk to a solutions architect, and he explained it so well. He said, "This is what we're trying to do. It's going to leverage open source tools under the hood: Kubeflow, Open Data Hub." I had never heard of Open Data Hub before, and after he told me all this, I got really excited and said, "This is a great idea, you should definitely do it. A lot of cloud providers have these vendor-lock-in tools you can use on their service, but I haven't seen this kind of managed MLOps, Kubeflow-style Open Data Hub offered by anyone else, and I think it'd be great if Red Hat did it." And he said, "Great. We need you to do something too. We see that you have a Helm chart and some Kubernetes integration for OpenVINO. Can you turn that into an operator for OpenShift, get it certified, and certify some of your containers?"
They needed them to run on UBI, and at that point we had just started to enable and validate our software on Red Hat Enterprise Linux. We had previously supported CentOS and had customers working with other flavors of Linux, so it wasn't too hard to move over. But that's really when it kicked into high gear: we took that Helm chart, turned it into an OpenShift operator, and went through the certification. The partnership has only gotten stronger since then.

That's awesome, and that's like most partnerships at Red Hat, right? You work a couple of bits here, a couple of bits there, and usually it turns out awesome. It's great to hear. So, do we want to talk about OpenShift and OpenVINO together? I like how they both stay capital-O "Open" in the names. That's pretty cool.

Yeah, I can start on that. OpenVINO is one of the managed services available on RHODS, and what's really cool about it is that when you choose it in our Red Hat OpenShift Data Science platform and spin up a Jupyter notebook, it comes prepared with all these wonderful tutorials. For one, I love that, because I get to go through the tutorials to see how it works with our Red Hat OpenShift Data Science product, and I can go into detail on the inferencing and the quantization. But I'll let Ryan describe the process they went through to get OpenVINO ready for Red Hat OpenShift Data Science.

Would it help if, without diving right into a demo, I share my screen and show the OpenShift console?

Yes, let's do that. That would be fantastic.

Cool, so I'm diving right into our RHODS cluster; let me know if you can see it.

Yes. You could increase the font size, just like two ticks maybe. All right, looks good, I can read it.
Folks, if you can't read it, let us know; we can always make it bigger. So please continue. Thank you.

Cool. I just went over to the installed operators on this managed cluster that Red Hat is managing for us, and we have what I'd call the pre-release version of OpenShift Data Science installed. We have these two operators from Intel. One is the AI Analytics Toolkit, which includes some of the common frameworks created by third parties, like TensorFlow and PyTorch, with our Intel optimizations baked in: you can keep using TensorFlow, PyTorch, Modin, pandas, and scikit-learn, with the low-level Math Kernel Library and oneAPI optimizations underneath. But I'm going to focus on the OpenVINO toolkit, which is our Intel-native tool: an open source set of tools and runtimes, as I said before, for deep learning inference. When we did the integration, we were thinking about OpenShift Data Science and of course partnering very closely with folks like Audrey, and we added two components, two APIs, to the operator. One is our existing model server, which serves models in a service-oriented architecture.
When you deploy it, it deploys a UBI container, manages it with the operator, and creates an endpoint for prediction requests for your image classification, object detection, or NLP models. The other is a notebook instance: all you have to do is click this "create notebook" button, and if you're planning to do development, it installs the third-party dependencies you need in your Jupyter environment. So you can run OpenVINO, and tools like TensorFlow, PyTorch, and ONNX, and start optimizing the models you have trained. We even have some training examples that don't require a GPU, so you can quickly get started, run a training on a Xeon CPU, and start doing some optimization.

When you click "create notebook", it pre-installs a bunch of Jupyter notebooks, which you can see here, available through this Jupyter console. To get to it: if I'm using Open Data Hub there's a different link, but I'm using Red Hat OpenShift Data Science as part of the managed services. So we go to Red Hat OpenShift Data Science, and you can see these cards have been enabled here: OpenVINO and oneAPI. If I launch JupyterHub, which I've already done, there's an option to just click on OpenVINO and jump right into the notebooks. I selected the OpenVINO option and launched it, and these are actually two Jupyter notebooks that Audrey and I were sharing the other day at an Intel conference. They show how to take a model trained in TensorFlow; it's sort of the "hello world" example provided by TensorFlow, the flowers training. Let me see if I can make this larger, hide the table of contents, and put it in presenter mode. Is that a little easier to read?

Yeah, that's awesome. Thank you.
Okay. So this is the classic tutorial that TensorFlow provides, open source, an image classification model. It's what we'd call a toy model, because for many production use cases you're not really going to just classify a handful of flower types, but I'll dive in and show you what we're doing. This is the dataset that gets downloaded automatically when you launch the Jupyter notebook: it pulls down about 3,700 photos of flowers, with five subdirectories for the different classes. Those are the labels we'll have when we classify the images, just five different types of flowers. And feel free to stop me at any time, Chris.

You're doing fabulous, keep going. Push forward; this is cool.

Cool. So we do a visual inspection to make sure our labels are correct, and of course if you're inputting your own data this is something you definitely want to do: make sure the tulips are tulips and the roses are roses. Then we walk through some of the boilerplate steps you'd have for preparing a TensorFlow image classification training using Keras, which is part of TensorFlow. You will sometimes see these sort of funny, harmless errors; often it's because you're not using CUDA or some other library it expects you to use, but these are harmless, so don't be alarmed by the red. Then we have each label and the dataset preparation, some other model preparation, and then we actually run the training on the CPU. We'll go down to that part.
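The label setup Ryan describes, one subdirectory per class whose name becomes the label, can be sketched in a few lines of Python. The directory layout below is recreated in a temp folder purely for illustration, mirroring the TensorFlow flowers dataset:

```python
from pathlib import Path
import tempfile

# The flowers dataset is laid out as one subdirectory per class,
# e.g. flower_photos/daisy/*.jpg; the subdirectory names become the labels.
def class_labels(dataset_dir):
    """Return the sorted class labels inferred from subdirectory names."""
    return sorted(p.name for p in Path(dataset_dir).iterdir() if p.is_dir())

# Recreate the five-class layout in a temp dir for illustration.
root = Path(tempfile.mkdtemp()) / "flower_photos"
for name in ["daisy", "dandelion", "roses", "sunflowers", "tulips"]:
    (root / name).mkdir(parents=True)

print(class_labels(root))
# ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
```

This is also the visual-inspection moment: once the labels come from directory names, a mislabeled folder silently becomes a mislabeled class, which is why checking that "the tulips are tulips" matters.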
So you can see 15 epochs of training. I think maybe 14 cores are available to my pod right now, so with this 14-core pod, with not too many gigabytes of memory, it's able to process each epoch in about 152 milliseconds, which for a CPU is not bad. You have to wait a few minutes for this to run, but again, it's a very simple model, and you can check inside the notebook and see that the accuracy is actually pretty good: close to 80 percent, just after running those 15 epochs with those 3,700 flowers.

Then this is where we start to hand off from TensorFlow to OpenVINO. First we download a picture of a sunflower and make sure that it's classified correctly, and we get this little message down here that says it belongs to sunflowers, with 94.86% confidence. Now we save the TensorFlow model, in the SavedModel format. Often, before you take a model to production, you'll save the fully trained model and then use a tool to convert it: we have OpenVINO, and some of our competitors have other tools, like Core ML, TensorRT, or ONNX Runtime. You'll usually have a step where you convert to a different format before you deploy in production. In this case it's OpenVINO, so we convert it to the OpenVINO representation. There are a few parameters that can be defined, like the shape of the model, which we know from the previous steps is 1 by 180 by 180 by 3: the input images are quite small, with three RGB channels. For the output type, we go from floating point 32 precision to FP16 when we convert to OpenVINO, so we reduce the size of the model.
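The conversion step being described can be sketched as assembling a Model Optimizer command line. The flag names (`--saved_model_dir`, `--input_shape`, `--data_type`) follow the Model Optimizer CLI of this era, and the paths are hypothetical, so treat this as a sketch rather than the notebook's exact cell:

```python
# Build the Model Optimizer command described above: convert the TensorFlow
# SavedModel to OpenVINO IR, fixing the input shape and compressing to FP16.
# Flag names are an assumption based on the Model Optimizer CLI of this era;
# check `mo --help` in your own environment.
saved_model_dir = "flower/saved_model"   # hypothetical path to the SavedModel
output_dir = "flower/ir"                 # hypothetical IR output directory

mo_cmd = [
    "mo",
    "--saved_model_dir", saved_model_dir,
    "--input_shape", "[1,180,180,3]",    # 1 x 180 x 180 x 3 RGB input
    "--data_type", "FP16",               # FP32 weights compressed to FP16
    "--output_dir", output_dir,
]
print(" ".join(mo_cmd))
```

In a notebook with OpenVINO installed, you would execute this with `subprocess.run(mo_cmd, check=True)`, producing the `.xml` and `.bin` IR pair discussed below.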
It's still floating point precision, so we should see no change in accuracy and no change in the output of the model, but the performance should be slightly better. After we run this conversion step, the next thing is that we'll see it's successful, and the step after that is to actually use OpenVINO's inference engine to do the inference. Again, we set these five classes and load the OpenVINO-optimized version of the model, which you can actually see; let me go back to the directory that has these two notebooks.

It's quite long, but it's a pretty fun one, getting to see the flowers, and it's easy to visualize: okay, five flowers, figure out which one using AI. It's pretty easy to explain what's going on.

Awesome. We're back live, folks; sorry about that, Zoom just decided to disconnect us from the call.

Yeah, technical difficulties. So I'll go through the last few things we covered. We did a prediction on the TensorFlow model, and here, let me align this screen a little better: we saved the TensorFlow model and converted it to the OpenVINO format, moving to floating point 16, so the model size is reduced a bit and performance is a little better, but with no noticeable change in the accuracy or the output of the model. Then we run a test; you can scroll through all of this and see that the model converts successfully. We load the IR, the intermediate representation, which is what we call the OpenVINO format, and when you run that conversion step you'll see there's a .bin and a .xml: this is the output of the OpenVINO conversion.
So there's a Binary and then there's this xml that has a bunch of metadata for the the model Stored in it and that's what we're going to to use to run the inference in open venos inference engine so Here we load the network read it with inference engine and then we're going to download a picture Of a dandelion from wikipedia Just so we have an image that didn't come from the original training set and Because we don't want to Plug in an image that was used during the training. We want to have a totally random one So we download this dandelion and we see that It's able to detect or classify it as a dandelion and with great confidence And then that brings us to the next step so The next step is to do quantization and so now that we've trained the model It's in open vino format. We see that the it's able to classify the image correctly The next step is to further optimize. So we're going to actually lower the precision to eight bits Right now. I said it was in floating point 16 We're going to do a process called quantization And we have a tool that open vino provides That comes baked into the integration with red hat open shift data science. It's called the post training optimization tool And this tutorial and there's a few others That are currently in the the notebook Image and even more that are getting added, you know, I think we have three more coming So it's a lot of examples for you to start with for different use cases. And of course, this is image classification So first we're going to just Use the model that we trained in the previous Notebook and if you didn't run the previous notebook This notebook actually checks And downloads and runs the previous notebook if you happen to forget to run the one that does the training and then there's some sort of These are steps that are required to run quantization. 
So with our tool, you have to create a data loader, and there's an accuracy metric to help measure and compare the accuracy of the original model with the quantized model. We also have an accuracy-aware algorithm you can use to set the maximum allowed accuracy drop; that's one of the reasons for the accuracy metric. Then there's the actual pipeline that uses the Post-training Optimization Tool, which performs the post-training quantization. Running these steps gives us a compressed model in the lower precision, integer 8, and saves it in the directory. So, as I was showing before in the model directory, we now have these optimized, low-precision models saved by running this pipeline. Once you have this .bin and .xml, or if you're using a .onnx ONNX model, you can take them and deploy them with the model server and create an endpoint in OpenShift, so you can serve the models; I'll talk about that in a minute. Once you've done the experimentation, the quantization, and the optimization in the notebooks in RHODS, OpenShift Data Science, you can download these model artifacts and serve them in OpenShift. But let me finish the rest of the notebook before I show what that looks like.

One of the things people are always afraid of with quantization is that you are lowering the precision, and one of the key problems our customers face is the accuracy dropping so much that it's not worth the performance gain. So one thing we always want to check is that the original model and the quantized model are roughly the same, and in this case they're actually almost exactly the same. There will be cases, with more challenging workloads or challenging datasets, where you're not able to get this kind of accuracy, so you always want to check. Part of the process we were running before was
actually taking some of the images from the original training and using them in the quantization pipeline, so we can fine-tune the quantized model and avoid an accuracy drop. Checking the accuracy is a really important step. And last, we basically do the same thing we did before: we run inference on the quantized model, download that same picture of a dandelion, and see what happens. Again, we get "the image most likely belongs to dandelion", with about the same confidence as before on that one image.

Then this is the fun part: running the benchmarks. First, the benchmark app. This is one of the other tools we provide in OpenVINO: once you have an OpenVINO model, you can easily use this command-line tool, or use it through the Jupyter interface, to see what the throughput and latency are for the model you're using on the hardware you have. In this case we're running on AWS; we have a Xeon Platinum 8259, a new, pretty good Xeon. We don't have access to the full machine, because if I'm a data scientist using RHODS and everybody requests a pod that's got all the cores, it's going to be an expensive AWS bill. So with, I think, 14 cores, we can see the original model's throughput is around 2,891 frames per second, and if we look at the quantized, integer-8, low-precision model, we're getting about a thousand frames more per second. This really matters when you're dealing with something like video processing: if you're processing 30 or 60 frames per second, you're getting an extra thousand frames without having to buy additional hardware, and without any drop in accuracy. If I go back up here, you're getting a thousand frames more, and there's no change in accuracy here.
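The sanity check Ryan describes, comparing the original model's accuracy against the quantized model's before accepting the INT8 version, amounts to a comparison against an accuracy-drop budget. A sketch with hypothetical accuracies; the 1% default budget here is an illustrative assumption, not an OpenVINO constant:

```python
def acceptable_quantization(fp_accuracy, int8_accuracy, max_drop=0.01):
    """Return True if quantization cost at most `max_drop` absolute accuracy.

    Mirrors the check described above: compare the original (FP16/FP32)
    model's accuracy with the quantized INT8 model's, and reject the
    quantized model if the drop exceeds the budget.
    """
    return (fp_accuracy - int8_accuracy) <= max_drop

# Hypothetical accuracies in the ballpark of the notebook's ~80% top-1.
print(acceptable_quantization(0.801, 0.799))   # tiny drop: keep the INT8 model
print(acceptable_quantization(0.801, 0.740))   # 6-point drop: not worth it
```

POT's accuracy-aware mode automates the same idea, re-quantizing layers until the measured drop fits inside the budget you set.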
So that's part of the value we add with OpenVINO and our tools: giving you the ability to see the performance and the benchmarks right in the Jupyter notebook. These numbers are not perfectly accurate, because there's some overhead from running Jupyter and some overhead from Python, but it gives you a rough idea of the increase you can see by running this quantization step.

There are also some features that won't be available to test when you're running RHODS, OpenShift Data Science, in the cloud. One is that we have Intel GPUs, which right now are just integrated graphics. On my laptop, I could run this same notebook with Fedora, and it's actually able to access the integrated GPU; or if I had Red Hat Enterprise Linux on a desktop, a laptop, or a NUC edge device, you name it, we'd be able to tap into the integrated GPU. Soon, when we launch the discrete graphics cards from Intel, you'll be able to use those as well. And what this does is combine both devices: I can run it just on the GPU, just on the CPU, or, if I use this MULTI CPU/GPU mode, it maximizes throughput by using both. But of course on AWS we don't have integrated GPUs; it's just Xeon Scalable processors, so this part you'd have to run locally.

Okay, that's cool. We have a question in chat here from friend of the channel Carlos from IBM: what's the support model for this operator for enterprises wanting to use it? Can they pay Intel for support, patching, that kind of thing? How does that relationship work?
Yeah, so we have both a community version, provided as-is without any SLA or enterprise support, and the marketplace offering. If you want to use OpenVINO on the OpenShift Data Science platform, you do need to install the marketplace version, and there is a trial. With the marketplace operator we provide a 24/7 SLA and customer support, with technical consulting engineers who support the customers using it. So when you install the operator on OpenShift Data Science, you'll need the marketplace version. If you want to use Open Data Hub, DIY, do-it-yourself without any support or technical consulting services, you can use the community version that's on the Red Hat Ecosystem Catalog.

Awesome. Carlos, let me know if that doesn't answer your question, but I'm pretty sure that's a comprehensive answer. So, on the benchmarking: anything out of the norm here, or is this the expected kind of latency? Any bumps in the night you might have noticed using a cloud provider versus local hardware?

Yeah. We don't show a performance benchmark against TensorFlow, because it's not really easy to do an apples-to-apples comparison, but if I did, in almost any case OpenVINO is either slightly or significantly faster, depending on the workload. We do additional optimizations, like operation fusing and pruning of graphs, that happen by default when you run OpenVINO, so you should always expect the performance to be better than just using the framework. And when it comes to the hardware, I think getting over a thousand frames per second for any workload on a CPU is an accomplishment in itself.
Oftentimes we have partners and customers who decide to buy expensive discrete graphics cards because they don't think they can get this kind of performance just using their CPU. And there are cases where you need 50,000 frames per second, or you really want to spend that extra money because you're processing a thousand camera streams on one server, something like that. But when it comes to using the CPU, it's just a different equation: if you have, say, 88 cores and you know how many frames per second you can process, that determines how many camera streams that pod can handle. You have to do the back-of-the-napkin math and say, okay, I can go buy a discrete card and it'll cost me this much per month on AWS, or I can just buy one with Xeon and process this many frames, and then decide what's best. With CPU, the latency here is actually, I think, great, but it really depends on the workload.

And like all things Kubernetes, right: how does your workload perform, what does it want? If you have a pod with 100 cores available to it, that might be enough to do your job, or whatever you're trying to process, but sometimes...

That's a great point. Yeah, exactly, the math is a little different. Sometimes that big expensive GPU, like the one we're using right now to produce this show, is worth it, and sometimes it's not. So it looks like you can save a little money here just by bulking up the CPU.

Yeah, and we definitely, especially for training... I mean, I know I showed you this really simple model.
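Ryan's back-of-the-napkin math, total throughput divided by the frame rate of each camera stream, is easy to make concrete. The figures below reuse the ballpark benchmark numbers from the demo, and 30 fps per stream is the example rate he mentions:

```python
def max_camera_streams(throughput_fps, stream_fps=30):
    """Back-of-the-napkin capacity: whole camera streams one pod can serve."""
    return int(throughput_fps // stream_fps)

# Throughput figures in the ballpark of the benchmark run above.
print(max_camera_streams(2891))         # FP16 model on ~14 cores -> 96 streams
print(max_camera_streams(2891 + 1000))  # after INT8 quantization -> 129 streams
```

The same function answers the cost question: compare the streams-per-dollar of a CPU-only pod against a pod with a discrete accelerator before paying for the latter.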
It's very easy to train: if you have a GPU instance, training with an NVIDIA GPU will go much faster. There's another example that Audrey and I are looking at using for another event, which shows how to do quantization-aware training with PyTorch; that one runs really fast if you use an NVIDIA GPU, and then you can of course deploy the inference on Intel hardware. But you can also train it on a CPU; it just takes longer.

Nice, cool. That's awesome.

I'll just mention here that some people may ask which industries are benefiting from some of this technology. I just did a conference session yesterday for the International Association of Petroleum Geologists, and one of the things that comes to mind, which I know Intel is heavily involved with, is seismic. Seismic is basically pictures of the subsurface, but in a way it's almost like an X-ray: you have a lot of images that you need to process for certain details, certain horizons, seeing if you can pick out faults or pick out horizons. This is where having something like OpenVINO for quantization and for inferencing really makes a big difference, because those energy companies are using state-of-the-art GPUs in their high-performance computing centers, and they're trying to squeeze every last bit of performance out to see what they can see in their subsurface models. And I know Intel has Open Seismic, an alpha release that's currently a sandbox environment for developers in oil and gas, where they can run deep learning on 3D and 2D seismic. For me, that's fascinating, but don't discount anybody doing something in the medical industry either, like research for cancers, looking at images. OpenVINO has a lot to offer for a lot of different enterprises and industries.

And I found a link, "Analyzing 3D seismic data using the Intel Distribution of OpenVINO toolkit"; I'm going to drop that in chat so folks can take a look and get a better understanding of that kind of use case.

Yeah, this is really exciting; for lack of a better word, it's good stuff that we're doing together, and I get very excited about it.

I don't know as much about the seismic analysis, but I did a session yesterday with one of our healthcare partners, and similar to the seismic analysis, they have very large input images, or input data, going into the model. In the case of X-ray, which we were talking about yesterday, they have something like a 1024-by-1024 grayscale image as input, and they also process 3D slices of 3D images for CT scans. They're really measuring not in frames per second; they're asking, can we process one frame in less than a second? So it's a totally different paradigm, and every little bit of optimization counts when you're dealing with big inputs like that. With the X-ray, they were saying they were able to go from two seconds with TensorFlow to process one X-ray, down to 800 milliseconds with OpenVINO four years ago, and today we're down to 147 milliseconds; we keep inching down, going lower. And that was running on an Intel Core processor, not even a Xeon, because they have to plug it into an ultrasound machine and have an embedded box next to it in the operating room or at the hospital. So there's a lot we can do to squeeze more performance out of relatively low-power hardware.
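The latency milestones quoted for the X-ray workload translate into speedups with simple arithmetic; a quick sketch using the numbers from the conversation:

```python
def speedup(old_ms, new_ms):
    """How many times faster the new per-frame latency is than the old."""
    return old_ms / new_ms

# Latency milestones quoted for processing one X-ray frame.
print(round(speedup(2000, 800), 2))   # TensorFlow -> early OpenVINO: 2.5x
print(round(speedup(2000, 147), 1))   # TensorFlow -> today: 13.6x
```

At 147 ms per frame, the "one frame in under a second" target is met several times over, which is what makes the low-power embedded deployment viable.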
That's fascinating. And just the idea of having that box next to the X-ray machine, the MRI machine, whatever it may be: that's incredibly powerful in the hands of the right people, right?

Yeah, the cost makes a huge difference. They were saying that around the world, we're lucky in the U.S.: it's something like one radiologist for every 10,000 people, but in other parts of the world it's one for every quarter million people. Wow. So bringing these devices to places that don't have access to radiologists matters, and I'm sure with energy there are similar challenges of bringing the equipment and doing it on site in places where there's no internet connectivity. You can't connect to AWS when you're out on a rig, or out on an island somewhere, or in a hospital that doesn't have access to fiber; you just have to run everything right there, on a device that is affordable.

Yeah, and I'll just put in a plug for the energy industry: this also helps in areas where we have data gravity, when we're working with national oil companies that don't allow us to take the data out of the country. What do you do if you can't use a public cloud provider? You have to create some sort of on-prem or hybrid cloud. And that's where the questions start: do you build your own high-performance computing center, which could be cost prohibitive? Do you use some of this fancy stuff from Intel? What are you going to do? We help you with the Intel stuff, and of course you're going to use the Intel stuff with Red Hat OpenShift Data Science.

Right, and we're really excited.
I think the next phase is doing some of this hybrid setup: having edge devices that can be fleet-managed by OpenShift Data Science, where you have the control plane in the cloud and edge devices that either call home just to send some telemetry data, or at least aren't necessarily sending all the data they're processing back to the cloud. I think that's really the future: we're going to have these low-power devices that may not have great internet access, but they need to be managed, and they need to connect back to a control plane like OpenShift.

Yeah, and I'm going to put another plug in for the energy industry, because that is freaking amazing: you have a lot of oil fields that are not that close to good cloud services, so we have to do some of the inferencing, on the fluids we're pumping out of the ground or on the health of the wellhead, away from the mothership, so to speak. Being able to do that first, before we get the rest of the information, makes things so much quicker for the geoscientist, or the engineer looking after the oil field, so they can make decisions quicker. Especially if there's something that could be failing, something bad: we want to know about that quickly, and not have to churn through data and say a day later, "oh, something's happening," when something bad has already happened.

Yeah, exactly; that decreases efficiency. And just knowing, okay, we're reaching the point where our coefficient is off and we need to replace a part: it's incredibly powerful to know that sooner rather than later, right?
Especially when you have a couple thousand wellheads in a field, exactly.

Yeah, you're never going to be able to inspect all of that every day, unless you have a thousand people going out to those, because oil fields are usually huge, vast expanses of land. So a lot of good comes out of what we're doing; we're helping people, and that's awesome.

Exactly. That's always awesome, and it's always great to help people.

All right, anything else you want to share here? I know we had some slides; let me drop those in chat, folks, for your benefit, even though we didn't technically use them.

We also have intel.com slash openvino dash customer, er, openvino dash success dash stories, and I'll drop that for you, Chris.

Thank you. I was trying to type it as you were saying it, and I was like, oh my god, it's a long one. I was good until the second dash, and then you lost me.

So if anyone wants to learn about some of those use cases and hear from partners like BMW, GE Healthcare, and Siemens, who are solving some of these challenges, hopefully more of your customers will join this exciting board of success in the future.

Yeah, and this is pretty awesome for the oil industry, or energy industry: I'm going to drop in the stuff for Open Seismic, because I just think what's being done there is freaking awesome. Let me drop that in chat for everybody.

Yeah, that's awesome stuff. Cool; there are no more questions, so if there's no more content we can wrap here. It's entirely up to y'all how much longer you want to go.

I need coffee, so, okay.
Yeah, let's get Ryan some coffee, folks. Thanks for tuning in. Coming up next, in 17 minutes, we have the What's New in OpenShift 4.9 briefing, joined by multiple product managers from the OpenShift team, so please stay tuned for that. There's a full slate of shows on the calendar today, so please go check out that calendar and join us for any of the shows you might find interesting. Thank you, Audrey; thank you, Ryan; really appreciate it. This data science stuff always fascinates me, and this was again truly fascinating, so thank you very much for it.

Thanks for having us. Yeah, thank you. Stay safe out there, folks. See you soon.