Good morning, everybody. Thank you for joining us today for our license plate recognition workshop. Within this workshop, we hope to give you a fantastic introduction to Red Hat OpenShift Data Science. Joining me today are my colleagues Erwan and Carl, but let's go ahead and introduce ourselves.

I can go first. My name is Carl Eklund. I'm an architect here at Red Hat, working on the Red Hat OpenShift Data Science team. It's a very exciting platform, and we're excited to show you what we have for you today.

Hi, everybody. My name is Erwan Granger. I'm also an architect working for Red Hat, and I look forward to working with you on this today.

Good morning. My name is Audrey Resnick. I'm a data scientist working for Red Hat (I just joined last year), and I'm looking forward to going over this workshop with you. So with that, let's take a look at the agenda we're going to be covering today.

What we want you to experience today is how Red Hat OpenShift makes it very easy to get started and to get going rapidly to develop, test, and train models. To do that, we're going to show you how Red Hat OpenShift Data Science allows your data scientists to obtain the tools and technologies they need to actually get their jobs done. Specifically, we're going to be looking at the Red Hat OpenShift Data Science platform, and we're going to use it to recognize license plates in car pictures and extract the number from an identified license plate. Throughout this workshop, we're going to be here to help you out and to answer questions.

But first, let's take a look at what Red Hat OpenShift Data Science, the product, actually is. I've got a blurb here from our main page, but essentially, Red Hat OpenShift Data Science enables companies and their data scientists to solve critical business challenges by providing a fully managed cloud environment on Red Hat OpenShift Dedicated or Red Hat OpenShift Service on AWS. Essentially, this allows data scientists to carry out their machine learning workflow without having to become OpenShift experts. Being a data scientist myself, I do appreciate that, because at the end of the day I just want to work on my code. I don't want to worry about the infrastructure; I just want to have the tools available so that I can actually get my work done.

With Red Hat OpenShift Data Science, you're able to quickly create models. We're going to be using Jupyter notebooks today, and within those notebooks you could have TensorFlow or PyTorch. You can also use NVIDIA GPUs, and all of that, again, without worrying about the underlying infrastructure.
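As an aside, and this is just a hedged illustration rather than part of the workshop environment, once a GPU-backed notebook is running, one quick way to confirm that TensorFlow actually sees the hardware is:

```python
# Minimal sketch: list the GPUs visible to TensorFlow from inside a
# running notebook. Nothing here is specific to the workshop cluster.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"TensorFlow sees {len(gpus)} GPU(s): {gpus}")
```

If the list comes back empty, the notebook was launched without GPU access.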
Mind you, if you're in DevOps, you'll be interested in that, so we'll talk a little bit about that underlying infrastructure. Really, what we want to be able to do at the end of the day is consistently export these models to production in what we call a container-ready format. If you have something in a container-ready format, you're going to be able to export those models across hybrid cloud, multi-cloud, and edge environments. And Red Hat OpenShift Data Science really provides data scientists access to these hybrid cloud services and compute acceleration (for example, NVIDIA GPUs) without having to file a ticket with IT. How many of us have been in an environment where we wanted to do something, had to file a ticket, and then waited one or two days? It's not fun. So we're hoping this Red Hat OpenShift environment gives you the power to go ahead and do what you want without having to file that ticket with IT.

The easiest way to understand the Red Hat OpenShift Data Science platform is to think about the various tasks that data scientists need to perform when they're building and deploying a model. We've broken these tasks down into four steps, and we're going to look at each of them individually.

The first one is to extract and transform the data. This all starts with data acquisition. The data engineers can integrate streaming data from Red Hat OpenShift Streams for Apache Kafka, or reach out across the hybrid cloud to pull in data for analysis from multiple platforms and data sources. The data engineers can then work on gathering and preparing the data to make it ready for the data scientists to start experimenting and creating their machine learning models. An example of a managed service they could use here is Starburst Galaxy: a fully managed service to access your data using Trino, a premier SQL engine that gives you fast access to, and flexible management of, your data.
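To make that a little more concrete, here is a minimal, hedged sketch of querying data through a Trino endpoint with the trino Python client. The host, user, catalog, schema, and table below are all hypothetical placeholders, and a real Starburst Galaxy endpoint would also require authentication:

```python
# Sketch only: query a Trino endpoint from Python. All connection
# details below are placeholders, not the workshop's environment.
import trino

conn = trino.dbapi.connect(
    host="example.galaxy.starburst.io",  # hypothetical host
    port=443,
    user="data-engineer",
    http_scheme="https",
    catalog="hive",       # hypothetical catalog
    schema="traffic",     # hypothetical schema
)

cur = conn.cursor()
cur.execute("SELECT plate, seen_at FROM sightings LIMIT 10")
for row in cur.fetchall():
    print(row)
```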
The next task, or the next portion of developing a model, is to be able to run experiments and develop models; this is where the data scientists develop the machine learning models. An example of a managed service here is JupyterHub. You're actually going to try this out in the workshop, and it's going to allow you to create multiple Jupyter notebooks for your experiments. Of course, if you're like me, you're going to start off in one notebook, go "I've got another idea," lift part of that code into your next notebook, and continue on with your experimentation until you feel the model is at a stage where you want to try deploying it. Within the Jupyter notebooks, you can pick which Python packages and libraries you find useful for your particular problem, so you may experiment with numerous packages such as TensorFlow, PyTorch, scikit-learn, and others.

The next step is to deploy models in an application. You can really simplify and accelerate the process of deploying and managing your machine learning models. Once you have them developed, you can use, as we will today, our Source-to-Image (S2I) templates to deploy an endpoint for testing. Within the Red Hat OpenShift Data Science platform, you could also have services such as Seldon Deploy for model serving. Typically, your models are going to be contained in what we call intelligent applications; those are deployed, and then the machine learning models start inferencing, or making predictions, based on any new data the model sees. And if you use a managed service such as Seldon Deploy, that's also going to help you build your pipelines to deploy your models.

Let's take a look at the fourth and final task: monitoring the models and tracking performance. Your work isn't going to stop once your model is deployed. You have to continuously monitor and manage your model in production to make sure it's making the right predictions. You need some way to monitor the models and track performance, because if your model starts to drift, you want some sort of alert to occur. And if drift does occur, along with that monitoring, you should be able to retrain that model easily. Again, you can continue to use a service such as Seldon Deploy, or even Watson Machine Learning and Watson OpenScale, for model monitoring and performance tracking, to know when you need to retrain and deploy.
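Just to illustrate the drift-alert idea (this is a toy sketch, not how any of the managed services above actually implement it), one simple approach is to compare the distribution of recent model confidence scores against a reference window:

```python
# Toy sketch: flag drift when recent scores no longer look like the
# reference distribution. The KS test and threshold are illustrative.
from scipy.stats import ks_2samp

def drift_detected(reference_scores, live_scores, alpha=0.01):
    """Return True if live scores look drawn from a different distribution."""
    statistic, p_value = ks_2samp(reference_scores, live_scores)
    return p_value < alpha

# Hypothetical data: confidence scores from validation vs. production.
if drift_detected([0.92, 0.95, 0.88, 0.91] * 50, [0.55, 0.61, 0.50, 0.64] * 50):
    print("Model drift suspected - consider retraining.")
```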
Now, when we look at the overall picture here for a data scientist, with these services we're looking for flexibility. Keep in mind that for IT ops or DevOps, flexibility can actually be a nightmare: they want a very reliable, stable, reproducible environment, and stable, reproducible infrastructure for their customers. We're going to address that next as we look at the underlying infrastructure for this Red Hat OpenShift Data Science platform. If you're a data scientist, you can kind of tune out a little bit, but if you're in DevOps or IT and interested in what the underlying structure is, you'll probably want to pay attention.

At the very bottom, we have the main infrastructure, or hybrid cloud platform. It should provide a consistent experience across on-prem, public clouds (with the public cloud, of course, we're going to be on AWS), and edge locations, and all of this should be efficiently and easily managed by IT operations.

The next layer is compute acceleration. The hybrid cloud should have interactions with hardware accelerators; we use NVIDIA GPUs to help speed up machine learning model development and any inferencing tasks the model needs to do.

And then we have the managed services. These have to be supported on a self-service, hybrid, multi-cloud platform, and that should empower the data scientists, data engineers, and software developers to be very agile and collaborative throughout the whole process. The way they're really going to be able to collaborate is by using a number of the open source tools and capabilities that we have, and again, that's all without depending too much on IT operations for individual tasks. Remember, we want to avoid opening a lot of tickets so that we can get on with our work.

Let's take a closer look at these tools and capabilities. We really want to deliver common data science tools as the main foundation of the AI-as-a-service platform, integrated with our partner cloud services, and we extend these tools further by adding open source and partner tools through a shared UI. We'll be looking at the UI shortly when we start our workshop.

Among the tools and capabilities you're working with today, we'll be looking at Jupyter notebooks. Within JupyterLab, we'll use Jupyter notebooks to conduct your exploratory data science, and those notebooks will demonstrate some of the code we've built for the license plate workshop. Again, an example of a managed service is JupyterHub, which allows you to create many Jupyter notebooks for your experimentation, and while you're in JupyterHub, you can determine which Python packages or libraries would be useful to work with. You may want to experiment with TensorFlow, PyTorch, or many of the other packages available; in our instance, we're working with TensorFlow today.

Then we have Source-to-Image. Once you have your models developed, you can use our Source-to-Image templates to deploy endpoints for testing. You'll also have access to Seldon Deploy for model serving; actually, I should mention Seldon Deploy is coming up in a future version, so for right now we're going to be using Source-to-Image. And typically, again, as I mentioned, your models will be contained in an intelligent application that is deployed.
So you'll want to have the models start inferencing and making predictions on any of the new data they observe.

That's all really nice, but you're going to ask yourself: what do these features, these services, actually do for us? There are four key features to consider. First, with a managed cloud service, there's no managing infrastructure, which is really helpful. Second, we have increased capabilities and collaboration: access to a whole open source ecosystem with ISV (independent software vendor) certified software available on Red Hat Marketplace. Third, we support core data science workflows, from developing a model to integrating it as part of a larger application. And fourth, RHODS (Red Hat OpenShift Data Science) is really ideal for rapid experimentation: you can take things you've developed on the platform, export them, and run them elsewhere, whether that's on-prem or in the public cloud arena; again, in our instance, we're using AWS. I'd like to make one statement here: remember, you don't want to tie yourself to one cloud vendor. You should have the flexibility to move your solutions back on-prem or to another cloud provider, and you can accomplish that with the way we build containers for your applications or solutions.

At this point in time, Red Hat OpenShift Data Science is in beta, and we have some initial release components. We do have ISV software as a service, so you're going to be able to see JupyterHub and, in JupyterHub, create the Jupyter notebooks you'd use for your experimentation. We also have Red Hat OpenShift Streams for Apache Kafka, and Red Hat OpenShift Data Science itself, which is what we're looking at today. And remember, all of this sits on top of your managed cloud service; for our instance, the Red Hat OpenShift Data Science service runs on Amazon Web Services. With that, you're going to have compute acceleration: not in this workshop, but you would have access to GPUs, namely NVIDIA GPUs. And then, finally, you have your cloud infrastructure.

In the future, when we fully integrate our partner ecosystem, you're going to have customer-managed ISV software and ISV-managed cloud services.
So you're going to have a plethora of tools to choose from, and a very rich platform where you can pick and choose the services and applications that make the most sense for your particular business problem.

With that overview, let's take a look at what we're going to be doing in our license plate detection workshop. This workshop is based on a project actually being undertaken by one of our colleagues, Guillaume Moutier, for Metro London, and the main objective is to monitor traffic movement and car registration fees through license plate detection. We have a machine learning model that detects the license plate on a vehicle; if the vehicle is angled, the license plate image is straightened, and then the characters are extracted through the machine learning in our model.

Once we extract those characters, the data is stored, read, and analyzed through Kafka. For folks who don't know Kafka: it's open source software that provides a really good framework for storing, reading, and analyzing streaming data. In this instance, we could be looking for a particular license plate, and you might actually raise an amber alert for an identified plate that is of interest to the local authorities. At the end of the day, though, what we mostly want to do is store the data: we could create a vehicle registration database containing all the license plates of the vehicles we've captured going through the Metro London area. And then, finally, we could perform business intelligence and use analytics tools on the data we've stored, so that we get a better idea of, say, traffic movement within certain parts of the city. We could determine if some areas are congested, and we could also determine if we need more parking within an area. That's a really rough overview.
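To make the Kafka step concrete, here is a minimal, hedged sketch using the kafka-python client. The topic names, bootstrap server, and plates-of-interest list are all hypothetical; this is just the shape of the idea, not the workshop's actual pipeline code:

```python
# Sketch only: publish each detected plate to Kafka, and raise an
# "amber alert" message for plates of interest. All names below are
# placeholders, not the workshop's real topics or servers.
import json
from kafka import KafkaProducer

PLATES_OF_INTEREST = {"AB12 CDE"}  # hypothetical watch list

producer = KafkaProducer(
    bootstrap_servers="my-kafka-bootstrap:9092",  # placeholder
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def record_plate(plate: str, camera_id: str) -> None:
    """Store the sighting; alert separately if the plate is flagged."""
    event = {"plate": plate, "camera": camera_id}
    producer.send("detected-plates", event)
    if plate in PLATES_OF_INTEREST:
        producer.send("amber-alerts", event)
```

Downstream consumers could then feed the vehicle registration database and the analytics tools mentioned above.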
What we would like you to do now is type in the following URL: bit.ly/ods-signup. That's going to take you to a spreadsheet, which we're using to track how you get a username and password for this workshop. When you come into the spreadsheet, we want you to put down your first name and last name; that's going to be associated with a username. The password is the same for everybody: rhodsdemo. So, for example, I entered Kathy Smith; she would be using username user2, and she would have the password rhodsdemo. I'll give you a few seconds to get into that.

I'll mention that you'll want to open up three browser windows, because you'll notice we have three URLs here. One browser window will be for the workshop instructions, another will be to get into the actual OpenShift Data Science platform, and the last one will be for going into OpenShift Dedicated, which you'll use to deploy your containerized application.

Here's what those windows look like. Column E on your spreadsheet has the workshop instructions; these are the instructions you'll go through, and we'll go through some of them with you. Column F brings you into the Red Hat OpenShift Data Science work area, where we'll be launching JupyterHub. Everything you click on in these various browser windows, whether in the Red Hat OpenShift Data Science work area or within Red Hat OpenShift Dedicated, is listed in the workshop instructions. And finally, column G gets you into the OpenShift Dedicated platform.

Okay, I'm just going to ask this question: is there anybody who was unable to get to the spreadsheet and pick a username to start with? You can note that in the chat or within the Q&A. It doesn't look like anybody has had any issues.

So what I'm going to do is stop sharing my screen, because I'm going to get into the main workshop so I can show you what it's going to look like. I'm just going to wait for everything to catch up. If you open up the URL for the Red Hat OpenShift Data Science workshop, you're going to see an introduction.

The way this workshop is going to run is that we're going to give you the flexibility to work through the workshop up until, let me check my notes here, step 5. When we start talking about how you're going to run an application, we want Carl to explain how we're using the API with your prediction.py, which holds the prediction function for your model. Then we'll continue with Erwan talking about how you set up your OpenShift Dedicated environment. Again, those instructions are in the OpenShift Data Science workshop tutorial; we just want to talk through a few things within those steps to make sure everybody is comfortable with the steps they'll be performing.

We're going to give you around 15 minutes or so to work through the first bit of the workshop. That's going to take you right up to choosing an actual image to work with, and I'm not going to spoil the surprise for you. But again, once you get to approximately step 5, pause and wait for us to go through that information.

And I'm just going to ask again: has everybody at least gotten to the point where they were able to open up the workshop tutorial? If you haven't been able to open the workshop tutorial, or the main Red Hat OpenShift Data Science work area,
please let us know in the chat so that we can give you some assistance. Okay, with that, you have 15 minutes; we'll reconvene at a quarter to the hour. If you have any questions or comments, please put them in the chat. We want other people to see your questions and comments, because they may be having some of the same challenges or questions.

Okay, so the URL to the lab. Let me get that for you really quickly. Carl or Erwan, do you have the bit.ly URL? "Yeah, want me to paste it?" Okay, we'll do that; coming right up. So that's the URL to the main sheet, and that sheet has the three links: the instructions, the OpenShift console, and the RHODS console.

Hey everybody, just a reminder that we are available in the chat if you have any questions or comments. We can respond to you directly there, or we're happy to address it out loud, since it's probably a question other people have as well. We have about five or six more minutes before we move on to the next step; we just want to make sure everybody's had a chance to get through steps one through four.

All right, it's about a quarter till, I think. At this point, unless there are any objections posted in the chat, I think we can move along to step number five, so let me share my screen. All right.

Step five, like the previous four steps, has everything you need for when you come back to this workshop and check it out for yourself, or just want to refresh your memory in a couple of months. But I would like to give some voiceover to what's actually happening here.

In steps one through four, you had a chance to use a notebook environment to try out and build a new model. Notebooks work really well as an interactive tool, and they're specifically tailored for a data scientist doing data science workflows. But once you have a model running and working the way you want it to, what most groups do is move that model into a production environment. Notebooks really aren't the best option for something like that; as you experienced, it's a manual, interactive tool, and you execute a cell to run the code you're looking for.

What we would like to do instead is serve our model using a RESTful API. We've chosen to do that using the quite famous Flask toolset. For those who aren't familiar, a RESTful API is really very simple: it's an endpoint you can make HTTP requests to, or post data to, and have it return a result, where that result is based on the model you just built. Pretty straightforward, but very effective in production environments.

So the work that you've done, the model that you built: we can extract all of that code and put it into what we're calling a prediction.py file. The name doesn't matter; the important thing is that you've taken your interactive code and moved it into a Python file. Our Flask application (in this case, the wsgi.py file, accessible in that left pane in JupyterLab) will grab all of the code within your prediction.py file and use it in the headless production environment. So if you haven't already, please take a moment to check out that prediction.py file. You'll see it's very similar to what you were just using in the notebooks (notebook number two, I believe). The wsgi.py file imports the pertinent functions and then exposes them through two RESTful API routes, so two URLs.
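To make that pattern concrete, here is a minimal, self-contained sketch of the idea: a prediction function wrapped by a small Flask app. In the workshop, the model code lives in prediction.py and the Flask app in a separate wsgi.py; the function names and payload shape below are illustrative, not the workshop's actual code (the "/" status route and the "/predictions" route do match what the workshop exposes):

```python
# Sketch only: a stand-in prediction function exposed via Flask.
# In the real workshop this is split across prediction.py and wsgi.py.
from flask import Flask, jsonify, request

application = Flask(__name__)

def predict(image_bytes: bytes) -> dict:
    # Placeholder for the real model: detect the plate, straighten it,
    # and read the characters from the image bytes.
    return {"prediction": "AB12 CDE"}

@application.route("/")
def status():
    # Simple liveness route, handy for checking the deployed endpoint.
    return jsonify({"status": "ok"})

@application.route("/predictions", methods=["POST"])
def predictions():
    # Accept an image upload and return the model's answer as JSON.
    image_bytes = request.files["image"].read()
    return jsonify(predict(image_bytes))

if __name__ == "__main__":
    application.run(host="0.0.0.0", port=8080)
```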
So when we open up notebook number three, you'll see that we have a handful of system calls: one is to install the appropriate requirements, and the second one is to launch the web service. Notebook number four is another notebook we have set up so you can post data to your exposed endpoint. So the notebook environment lets you build your Flask application, test it on localhost, and then make calls, or post data, to that endpoint, and your model will run and return a result to you.

That's all fantastic, but it's not exactly the production environment we're looking for: you had to open the notebook and execute those cells manually. In an environment like OpenShift, for example, we want a pod and container to handle all of that automatically for us. Erwan, in the next step, will actually show you how you can string all of this together, using Source-to-Image and Git as a DevOps workflow, to put this into production. So I'm going to hang out here for a little bit to give you all a chance to run notebook number three and, once that's running, use notebook number four to make calls against the API.

What do you think, Carl? Do you think they've had enough time to run these two notebooks? "I think so. I have confidence in them." Okay.

Right, so I guess it's my time to share my screen. All right, quick sanity check: Carl, can you see my screen? "I can." That's great.

All right, so at this point (this is step number six, you can see that here in the URL) we're packaging our application. What Carl talked about is that once we were done doing the work in the notebook, we packaged all of that as prediction.py. Now, what do we do with that? Do we email prediction.py to somebody and say, hey, you know, make it work? No, that is so 2008. There's probably not a very high chance any of that would work, if you think about the dependencies: what version of Python it needs, what packages, all these kinds of things. That's what containers exist for.

So in step number six, that's what we do: we show an example of how to package this application as a container image, and then how to deploy it. Now, there are multiple ways of doing it; the way we show here in this workshop uses this thing called Source-to-Image. So when you log in and you create a project (I think I had it somewhere on my screen; too many tabs open, as usual), all right: when you click on this Add button, you have a choice. You can add applications in many different ways. If you like to live on the edge, you can just go, "Yep, I'm just going to type some YAML from scratch; that's the kind of person I am." Right? Nobody wants to do that. So I'm going to back out of that, and instead, what you're going to do as part of these instructions is deploy from Git.

Now, I think three or four of you have already done that, so kudos. If you haven't done it, I'm just prompting you now to move on to this exercise, and I'll just explain quickly what this is going to do: it's going to reach out to your GitHub project, detect that it's a Python-based project, and then do a few things almost simultaneously. It will take this famous prediction.py and build a container image from it, and then it will deploy a pod from that. So that takes maybe a few minutes.
Obviously, it's going to depend on the actual environment. But let me go back to the topology screen, and eventually, this is what you want to see. If you see this, that is looking good: it means that I have one running pod, and it means that my model has been deployed. If you click on this little guy here, you can see that this is the pod that did the build. So I've built my image, and then the pod is running from the built image. If I had made a change in my GitHub project, for example, I could just click on this Start Build, and you would see here how it does a new build of the image. It's kind of useless here, because I haven't changed anything, but it would rebuild the image, then get rid of the old pod and start a new pod from the new image. So it's a very fast and efficient way to iterate over those kinds of steps, and a nice way of doing it.

So, I'll scroll back up to the top here, and I'll let you do this: package your application. You will see that the image build takes maybe two or three minutes before it's active. Then I'll go on mute to stop disturbing you while you're working on this, and we'll reconvene in five minutes to do the testing part. If you have questions, feel free to put them in the chat.

Okay, so I think I see many of you have kicked off the build process. I can see it's not finished for everybody, so if you're still working on that, keep working on it. If you're waiting for the build to complete, then you can just listen to me, and I'll try to address a question coming up in the chat: where is the Dockerfile, right?

So yeah, if you've been working with containers, you might have used Dockerfiles to create your own images, and usually this is something you might even do on your own laptop. Source-to-Image is an alternative way of doing that, where the build happens on the OpenShift cluster itself. It doesn't leverage a Dockerfile in the same way; it really leverages conventions and defaults that allow those builds to happen. Do I still have it on my screen? Right, you see this screenshot here: once you put in your Git project and it detects that it's a Python project, this is where it picks what's called the builder image version. It's not exactly like the FROM in a Dockerfile, but it's close enough in that sense. So all of this happens straight on the cluster; there's nothing for you to do on your own laptop, and the image is going to be saved in the embedded registry in OpenShift.

Let me just see... okay, I'll put the GPU question aside for now, but we'll answer that in a minute. So what does this do? As I explained: yes, it creates a build (this is my second build), and then this is my running pod. You only see one running pod, because when I got the new image, it basically got rid of the old one and created a new one.

And then the last thing you can see here is a route. If you've clicked on this hyperlink... yeah, I mean, it doesn't get any better than that, right? Status: okay. I'll take that any day. The interesting piece here is that this is a public-facing URL, so you can send that to anybody. Currently, it's wide open.
You might not want it to be like that for real production workloads, but anybody with a picture of a car could send it to this URL, slash predictions, and if we can detect the license plate, what they'll get in return is a license plate number.

So how do we test that? If you've gotten to the bottom of this (there we go), head to the next section: testing the application. I'm not going to do it right now, but you can do this from your own workstation. Here, the URL would need to be modified: you would have to use your own route URL. If you're on a Mac or a Linux machine, you can pass an image to the URL and you'll get the license plate in return; if you're on Windows, you can do the same thing with PowerShell.

And just so that we can be sure everybody is able to do this (oh, okay, I'm seeing the question in the chat; I'll address that in a second), we also have a notebook that's ready for this: number five. I think I opened it earlier... too many tabs open... there we go. This is notebook number five, and what you're going to want to do here is update this my_route value and put in your own URL, so that it hits your own environment. This user151, that's me, so yours is going to be slightly different. Then if you run this, or just rerun it from the top: boom, boom, boom. There we go: a prediction. This is what I get in return when I pass this image.

So, the question in the chat: yeah, if you keep receiving CrashLoopBackOff... let's see, can I toggle to yours here? I can't toggle to yours directly, but I can guide you through the process. When you added the project, when you went to the Git step, you were supposed to put in the URL, and here you were supposed to put in the name of a specific branch. If you skipped over this, you're going to get a CrashLoopBackOff. So how do we recover from that?

One way, I believe... okay, let me just make sure... yeah, I think that's it. By the way, your view might look more like this, but either way, it doesn't matter; I like to see the graph view. If you click on the little D there, for Deployment, and then click on Actions and the Edit entry for your application, this brings you back to the original import-from-Git form. So if you made a mistake here, if you forgot the app branch, you can type it in here and then click Save. That's how you get back to this step if you missed it. So once again: you click on the deployment level here, then the Actions drop-down, then that Edit entry. Hopefully that gets you out of trouble if you're having this constant crash loop issue. Oh, and once you do that, you're probably going to want to come back to the build screen and do Start Build again.

Right. So, have you been able to run this and send an image? "I tried both curl and notebook 05, hitting an internal server error." Okay, that should not be the case. The first thing to verify would be that your route is working: when you select your app and scroll down to the route, you want to see this status-okay response. That's kind of the first step. You want to make sure that when you copy and paste this as your route and post the image from Google, it works.
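For reference, the notebook 05 test boils down to something like the following hedged sketch. The route URL is a placeholder for your own, and the multipart "image" field matches the earlier Flask sketch; the workshop notebook's actual payload shape may differ:

```python
# Sketch only: post a local car picture to your deployed route and
# print the prediction. Replace my_route with your own route URL.
import requests

my_route = "http://licenseplates-userNN.apps.example.com"  # placeholder

with open("car.jpg", "rb") as f:
    response = requests.post(f"{my_route}/predictions", files={"image": f})

print(response.json())  # e.g. {"prediction": "AB12 CDE"}
```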
Okay, so honing in: got some success there. That's good. Let me see... which user? Thank you: user18. Let me take a quick look at this. Okay, so the status is okay here. Let me test it out from my environment... all right, well, it works for me. So make sure that you have the right reference to the right route, and then that should work for you as well. And to answer the question: yes, the Git reference should be app, APP, lowercase. Any other questions or issues? All right, so I think I'll let Audrey take over.

Okay, everybody. Hopefully you've had the opportunity to work through the entire workshop material. If you are still working, that's okay; we are going to be online until half past the hour, so keep on working and ask questions if you have them. We really hope that you've learned something about the Red Hat OpenShift Data Science platform: that it's quite easy to work on a model, test out your model, and then actually deploy your model on OpenShift Dedicated.

Oh yes, so: which NVIDIA models are supported? Will anything NVIDIA work? How about AMD? Right now, we just have this working with NVIDIA GPUs (I believe we're using the V100s), and I don't have a link for anything else on NVIDIA support at this time. Within the actual platform, where you see the services, I forgot to mention there's also the opportunity to look at documentation for the various independent software vendors, so you should be able to learn a little bit more about NVIDIA there. And Chris is right: right now, this entire service line is sitting on top of AWS, so it's public cloud; we don't have on-prem NVIDIA options at this time. However, that doesn't preclude you from testing out stuff locally on-prem if you have NVIDIA hardware. You could always use a PyCharm IDE or something like that to test out your model there, against NVIDIA GPUs on-prem, and then move it over to the cloud. And Chris just answered the other question.

Yeah, and I'll just add to this that it's not enabled in this particular cluster, but when you're launching your notebook, if you have instances that have GPUs in them, then at launch time you have an option to choose: do you need GPUs or not? You probably didn't see it in this one, just because we don't have any, but if there were some available, you would have been able to see that. And just a secret: using those GPUs is awesome, right? Don't tell anybody, but it's awesome.

Now, seriously: if you do have GPUs enabled, as Erwan mentioned, it would show up in that area when you're creating the image for your Jupyter notebook. So if you had one, or two, or three or four (could you be so lucky as to have as many as six?), then you would be able to choose how many you wanted to work with at that time.

All right. So again, we thank you very much for your time and attention during this workshop. We are still going to be online until half past the hour, so we do encourage you, if you're still working, to keep on working and try to finish up the workshop. And if you have any questions, please post them; we would be happy to answer them. But from all of us, thank you so much for attending our workshop today and trying out Red Hat OpenShift Data Science.

And somebody said that it was easy. Yes, that is what we are striving for: easy. As a data scientist, I want things to be easy; I just want to worry about creating my code. So yes, thank you for that comment.
And yes, we have the comment that the model worked, but struggled with more complicated plates. Well, yeah, our model isn't totally perfect, but if you took this model and kept on training it, we would hope that it would become very effective at reading the license plate numbers. And thank you, Sophie, for dropping the URL; I was just going to grab that. If you do want to learn more about Red Hat OpenShift Data Science, you can reach out to your own account rep or the Red Hat account team for more information on how you could get access to the product.

Right, yeah, that would make sense. Because I used the default image, it worked for me. But if you put in a picture of your own car, and maybe the picture is blurry or something, maybe it wouldn't recognize it. So yeah, that could lead to that kind of result.

Yeah, I was just going to say: I tried that out with my own vehicle, and I think after the sixth picture that I took and uploaded, it finally recognized it. But yeah, so you can drive into central London and not be worried about getting charged for it, I guess. At least at this point in time. It's a bit far for me, maybe.

And just in case you wanted to double up your introduction to Red Hat OpenShift, I've also included the link off of Red Hat's main site. Again, it's just a blog, and it goes over and touches on some of the points that we touched on in this workshop, just in case you want to send that URL to your friends and say, you know, "I participated in this awesome workshop, and this is the product I learned about." And I should mention that that blog was written by our famous Sophie Watson, so it'll be doubly worth your time to take a look at it.

I can probably answer that question. So, the question is: what was the meaning of writing "app" in the Git reference? I was going to show you on my screen, which I'm not sharing, so that's not good. Simply, in this case, it's the name of the branch that we want to use. The Git project that we use has multiple branches (there's main, there's app, and so on), and the thing we want to pull in comes from the app branch. But it is a Git reference, so it's usually very generic: you could put anything in there, an actual commit, or probably a tag, or something. In our case, app is just the branch that we go and get from the Git project. And yeah, okay, good to know that it worked.

Hey Erwan, are you looking at the question by Han Yen? Yeah, about the public DNS not being resolvable: that is surprising to me. Those route names should work from anywhere. I haven't been able to test the PowerShell myself (maybe I'll need to spin up a Windows VM to test it out), but that should work anywhere.

Okay, and Florian, that's a really good question regarding the difference between Red Hat OpenShift Data Science and Open Data Hub. You can think of Open Data Hub as the beginning of it all. There was such great interest in Open Data Hub that our fellow engineers and managers took a look at it and said, huh, why don't we elevate that into a cloud service and add more independent software vendors to offer a lot more products? So Red Hat OpenShift Data Science really is just an evolution from ODH.

Han Yen, I'm looking at the URL that you pasted in there, and it doesn't look right, because usually they have your user number in them, and I'm looking at your environment.
You seem to be user8, so that was the URL I was expecting you to have. I'm not sure where that one came from.

All right, so we've come up on the end of our workshop. Again, we do thank you for your interest and your time in being with us today. There were a lot of great questions, so thank you so much for those; they were very insightful. Remember, if you do have more questions, you can reach out to your account executive or to us. And sorry, I have some anxious dogs, because they didn't get their walk this morning since I was up so early, at 5:30. So again, thank you so much, everybody, and we hope that you enjoy your journey using Red Hat OpenShift Data Science.

Thanks, everybody. Bye. Thank you.