So what do you think will take your jobs next? No, it won't take your jobs. Try generating code with it and you'll feel more secure about your job. There's a lot of what's called FUD, fear, uncertainty and doubt, whenever new technologies come out, and AI is one such technology. AI curricula have been there for everybody who studied in the 1960s, 70s, 80s and even right now. But every generation of computer science is built with different abstractions than the previous one. If you're on the older side of the crowd, like me, you might have studied everything from assembly programming onwards in order to learn how computers work, and people before me might have studied the electrical circuitry behind how computers work. If you look at the generation in colleges now, they might be learning prompt engineering in order to generate programs. But at the end of the day you're doing the same things: you're coding, you're building your applications, you're deploying your applications. The way in which you generate your code is going to be different, and the way in which these applications get deployed is also going to be totally different.

So let's understand the code behind these products. How many of you use open source in day-to-day project delivery and deployment? It would be fair to say that open source software has eaten the world: 99% of code bases have some footprint of open source software in them. It could be build tools, compilers, linkers, various other things. Nobody really studies compilers, linking and loading any more, but all of those things happen in the back end. To add to this complexity, new software architectures and hardware architectures keep coming out. It has come full circle: when I started my career I was working on SPARC machines, and we had to install Linux on those SPARC machines because the customer wanted more freedom from the underlying hardware. If you look at the current generation, people are again writing code to run on other architectures, Arm-based machines and so on. The kinds of problems are changing, and you're going to start deploying your applications to specialized scenarios, because new markets keep coming out. Space tech is one of those markets: application developers are writing applications which can be deployed onto satellites, and those satellites are powered by solar panels, which means low power. So it is very interesting to see what kind of programming language you choose in order to write applications which are going to run on low-powered devices. In the same way, if you take three or four of the modern cars coming out today, together you have a very capable GPU data center; that's the amount of silicon going into devices like cars, and the consumer side of things may be even bigger.
I just hope they don't take it too far, with the car predicting an accident is going to happen in three seconds and asking you to pay for the airbags to be activated. Hopefully it doesn't go to that extreme, but it could, because they simply can. So these are a few of the things.

What I wanted to say is that open source and AI technologies are going through something similar right now. Flash back two decades: when people wanted to run open source software within their own organizations, it was a very tough battle. Most of you have fought those battles, where you wanted to introduce certain open source components when there was a proprietary alternative out there. It could be a database, a programming language, or the architecture of the systems itself; for example, moving off architectures like SPARC and onto, say, Intel x86-based processors would have been a battle for most of you out here. These kinds of battles are fought day in and day out, but look at what open source has done to the industry now. At least in the cloud and infrastructure software world, open source has won its battle. Who is writing code in a proprietary programming language right now? Most of us are writing code in Python, Ruby, Go, Rust, and for satellites they're using OCaml, languages like these.

The same battles the open source community went through, the artificial intelligence community is going through now. People are asking: should we let generative AI into our workflows? Is it safe for me to put my customer data or my company's process data on a third-party hosted AI service? These are the kinds of doubts CIOs and even developers have. There's nothing wrong with generating code with AI tools, but AI is very good at repeating human mistakes: whatever mistakes humans made in the code it was trained on, it can reproduce those same mistakes quite accurately. You can go to ChatGPT and ask, "Write me JWT token handling in the Go programming language," and within the next five or ten seconds it's going to generate the code. But the problem with that kind of code is that it is not domain specific; it doesn't know the context in which you're writing the code. We do have a tool which we'll talk about later, called Ansible Lightspeed. You need to be able to train the tools for your context, whatever your language, domain and business problem is; that's where AI gets its strength. For all the repeated tasks you're doing, the more you train these models, the better they get. And the code being generated could be using an older library, because the model might have been trained on a library which is two or three years old, and it might only have that level of domain knowledge. The second thing is the coding style.
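To make that concrete, here is a hedged sketch, in Python rather than the Go of the JWT example above, of roughly the kind of code a generation tool might plausibly produce. It is illustrative only, not real tool output; the point is that it "works" while repeating common human mistakes a reviewer still has to catch, and it silently assumes whatever library version it was trained on.

```python
# Illustrative sketch (not real tool output): a JWT helper the way a code
# generator might write it, repeating common human mistakes.
import datetime

import jwt  # PyJWT; a tool trained on older code may assume an older API

SECRET = "changeme"  # mistake: secret hard-coded in source instead of injected


def issue_token(user_id: str) -> str:
    payload = {
        "sub": user_id,
        "iat": datetime.datetime.utcnow(),
        # mistake: no "exp" claim, so the token never expires
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")


def verify_token(token: str) -> dict:
    # mistake: no audience or issuer checks, so any token signed with this
    # shared secret is accepted for any purpose
    return jwt.decode(token, SECRET, algorithms=["HS256"])
```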
Major projects like Apache are clearly mandating that if you have generated code with code-generation tools, you point them to the provenance of that code, what it was trained on; they will not just accept the code the way it is. That said, certain tasks are genuinely getting better and better. Most developers find generative AI tools useful for writing tests, for test coverage, for generating test cases; it has been enabling a lot of developers. But, like I said, it is also going to create more jobs, because you need to sit and verify the code which is being generated. It's like the Jevons paradox. To give you an example from Bangalore: however much wider they make the roads, people will buy enough cars to fill them up. In the same way, however much infrastructure you have within your systems, you're going to run more and more containers and VMs to fill those systems up.

And here's the problem: a generation ago, when I used to write my code, I never used to think about cost, because most of the clients I worked with had their own private data centers. But now think about the code you're writing: all of it has a cost associated with it. I'm not only talking about cost in terms of the actual price. Yes, you're paying the actual price; the CPU consumption, the network consumption and the storage consumption of the code you write all have a cost depending on the cloud vendor, and it's very dynamic in nature. The second kind of cost is: why did you introduce Go into this project? Why did you introduce a particular framework into this project? You are consciously taking on some cost in order to get a benefit out of it. If you're taking on the cost of writing Go within your project, yes, you're getting the benefit of concurrency.
You're getting the benefit of multi-threading and writing a more efficient program. The flip side is that, at the end of the day, it's humans who write code, and people tend to move jobs or move off the project. How easy is it for you to find another good developer for your project at that point of time? Sticking with a programming language like Java or Python, where you can get a person productive at any point of time, is cheaper. So with every decision, and what's going to happen now is that a lot of work is going to be generated and the AI systems are going to take on a lot of the heavy lifting, you need to be the judge of what cost it is introducing into your system.

For some things it doesn't matter. If I'm generating a leave letter to send to my manager, fine, nothing would happen; asking your boss for leave is a very common thing and it is not competitive information. But confidential information would be a problem: if you need to generate something about your top customers and you're using a third-party service, one, it might not be legal to use these systems at all; two, it might be legal in your country but illegal in another country; and the models can also have their own biases, which can creep into the customer data. So when you're introducing systems like ChatGPT or Bard or any other AI system, many people are excited about these technologies, but you as developers or architects need to be sure why you're introducing them into your workflow, because the amount of code being generated can grow, and that again means the amount of infrastructure you're going to consume grows too.

Also, we developers are very good at inventing problems, or reinventing the wheel. It's a simple thing: developers like to reinvent the wheel. We say we want to be productive, and yes, we do, but we also get a kick out of reinventing the wheel. Just to write YAML properly for Kubernetes there are about twenty editors, because somebody didn't like the way the previous editor did it. But that's the power of open source: if you don't like the way something is implemented, you can go ahead and implement your own opinionated architecture. The same thing is happening with AI. You might not like the way a vendor is doing things, so open source gives you a position where you can build your own AI models. What we envision is that people are going to run sensitive workloads on their own private clouds, and for elasticity they're going to move certain workloads to the public cloud. So AI needs hybrid cloud. And right now there are chips being manufactured just for certain computational tasks.
You have certain kinds of GPUs and TPUs which are built only for certain kinds of tasks, and having your own private data center and a public cloud provider working in tandem is going to become imperative. This is one of the skills most developers will need to start building, and one of the skills the operations folks within your organization will start building too. You hear about DevSecOps now; next year it will be ML DevSecOps, because you want your data scientists, your developers and your operations people working together, securely delivering the models and securely running their code pipelines to delivery.

So what is Red Hat's opinionated view? It's what we did with Red Hat Enterprise Linux over the past three decades: curating the open source communities and the open source projects which would become popular. What do I mean by popular? Can somebody guess how many open source projects are out there? One million? Ten million? A hundred million? There are about a hundred million open source projects if you count GitHub and the various other publicly available source code repositories. Of those, only around 20,000 make it into a mainstream distribution, something like Fedora, Ubuntu or openSUSE, and of those 20,000, only about 10 percent make it into an enterprise, long-term-supported offering. Why do you think that happens? It's not because those projects are bad. Most open source projects die very fast because they don't get a community around them; they don't get contributors. Tomorrow any XYZ company can say, "I'm going to open source my project." Open sourcing a project is easy; even you can open source your code base on GitHub, get a proper OSI-approved license, and make it open source. But how do you build communities around these projects? What Red Hat does is continuously watch the vast number of open source projects out there. Most of you might have seen the CNCF landscape page; if you are a developer or an architect and you look at it, you will be overwhelmed by the number of choices you need to make. What Red Hat is good at is participating in those open source projects where there are active communities, and contributing. It's not just contributing code; it's also contributing to the governance of those projects, because many people are using them.
An active community means that a company like Red Hat can provide enterprise support for these open source projects for a longer period of time. Red Hat Enterprise Linux has been very successful in doing that for Linux. In another area, the number of applications being written keeps growing, and that problem is being solved by Kubernetes: the way you can deploy applications, the way you can scale out, and scale out to multiple clouds. We have done the same thing with the OpenShift Container Platform, and earlier this year OpenShift crossed more than one billion dollars in annual recurring revenue. That's how big the enterprise market is, and that's how much customers are willing to pay in order to have a true hybrid model for delivering applications.

So what we've been doing is getting all of the popular open source AI and ML projects working together along with your application platform, which is powered by Kubernetes, and delivering them on a solid platform based on the RHEL core, so it can run at multiple points: your private data centers, your edge nodes, or the public cloud. More and more open source projects in AI and ML are going to be adopted, and a few of these projects can become enterprise companies by themselves, because they are going to be domain specific. Open source is going to win the battle again in the AI and ML space. Right now a few companies have gotten a lot of market share, but they have got it only in one segment of AI; it's not the whole thing. This is where companies are going to use more and more open source projects: to protect their data, to protect their models, and to protect their choice and freedom. They're going to build specialized models on specialized hardware for solving their problems. Or companies who want to try out these technologies, say a large bank or a large system integrator, are going to leverage the public cloud because they want to test the waters. They should have a way in which their private and sensitive data stays on premises, but where they want to leverage artificial intelligence they can provision compute on the public cloud and experiment there. So watch out for a lot of good open source projects which are coming out, like LlamaIndex and various others, for running these workloads on your cloud and infrastructure.

So that's the thing about open source and artificial intelligence and machine learning: the true way in which large enterprises are going to consume it is going to be hybrid in the near future. What we'll show after this is a platform where we are integrating all of this, because enterprise customers have already invested in RHEL.
They have already invested in OpenShift. We are showing a way in which you can get your data engineers and data scientists working with your application developer teams, doing projects in tandem on the same infrastructure, because just because a new technology came out, companies won't ditch their present infrastructure and start over; they are going to leverage their existing investments, and with those existing investments, with a platform like OpenShift, you can have a true hybrid platform.

For example, one of our biggest customers, Delta: they moved from their own private data centers to the public cloud with Red Hat OpenShift on AWS. They were able to move 90% of their infrastructure because of OpenShift's ability to run on any cloud, while keeping the SLAs and SLOs in place, and they could serve their customers better once they were on the public cloud. So customers can go from private data centers to public clouds, and larger customers may also want to come back from public clouds to private data centers. That could be driven by governance within a particular country or by compliance laws they need to adhere to. For example, two or three years back Mastercard and Amex were given a mandate that they couldn't issue new credit cards in India because they were sending the data outside the country; until they could get their servers within that geographical locality, they couldn't. So what does that mean for developers? Do they have to change their code bases? No, the underlying platform needs to handle these kinds of changes. More and more, India is coming up with a personal data protection bill, and many countries are coming out with regulations where data, especially financial data and sensitive citizens' data, needs to stay within the geographical boundaries of that particular country or region; GDPR is one of those examples. All of these things work on OpenShift, with service mesh in place and your data pipelines in place, and we'll show you how easy it is to set up the data along with your applications and deliver them, so you always have your ML teams working with your application teams and your ops teams to keep delivering these projects. Projects like Developer Hub will also make it easy to onboard your data scientists and ML engineers alongside the application teams who have already been onboarded, and deliver your models and applications.

I'll stop here. Later in the afternoon we have a dedicated talk on Ansible Lightspeed. That project has been trained only on Ansible content; it's not a generic generative AI project. It is very specific to a particular domain: to check configuration diffs and to make it very easy to generate consistent Ansible templates. Domain-specific tools like these are going to be the future of AI tools in the cloud and infrastructure space. I'll stop here and hand over to my colleague, who will take you through a demo.

How many of you work with Jupyter notebooks? Quite many, right, okay, that's good. And how many of you are actually doing machine learning? I see more hands than last year, so I think by next year
everyone will actually be doing some sort of machine learning. I have Red Hat OpenShift Data Science here. Right now it's called Red Hat OpenShift Data Science; the new version is going to be called OpenShift AI. The moment I click on that, I get this OpenShift AI UI; just like you saw the Developer Hub UI, this is the OpenShift AI user interface you get when you log in.

You can see there are projects we can set up. For example, I can create a data science project, say "DevNation project 2023". This will go and set up a project, a blank project, in my OpenShift cluster; the interface is all Kubernetes-native. Once that is set up, I can create a workbench. In a workbench you say what kind of Jupyter notebook you want and what configuration and version you want: for example, whether you want PyTorch or TensorFlow, or maybe OpenVINO from Intel. All of those are built in, and we are also going to support TrustyAI as part of the workbench. So I create a workbench, "DevNation Workbench 2023", select which image I want (there are predefined images, for example PyTorch), select my container size, and so on. This is all for a data scientist who wants to work on a Kubernetes application platform; assume a data scientist is working here, and everything is built in for them, including the inner-loop pipeline where you pick up the data, do your training and testing, create the models, and push the models for inference. Everything is built in for you as a data scientist, so you don't have to go anywhere else.

Now, I have configured this environment with NVIDIA GPUs. Currently we support NVIDIA, and we also support Intel Habana Gaudi accelerators, so you can configure those kinds of servers as well and get them discovered by OpenShift Data Science, or OpenShift AI. Let's say I want to use one accelerator here. Then I can define what kind of storage I want to connect, whether I want to create a data connection or use an existing one, and then just create the workbench. This will go and create the workbench for me; it will start a Jupyter notebook if the resources are available. And that is what I see here: three different projects.

Now, this is an interesting one which I wanted to show, and which I'm more excited about than the racing game. This is where we'll see a generative-AI-based Jupyter notebook, which I'll walk through, and what we'll do is convert text to image. Let's say I have an image, and I just say whether that subject should be on a beach, in a mall, or in a house, and it generates that specific image with that subject in the scene. That's basically how a lot of deepfake images are made, which is one of the use cases you can see here, but it's mostly aligned with the generative AI concept.
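Before running the demo notebooks, here is a small, hedged sketch of what a first cell in such a workbench often looks like, assuming the PyTorch notebook image: it simply confirms that the accelerator requested for the workbench is actually visible to the notebook pod.

```python
# Minimal sketch (assumes the PyTorch workbench image): confirm the notebook
# pod actually received the GPU that was requested for the workbench.
import torch

if torch.cuda.is_available():
    print("GPUs visible:", torch.cuda.device_count())
    print("Device 0:", torch.cuda.get_device_name(0))
else:
    print("No GPU visible; the workbench is running on CPU only")
```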
So this is my text-to-image Demo here if you see I have everything in place here, right? I have this workbench which is created I have my Jupyter Hub notebook here running. It shows me what packages. It's running here, right? This Lira is a very interesting component in this What you can do is here whatever Python programs or you have right or our codes you have you can actually align and Create boxes here like I did here So you can actually set up a pipeline like this with all your codes Okay in in pipeline and just run this pipeline This is most likely you are setting up a pipeline for inner loop, right? And then this will go and run in OpenShift environment using tecton pipeline And then you will see the output here again in your OpenShift area. So it will actually run the whole pipeline for you right from Getting data doing feature extraction doing training. Okay, then doing model generation even hosting models as part of model serving right everything will be built in part of your Pipeline which you don't have to do anything as a data scientist It is already done for you as part of and this actually the good part about this is Lira will convert this into tecton tasks And automatically push it into OpenShift data science or OpenShift platform Okay, so even the data science you don't have to worry about creating any pipelines or any tasks Just push your code here connected provide the inputs which it needs as part of that for example This particular task Needs some of these parameters which are listed here, right? So just provide those inputs and I think you are good to go as a pipeline, right? So I was here. So this is my workload. I have my cluster storage configured I have my data connections here, which are all object storage. This is where Text to image is where all the images are loaded here and I have my pipeline artifact object storage We can use any type of object story like OpenShift data science OpenShift data foundation for example Which is again a container storage. So any container storage can be used here To connect and store your pipeline artifacts So this is my pipeline which I have created which you just saw right? So the moment I run this pipeline it will go and create a tecton Pipeline for me. So I had already created one here, which got completed And this is where I have my models. So this particular program I created four models yesterday night It took me like 30 minutes on a 10 GPU single single GPU, right? So that's that's a that's the time it took So as Ramke mentioned, right? Genetic UI is there but it comes with a lot of cost, right? If you actually want to do your own model creations like The LLMs, right? The Lama 2 and all they are trained with like 7 billion parameters And I think 12 billion parameters, right? It's it's not a small thing to do actually it takes a lot of Infrastructure and cost to do that actually So what a lot of data science is what they do is you actually take that whole Lama Model you can just strip off the initial layer and Push in your specific content and you can build your model Okay, so you don't have to go through the whole model training and model creation You can just peel off the upper layer and try to embed your specific Whatever is your specific domain content, right? You can actually Add into that and create your own model out of that. 
A lot of data scientists nowadays are doing exactly that. So here: there are four models, and I need all four of them to generate text-to-image. I can see which pipelines are set up; if I select this one, I see the three stages in my pipeline: experimentation, then fine-tuning, then remote inference, and these are my runs. This was a triggered run which completed successfully; if I select it, there was only a single task, and the run output shows me the details. If I go to my pipelines, this is my text-to-image pipeline which ran successfully, and that's what you see in the OpenShift application platform.

In the notebook I run nvidia-smi; that's the command to query whether you are actually using GPUs or not. Now, we have a specific dog which is wearing a red hat. We want to make sure that this exact dog is the one that gets rendered against whatever background I ask for. Let's call the dog Red Hat Teddy. So Red Hat Teddy should be on a beach: if I run this, the output should be an image of Red Hat Teddy sitting on a beach. That's the whole point of the generative AI demo. When I do the experimentation initially, I get dog faces, very cute ones, but I don't get this Red Hat Teddy; I see various different types of dogs. What I'm trying to say is: just like with Llama 2, this particular algorithm we're using from Hugging Face (the whole use case you're seeing was created by one of our colleagues from the OpenShift AI team, Chris) uses some of the ready-made Hugging Face models, and that's why we're not seeing our Red Hat Teddy: it was not customized to my images.

So I need to fine-tune. I need Red Hat Teddy images, I need to make the model learn them, and build a model out of that. That's what happens in this fine-tuning stage: I train the model, using the accelerators, with DreamBooth, which is again Python code from Hugging Face that we use here. The training took about 30 minutes for these specific images; I did 800 epochs, and around 200 images were trained, so the model understands that this is Red Hat Teddy and can merge him with the different backgrounds it needs to produce. Once that is finished, I export the model into ONNX format, which is a runtime format we support on OpenShift AI. That's the format I convert all my models into, and it gets stored in my storage, which is AWS object storage. So this is where I move it and store it into S3.
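Here is a hedged sketch of those last two steps, exporting a trained PyTorch model to ONNX and pushing the artifact to S3-compatible object storage. The model, endpoint, bucket and key names below are made up for illustration and are not the demo's actual values.

```python
# Hedged sketch of the export/upload step: convert a trained PyTorch model to
# ONNX and push it to S3-compatible object storage. Bucket, key, endpoint and
# input shape are illustrative placeholders, not the demo's real values.
import boto3
import torch
from torchvision import models

model = models.resnet18()                     # stand-in for the fine-tuned model
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)     # example input shape for tracing
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=17)

# Upload the ONNX artifact to the object storage your data connection points at.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.com",    # assumption: your S3 endpoint
    aws_access_key_id="REPLACE_ME",
    aws_secret_access_key="REPLACE_ME",
)
s3.upload_file("model.onnx", "models-bucket", "text-to-image/model.onnx")
```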
So you see all this, and then I upload it into S3 storage. Once it's there, I need to tell OpenShift Data Science to do model serving using the four models we generated, and then, using those four models, we can go to the third Jupyter notebook, which actually does the inference: take the image, understand the text, and generate a new image based on that combination of inputs.

This is where I create a serving runtime. I use a custom model server, Triton; you can see the Triton image here. I create a Triton server, which is a runtime that runs the ONNX models, and then I push these four models from my storage. (Is it visible at the back in the middle? Yes? No? Okay, cool, I just wanted to check whether you're paying attention. Good, thank you.) So, text-to-image: this is the path I'm using, this is where my models are stored in the S3 bucket, and there are four models, each doing a different job, combined to produce one outcome, and this is my inference.

Currently you can see this looks like Teddy, right? Teddy is in a mall right now, because that's how I trained it. Let's say I want to put him on a beach. The moment I run this, and this is the nice thing about a Jupyter notebook, you can run it stepwise, I get the outcome. It will run for a minute or so; it's using the GPU in the back end to create the image, and once it's done you should be able to see whether Teddy is actually on the beach or not. That's the image I expect to be generated. By the way, how many of you have used Open Data Hub? Anyone? No? Then you should definitely go and check out Open Data Hub and explore it, because it has some nice, cool tools for you to use. And here our Teddy is on the beach. I could also say Teddy is in my house, and it would put an apartment behind him. That's the generative AI we're talking about here.

Now, let's say that instead of Teddy I want some other image to be trained and a new model to be generated. Every time, I'd need to keep re-running that fine-tuning and pointing it to my storage. Instead, I can parameterize it and build a pipeline. So I have this pipeline here, which I created from all three notebooks; you can use any arrangement depending on how you generate your models, how you get your data, do your feature extraction, fine-tuning and training, and so on, and you can add steps in between. I don't have the model-serving step in it yet, but that is a task I can add as well, so that the models are served for inference and you can then use the third, remote-inference notebook to get the inference like we did. Now let's say I just run this and call it "text to image". That submitted a Tekton pipeline into my OpenShift Data Science; here the pipeline gets created, and these are the tasks I am running.
This is experimentation, then the training, and then finally the model-serving part. So this is how you convert your pipelines into real Tekton pipelines which actually get executed, and as a data scientist you don't have to worry about configuring everything; it's right there in your editor, you can edit and create the pipeline, execute it, and you have it running in OpenShift Pipelines. It leverages, and is tightly integrated with, the OpenShift application platform. This is where you see the run output and all the runs.

Now, I mentioned model serving: these are the four models being served here, and they are all internal services; we are using gRPC. You can also host this as external model serving if you want to offer model serving as a service; everything can be reached via API, using RESTful APIs or, as I do, gRPC. gRPC, by the way, is very fast compared to the REST API, and it works well here because this is all internal, in-house, within the same environment. Otherwise you can use REST URLs like we have here, or connect externally through TLS for added security, host your inferences, and just make a RESTful API call, and it will call the model and give you an output. That's what we're doing in this third Jupyter notebook: we're calling the model serving on its port, with the four models we have hosted. If I go to OpenShift, in my workloads I have this model serving, and it has the gRPC port 8033 mapped; the notebook calls that, and the inference runs within this particular model server. You can create your own model servers, and depending on the runtime your specific models need, you can have custom model serving for your specific requirements.
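To make that remote-inference call concrete, here is a hedged sketch of what a gRPC call from the inference notebook to a Triton-backed model server might look like. The service host, model name and tensor names below are invented for illustration (the actual demo combines four models rather than one), and it assumes the tritonclient package is installed.

```python
# Hedged sketch: call a Triton-backed model server over gRPC from a notebook.
# Host, model name and tensor names are illustrative assumptions, not the
# demo's real values; the real demo chains four models together.
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="model-server.example.svc:8033")

# Send the text prompt as a BYTES tensor of shape [1].
prompt = np.array(["a photo of the dog on a beach".encode("utf-8")], dtype=np.object_)
text_input = grpcclient.InferInput("PROMPT", prompt.shape, "BYTES")
text_input.set_data_from_numpy(prompt)

requested = grpcclient.InferRequestedOutput("GENERATED_IMAGE")
result = client.infer(model_name="text_to_image", inputs=[text_input], outputs=[requested])

image_array = result.as_numpy("GENERATED_IMAGE")   # e.g. HxWxC pixel data
print("Generated image tensor shape:", image_array.shape)
```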
So, you like the Teddy on the beach? Sorry, I can't hear you. Oh, yes, of course: that's how you can train your own image and say you are on the Eiffel Tower, and it'll show you exactly that. That's a good use case. Okay, I think we have some time to take questions on the OpenShift AI UI. Sorry, can we have the mic please? The question is: you can have your own notebooks and your own serving runtimes, but is there a free tier available, and how is it different from Google Colab? We do have a trial. Once you go to the sandbox, which was shown in the morning, you can go to console.redhat.com, sorry, developers.redhat.com. In Ashwin's talk you might have seen how to get access to the developer sandbox; once you're there, you can start your sandbox for free, and it'll take you to the customer portal. Will you get a GPU option in the free tier? I'd need to check, because they keep changing.

Here there are different scenarios; you can bring your own scenario or try the ones that are out here. For basic learning these scenarios are good enough: you launch OpenShift Data Science, you create a PyTorch or TensorFlow model, and you do all of these activities hands-on on the OpenShift Data Science sandbox. But the GPUs are not really powerful enough that you can mine bitcoins or something; for training models you can use them, because I think you get up to a hundred GPUs you can run in parallel. And that's something I keep saying: you need to reduce the crypto, sorry, the cognitive load, and at least secure the passwords for your servers and infrastructure, because one of the most common things, especially with free tiers, is people using the instances to mine cryptocurrency. So always secure your systems, and even your own laptops.

If you're thinking about your data scientists, say you have an OpenShift or Kubernetes platform and you want your data scientists to be onboarded onto that platform the way your developers are, this is the best way to start: either use this or use Open Data Hub. Then they are right on your OpenShift platform and they won't even realize it. The thing is, a data scientist might not know what a CI/CD pipeline is or how all of that is managed. They just work with their Jupyter notebooks, and the application pipeline is created in the back end and securely deployed to the target environments. In the same way, the application developers might not be sure what toolsets the data scientists in their organization are working with, but they need to use the results of their output in order for their applications to be served. So both sides work in tandem. For example, application developers understand what a git workflow is, but a data scientist might not understand a git workflow or how to do the usual git operations, because their tools are totally different; they work with Jupyter notebooks or some other workspace. That way, onboarding ML engineers and data engineers is very easy here, because they are not burdened with the cognitive load of learning how an application pipeline is deployed; they just work on their own models.
If they know it, great, but it's not required. With Developer Hub it becomes easy for developers to be onboarded, and the same concept applies here as well. And with Elyra there are a lot of enhancements going on, so that for the whole inner-loop, data-science or ML-specific pipeline, the data scientists don't have to worry; they just drag and drop their Python code, or whoever is managing the whole thing, for example a data scientist, can just collate and pull in the Python code from the data engineers, the machine learning engineers and the application developers, and then build their own pipeline to be deployed into production. And, by the way, the product name changed as of yesterday night: it's now Red Hat OpenShift AI, and OpenShift Data Science is part of OpenShift AI. Basically, it takes care of everything from your code to your deployments in a single platform. Sorry, I can't hear you. Yes, it essentially is code to deployment, while bringing your ML engineers onto the same platform. Thank you very much.

Like I said, if you have ML engineers within your organization, then, as with Developer Hub, you might have to define what tools and other things they need along with your application pipeline, so your template might be a bit bigger, but you can onboard both teams together onto the product. What happens in many organizations, especially enterprises, is that most of the developers have already been onboarded onto an OpenShift kind of ecosystem; they're already in their Kubernetes pipeline, but getting the ML engineers onto that pipeline is very difficult. By using the Developer Hub template along with the underlying infrastructure, which provides everything, it's easy to onboard even the ML engineers onto the projects. Eventually we could have a Jupyter link there too: whatever you've seen in the OpenShift application platform through the OpenShift AI route, the Jupyter workbench that gets created, could eventually be part of Developer Hub as well; that's something Moet was mentioning they are thinking of. And developers.redhat.com is not just OpenShift; most of our products are there, so you can find the trials, or the sandbox versions, and use them. Most of our licenses are very portable: for example, if you're provisioning your OpenShift infrastructure on AWS, in another zone, or on Azure, you can take your models from here and just deploy them out there. You can try things out in the sandbox and then export them onto that cloud after you've done your experiments. Most of the sandboxes are available for 30 days, but we have a Slack channel if you want to extend that, or if you already have your compute from one of the cloud providers you can just export onto that cloud as well. Can we have one more question, if there are any further questions? Okay then; also, maybe in the evening we can cover more. We also have a Slack channel, so if you want certain kinds of sandboxes to train or onboard your teams, you can come and request them there, and depending on our cycles we can create the scenarios or the environments. There is one last track
that I am doing here. That one is more about GitOps and the role it plays, but I'm actually going to touch upon MLOps through GitOps, so if you are interested in MLOps, please join that talk as well. Okay, I think that's it. Thank you for joining. Thank you.