Yeah, hello everyone. How was your day zero so far? I hope everything went well. I am Mario, I am an architect at Kubermatic, and I know nothing about AI. The reason why I know nothing about AI is that I don't have to, but we want to run AI workloads somewhere and utilize them. So, I work for Kubermatic; we build a multi-cluster, multi-cloud Kubernetes management platform. I am also part of SIG ContribEx Comms and SIG K8s Infra for the Kubernetes project, I'm acting as an Ambassador for Arm, and I work as a GDE for Google. I'm from Bavaria in Germany. Usually I have my little Lederhosen, but honestly it was too cold in Paris, so no Lederhosen this time.

When we talk about AI and AI workloads, it's always hard to decide: where do I run this? Where do I actually run my models? Where do I train my models? Where do I have a platform that matches my requirements? Kubernetes is an option to run AI and to train our models, but it's not a requirement. However, it's a pretty good fit. Why is Kubernetes a pretty good fit? Who of you is using Kubernetes in production? That's good. So what are the benefits of Kubernetes? It has self-healing aspects, it's scalable, and you can run it everywhere, literally everywhere. I can potentially run it on my phone now, which is quite nice, but probably most people won't do that. So we have a multitude of options.

And I bet all of you have problems running Kubernetes clusters. Or is everyone running Kubernetes smoothly, without any issues, and everything is fine all the time? If yes, please show your hand. Oh, one. That's nice, one person has no problems.
Running AI workloads on Kubernetes has the benefit that we have scalability, we can stretch it out, we can distribute everything. But we can also utilize AI to slim down our processes and to fix clusters. For this there's an open source tool, a cloud native sandbox project since last year, called K8sGPT. Who has used K8sGPT? That's not so many people. So K8sGPT is, as I said, an open source project, and it helps you triage Kubernetes issues with the help of AI. You can use it with about ten different backends: you can use ChatGPT, you can use Gemini, you can use Vertex AI, you can use AWS, you can use all of the different vendors. It also has an in-cluster operator for consistent monitoring. So you can run it in your shell, or you run it in every cluster and get its logs sent to all of the different environments.

The problem is: I'm from Germany. We don't trust providers, we don't trust anyone, we don't trust our own mother. So the problem is, we don't want the things that run in our cluster being sent to a public cloud provider or being sent to OpenAI; we just don't want our data to go out. So we create a magic triangle inside of our environment, which means that we take what we want and run it inside of our own clusters. For this we utilize three different things. We utilize our own knowledge about Kubernetes, because everyone has a Slack chat with "hey, how do I debug this? How do I debug this issue?"
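As a minimal sketch of what the K8sGPT workflow looks like from the shell (flag names taken from recent k8sgpt releases; the chosen backend and model here are just placeholders for whichever of the ~10 providers you pick):

```shell
# Install the k8sgpt CLI (Homebrew shown; binaries are also released on GitHub)
brew tap k8sgpt-ai/k8sgpt
brew install k8sgpt

# Register an AI backend -- OpenAI as one example of the supported providers
k8sgpt auth add --backend openai --model gpt-3.5-turbo

# Triage issues in the current cluster and ask the backend to explain them
k8sgpt analyze --explain
```

The in-cluster operator does the same analysis continuously instead of on demand, which is what you want for the "consistent monitoring" case.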
And there's also the potential to say, "hey, let's scrape Kubernetes questions from Stack Overflow" and see what the common questions are. Then we use LocalAI, which is also an open source project, and I think you're now following where I want to go with this. We don't have to pay for AI services, because we can do this on our own, in our own environment, and own it. That's the most important part. And we can utilize a tool called Kubeflow. Who has used Kubeflow before? One person... quite some persons. So we can utilize Kubeflow as the environment where we run this. Kubeflow is an ML toolkit. It is there for data preparation, you can train your models, you can fine-tune your models, you have serving where you can run your model, and you also have the service management option to run the stuff. The thing is, we will only use a really, really small part of it, because for this setup we will mostly only use Kubeflow Pipelines. Why, you would ask? K8sGPT doesn't support KServe at the moment for serving this. The good thing is, Kubeflow can run everywhere: you can run it on your laptop, you can run it in your home lab, you can run it in the public cloud, there are providers to run it on the big public cloud providers, and it's a tool set that you can just install out of the box.

What is LocalAI? LocalAI is also an open source project that provides you with a REST API that is compatible with the OpenAI API specification. So we can use LocalAI to run our LLMs inside of our clusters. LocalAI also has a Helm chart, so we can deploy it into our clusters with that Helm chart. Then we have the benefits of running it inside of Kubernetes: load distribution, running multiple models at once. We can use it to generate images.
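A sketch of what the LocalAI deployment via Helm looks like (repository URL, chart name, and namespace are assumptions based on the project's published chart; check the LocalAI docs for the current values):

```shell
# Add the LocalAI Helm repository and deploy the chart into its own namespace
helm repo add go-skynet https://go-skynet.github.io/helm-charts/
helm repo update
helm install local-ai go-skynet/local-ai --namespace local-ai --create-namespace

# The service exposes an OpenAI-compatible REST API, e.g. listing loaded models:
curl http://local-ai.local-ai.svc.cluster.local:8080/v1/models
```

Because the API is OpenAI-compatible, anything that can talk to OpenAI, K8sGPT included, can be pointed at this in-cluster endpoint instead.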
We can use it to generate audio. And the best thing is, it doesn't require a GPU. We don't have to have a GPU to run our models inside of our cluster, so we can utilize our normal clusters there.

How does this look in the end, when I said we are taking our own fine-tuned model and putting it into our own clusters to feed into K8sGPT? What do we still need to do? This is the hard part, and because this is only a lightning talk, we can't go into too much detail. We first need to prepare a data set. So you still go through the normal workflow of what you would do to fine-tune your own model, and this part alone is hilariously hard. This takes time, and this is something that you always need to remember when you look at this whole setup: every step of this, although it's running in your own infrastructure, takes time. It takes working time, it takes compute time, it takes money from your company to run this. So even though it is yours, think twice if you want to have the whole stack on your own, or if you just want to use other services for this. But if you want to be secure, and if you have requirements (there are certain fields in the industry that have requirements like "we cannot have any of our data get outside of our environment"), then you need to go through the painful process and do everything yourself.

Then what we do is we use PyTorch, and with PyTorch we write a fine-tuning script. So we take an existing LLM, and there are plenty, plenty of LLMs out there.
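The fine-tune-then-package step inside the pipeline's container could be sketched like this. The script names are purely illustrative: `finetune.py` stands for your own PyTorch fine-tuning script and data set paths, while `convert_hf_to_gguf.py` is the conversion script that ships with the llama.cpp project (its name has changed across versions, so verify against your checkout):

```shell
# Step 1: run the PyTorch fine-tuning script against the prepared data set
# (base model, data set, and output paths are placeholders)
python finetune.py --base-model ./base-llm --dataset ./k8s-qa.jsonl --out ./tuned

# Step 2: convert the fine-tuned weights into a GGUF file that LocalAI can load
python convert_hf_to_gguf.py ./tuned --outfile tuned-model.gguf
```

Each step becomes one container step in a Kubeflow pipeline, with the GGUF file as the artifact handed to LocalAI at the end.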
The good thing is, LocalAI supports a multitude of LLMs that you can use and utilize. There's Gemma, so you can use Gemma from Google, you can use Open Llama, and so on and so on. Then we have our PyTorch job that does the fine-tuning process. So you put everything into a Docker container, load the data set with the script, then run it inside of a Kubeflow pipeline, and put it out as a GGUF file. Then you give it to LocalAI, and then the easiest part is just a connector. You tell K8sGPT, "please take LocalAI as the endpoint for the questions that you run inside of your cluster," and all of the questions go there. Then you can say "k8sgpt analyze --explain", and it will analyze your whole cluster with your own locally running model.

But as I said, this is the beginning; this is not the end of where we are. There is a lot of hassle. The main problem that we currently have is that Kubeflow is horrible to set up, and I mean really horrible. We are waiting for a solution that is like "hey, I want a Helm chart, it runs in my cluster, and everything is fine." This is not the case. There are some vendors that package Kubeflow; I know that Canonical has an easy out-of-the-box solution, and the big cloud providers have out-of-the-box solutions. But if you run it on-prem, in your own data center, it's not fun. You still need to write the PyTorch fine-tuning scripts; there's no plug and play. And data preparation takes time. These are all the things that still hold us back. You need to constantly fine-tune.

The good thing is, everything is moving so fast. So basically what I showed you today won't be... I don't know.
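Wiring K8sGPT to the in-cluster LocalAI endpoint might look like this (the `localai` backend and these flags exist in recent k8sgpt releases; the model name and service URL are assumptions matching the earlier deployment sketch):

```shell
# Point K8sGPT at the in-cluster LocalAI endpoint instead of a public provider
k8sgpt auth add --backend localai \
  --model tuned-model \
  --baseurl http://local-ai.local-ai.svc.cluster.local:8080/v1

# Analysis now runs entirely against the locally hosted, fine-tuned model;
# no cluster data leaves your environment
k8sgpt analyze --explain --backend localai
```

This is the whole "connector" step: because LocalAI speaks the OpenAI API, K8sGPT treats it like any other backend.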
It's the same as the last slide: everything that I showed you today is probably completely obsolete in half a year. So when we meet in Salt Lake City, you'll say, "why did you talk about this? It's way easier now." But I think tuning, no matter where you run it, tuning a model with your own data, is the next logical step to utilize AI. Because the models are fine, but what we want is our own fine-tuned models with our data, because that's where we have the knowledge that we can multiply. Thank you very much.