It's key, you know, the future of AI is open. So I'm Marcus, I'm from Germany, part of Red Hat, doing a lot of technical marketing. I talk a lot on stage and I introduce a lot of people in various contexts, but today is kind of a big fanboy moment, because I have the pleasure of welcoming Sherard and Sid from Red Hat and Intel. They're going to walk us through the latest and greatest of what Red Hat and Intel do together on this exciting topic of artificial intelligence. Sherard, take it away.

Thank you, Marcus. I will say, even before we get started, I had fun putting together this slide deck with Sid. The topic is very interesting and very relevant to where we are today. There's a lot going on in the AI industry, and the work we'll talk about really highlights how this isn't new for Red Hat and Intel. We've been here before, so it's a great opportunity for us to collaborate, which we have been doing. I'm very excited to talk about what we've been doing together, but also some work we've been doing independently that supports each other.

Really quickly, the agenda. I'll first give you introductions for myself and Sid, then we'll talk through the open source revolution, how open source is critically important to LLMs and the future of LLMs, what we're doing together with LLMs, both Red Hat and Intel, and what Red Hat is doing with large language models and generative AI. And of course we'll round this out with what Intel is doing in this space.

Just a little bit of background. I'm Sherard Griffin. I've been at Red Hat for about six and a half years now, focused primarily on data and AI and the ability for us to build products that help customers infuse their applications with AI. That goes for OpenShift AI, where I head up engineering, as well as our open source community, Open Data Hub. More importantly, I work with great partners like Intel to round out our ecosystem and ensure that our customers have the best choice of both hardware and software for their AI initiatives.

I have with me Sid Kolkani from Intel. He has a phenomenal background: he's been delivering AI and data solutions in three different verticals over the past two decades, anything from retail and finance to ad tech. Sid is currently the VP and GM of the data and AI platform at Intel, responsible for delivering AI vertical apps in several verticals as well as optimizing Spark installations. I'm very glad to have him here, and I'm going to kick it over to Sid to get us started on open source.

Good morning and good evening, everyone, and thank you, Sherard, for that great introduction. Today I'm going to talk to you about the open source revolution. I think you're already familiar with the idea that open source in itself is a revolution; it started somewhere around the 90s and took the world by storm. The reason it is a revolution is that it promotes collaboration. It provides transparency and security, shared problems are solved much faster, and teams and communities collaborating and working together creates standardization. There are over a million open source projects now. However, with the advent of LLMs, open source is even more relevant, because of the transparency of both the model and the data, and the whole sensitivity around transparency and security issues. Next slide, please.
So who else is actually saying this? If you look at this interesting memo from Luke Sernau, which recently leaked out of Google, he says that "we have no moat, and neither does OpenAI." Open source is going to outpace any closed AI systems and is already doing so in several ways, this memo argues. As you're fully aware, LLMs took the whole world by storm around November of 2022. Since then you have seen significant innovations, with multiple companies making announcements every month or two. This very week, Meta and IBM are starting an AI Alliance for leading technology developers, researchers, and adopters collaborating to advance open, safe, responsible AI. This includes about 50 members, with heavy hitters such as AMD, Dell, Hugging Face, IBM, and of course Intel, followed by Oracle, Fast.ai, Stability AI, and NASA, to mention a few. More recently, you have also seen the European legislation that ultimately included restrictions for foundation models but gave broad exemptions to open source models, which are developed using code that's freely available for developers to alter for their own products and tools. This move could benefit open source AI companies, particularly in Europe. Let's go to the next slide.

Let me walk you through a little bit of my experience as an engineering manager in building products and product strategies. Previously, before the advent of LLMs, you would typically start a project small, putting it together with available open source components to keep the cost down, run certain POCs, and then make a buy-versus-build decision. If you are creating an application that captures the nuances of your business and will give you a competitive edge, you probably decide to build it in-house; but if there is a solution available on the market, especially as a service, you would normally choose that. That is the process I would go through when deciding buy versus build.

Today, with the advent of LLMs, it feels like that has turned 180 degrees. Typically, people now start by experimenting with black-box LLM services to keep the cost very low. After doing some experimentation, using openly available data as well as some fabricated data, you start thinking about your own data and making your application more and more relevant by leveraging your enterprise data. Then the real data questions come in: Can I actually upload my data to a service? Am I risking my data? Is the service transparent? Can I customize the model or service? That's when you start bringing a model in-house and fine-tuning it or doing RAG on it, so that it becomes specific to your business and delivers you that competitive edge. Next slide, please. And with that, I hand it over to Sherard.

What Sid mentioned is something we hear from many customers facing that challenge: I want to get started quickly with my AI initiative; how do I do so? A lot of times that low barrier to entry is, oh, well, let me just call this service and send it the data. Even today in a meeting, the question was brought up: can I just use this service to infuse AI into our processes and develop our code faster?
Well, what we're trying to highlight is the challenge, and the difference, the delta, that you see now with LLMs, and how companies can pivot to a more sustainable infrastructure so they don't risk their data or run into the security issues you may have when dealing with a lot of these pop-up AI services, where you don't quite know what they're doing with your data. Are they using your data to retrain? Are they using your data for other purposes? If you need to have that infrastructure in-house, in a way that is approved by InfoSec and approved by IT, I want to talk you through some of those requirements and what that looks like.

Most companies first start playing around with LLMs and ChatGPT, and now they're getting to the stage where they're done playing around with those concepts and want to build products and services for themselves that take it to the next level. This is where they have to transition to seriously investing in their AI projects. You may have had something running on your laptop, but now how do you roll that out into production at scale, where the infrastructure is critical to success, especially if you want to protect your data and run it in-house?

When you're doing that, there are a lot of stakeholders who have to take part in the exercise. You may have one or two people covering a myriad of these personas, but regardless, we typically see this as the cycle that has to happen to get an idea from that early stage to where it's been invested in, infused, and is ready to roll out. This goes anywhere from your business leadership, who provide the right criteria and requirements for you to design your systems around, to the data engineer, the data scientist, and the ML engineer working really closely together. The data scientist may develop the model, but the ML engineer is responsible for ushering that model into production; think of it almost like an SRE type of role, providing capabilities like roll forward and roll back and ensuring that metrics are being collected.

Even more critical is the application developer. If you roll a model out into production but you don't have an application that rounds out that experience, then for all intents and purposes it may not be all that useful. Imagine ChatGPT without the application that went with it; that's the criticality of the application developer's role. They need to understand the models and have the right ability to work with them and infuse their applications with AI. And across this, every single team member plays a critical role in the complex problem of getting things to production.

When you start to look at this holistic view, it can be very expensive if you try to replicate some of the cloud services, and Sid will talk about the cost of a lot of this down the road. What I want to highlight here is that if you want to run these projects and go from ideation to a seriously invested AI project, you have to think about the cost of CPUs, GPUs, and memory, and even what it could represent if you're using cloud services, EC2 instances, Google Cloud; all of that has an inherent cost. These are all things you have to start thinking about when you're delivering solutions for this.
Now, this is where I'm excited, because the work Intel and Red Hat have done together helps you optimize both the execution of the models and the cost, to ensure that you're building on a system that is scalable and that you can support long-term. One key part of this is that the industry has moved away from, at least when you talk about how you can apply this in your own local environments, a lot of these large language models with billions and billions of parameters, some of them 100 billion parameters, very large models, gigabytes in size. They've started adopting more industry-specific, smaller foundation models that can be tuned and used for inferencing with a lot less hardware. You combine that with some of the innovation coming out of Intel and Red Hat, and you have a really powerful ecosystem.

On the left side of this, I want to talk a little bit about what we've done on the training and validation side. We have great technology; everything I'm going to talk through is OpenShift AI, and it all runs on OpenShift. On the left side you see a lot of technologies geared towards distributed processing of the data, whether you need to fine-tune a model using all of your infrastructure, optimize the usage of that infrastructure, or simply train a model; how can you do that in a way that gives you scalability and elasticity, even in your own data centers? Then we have technologies like MCAD, which lets you optimize the queuing as well as the priority order of the training jobs that are running. If you have 10, 15, 20 data scientists all needing to train on the data, you need ways to optimize that. We also have InstaScale, which allows dynamic scaling of the clusters up and down. If you have a job that's more resource-intensive, you can pass it the information it needs to scale up and down, and it will save you resources; as the job finishes, it scales that infrastructure back down for you and instantly saves you cost.

Now on the right side, which is more the tuning and inferencing side, we've done some great work with Intel. They have the AI Analytics Toolkit as well as OpenVINO. Those are fantastic frameworks for doing more with limited hardware. That goes for anything from the edge to simply not having GPUs: when you need to run on CPU and still get really good performance, those are great frameworks for that. For example, say you don't want to purchase GPUs for some of your inferencing, and you want to do things like quantization or sparsity; those frameworks allow you to do that and get GPU-like performance on much more commoditized hardware. Couple this with the smaller, more specific foundation models, and it can be very powerful.

Some of the other fantastic things in this space: we've also been working with Intel on the hardware side, as well as at the kernel level, to optimize performance. If you do need GPUs, you can have that with what they're doing there. There's also fantastic work with Xeon; each release is adding more cores to the CPU, which is phenomenal if you want to run those workloads on CPU. And then of course there's Habana Gaudi 2, where we work fantastically with Intel, and all of this is done in open source.
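To make that inference-side point a bit more concrete, here is a minimal sketch of what running a model on CPU with OpenVINO can look like. This is illustrative only: the model path and input shape are placeholders, and further optimizations such as int8 quantization (for example via NNCF's nncf.quantize) are omitted.

```python
# Minimal sketch: CPU inference with OpenVINO.
# "model.xml" and the input shape are placeholders, not from the talk.
import numpy as np
import openvino as ov

core = ov.Core()

# Load a model that has already been converted to OpenVINO IR format.
model = core.read_model("model.xml")

# Compile for CPU; OpenVINO applies CPU-specific graph optimizations here.
compiled = core.compile_model(model, device_name="CPU")

# Run inference on a dummy input matching the model's (assumed) expected shape.
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled(dummy_input)[compiled.output(0)]
print(result.shape)
```

From there, post-training quantization or sparsity-aware optimization is what gets you toward the "GPU-like performance on commodity hardware" described above.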
All of these technologies are coming together to help application developers and MLOps engineers really optimize, on simple hardware, how they can adopt LLMs and foundation models into their ecosystem. Now, I talked briefly about OpenShift AI, and I want to make sure I hone in on this point. We are focused on the AI industry along with Intel, but this is all being done out in the open. We have a lot of upstream projects, CodeFlare, Ray, Kubeflow, KServe, all feeding into our upstream community project, Open Data Hub, and downstream of that is our product, OpenShift AI. When you think about it, there are very few companies like Red Hat that have not only learned about open source but survived the myriad of changes that have happened and thrived in open source. With Intel and Red Hat, open source is in our DNA; it's what we've done. Intel has decades of experience in this space, going all the way back to being a key contributor to Linux. We've both grown from open source, we've lived in open source communities, and we're pushing the envelope even more in terms of how we can bring more innovation to the AI space using true open source patterns.

When we talk about OpenShift AI, what we're really trying to do is provide a platform that unifies the data scientists and the application developers. We give customers an enterprise-grade platform that allows them to develop, train, serve, monitor, and manage the life cycle of their AI and ML models. And we're using OpenShift as that environment because OpenShift is already the platform that application developers know and love. They work in Kubernetes; it allows them to scale up their applications. By giving application developers an AI platform in the same ecosystem, with the same patterns and the same ideas for managing your applications, we're applying all of that to machine learning. What we're doing with OpenShift AI is bridging the gap between MLOps, operationalizing your models and the challenges there, and what we're doing on the DevOps side of things, providing one consistent platform for both.

When you start to look at how you piece this together: I mentioned the application developer and the applications they build, and I mentioned the data scientists and MLOps engineers and the models they build. The importance of all of this is that we're moving towards a world where every application needs to be intelligent. It's now the new differentiator for apps. If you are building an application and you're not thinking about how AI can be infused into it to augment the benefits to a customer, that's something your competitors probably are thinking about. We already know almost every C-level executive is asking these very same questions: we know we need to do something with AI, but we don't quite know what. A lot of them are asking, what is the right avenue to go down? Can we even do these things in a way that protects our data and provides the capabilities our customers are looking for? When you look at OpenShift AI and the work we've done with Intel, it allows customers, developers, and partners to do the entire data science process and accelerate their time to market. On the top part of this, you have the cycle of building out your models.
You gather the data, you grab a model from the open source, it could be Hugging Face, it could be somewhere else, and you want to do something with it. You want to fine-tune that model, do some RAG augmentation on it, and then deploy it into an environment that's sustainable. The data scientists and the MLOps engineers will cyclically go through and ensure that the model is retrained and retuned for the new data that comes in. Now, on the bottom side of this, you have the application developer, who has a very similar cycle: they develop the code, they go through QA, they deploy the code into production, and then they monitor it as well. Having that same consistent platform for all of this work accelerates the time to market. So now you have OpenShift and OpenShift AI, with all their capabilities, providing the right automation for you to do this at scale, do it quickly, and really break down the barriers between the application developers and the data scientists. It's one cohesive platform.

And so we have all the tools here to help you get your models into production faster, help you reduce costs with the work we've done with Intel to optimize the infrastructure you have, and make the data the differentiator for you. With a platform where you can add your own data to the open source models, fine-tune on your data, and do RAG on your data, the model you've done this with becomes the differentiator for you. It's different from just taking a model from upstream and immediately adding it to your application, where there's no differentiator. With these tools, this framework, and this platform, you have the ability to really create that differentiated experience.
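As a loose illustration of that "fine-tune and RAG on your data" loop, here is a minimal retrieval-augmented generation sketch: embed your documents, retrieve the closest ones for a question, and prepend them to the prompt. The embedding model name, the sample documents, and the final generate() call are all illustrative stand-ins, not anything specific to OpenShift AI.

```python
# Minimal RAG sketch: retrieve relevant context, then prompt a model with it.
# The embedding model, documents, and generate() are illustrative placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Our return policy allows refunds within 30 days.",
    "Support hours are 9am-5pm CET, Monday to Friday.",
    "Enterprise plans include a dedicated account manager.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity)."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec  # vectors are normalized, so dot product == cosine
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "When can I get a refund?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# response = served_model.generate(prompt)  # whichever model you actually serve
print(prompt)
```

The point is that the retrieval corpus is your enterprise data, which is exactly where the differentiation comes from.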
And so now I want to pass it back over to Sid. He's going to tell you about all the fantastic things that have been going on at Intel in this space and all the innovation they've been a part of.

Thank you, Sherard. Let's go to the next slide. I'd like to start by saying that Intel, as a company, is committed to a vibrant open ecosystem for developers, which includes compute, pervasive connectivity, cloud-to-edge infrastructure, artificial intelligence, and sensing. Now, when we talk about an open ecosystem, what we mean is that it includes open source software, open hardware, open standards, open specifications, open APIs, and open data models as well. Like I said, we promote openness, we promote choice, and we promote trust. Intel has made significant contributions to open source and the open source community over the decades: more than 20 years of investment across hundreds of independent projects, with over 19,000 software engineers. Intel has been the number one corporate contributor to the Linux kernel since 2007, is a top-10 contributor to Kubernetes, runs over 300 community-managed projects, participates in 700 member foundations and standards bodies, supports six architectures in oneAPI, and has 700 GitHub projects. Intel is also the leading Chrome OS contributor. So, significant contributions over more than 20 years. With that, I would like to go to the next slide.

This is the Intel vision: to bring AI everywhere. What we mean by this is to empower every user at every level by providing multiple technologies at all the layers they touch. Let's review this from the top. Going from large to small, the goal is to unlock the AI continuum with novel applications. The next layer, from training and fine-tuning all the way to inference and deployment, is to streamline the AI workflow with AI software made available in open source. From cloud to client, including the data center, the goal is to simplify the AI infrastructure with scalable systems and solutions. And then there are silicon offerings from AI-specific to general purpose, all the way from Gaudi and data center GPUs to Xeon, Arc, and Intel Core. That is the Intel vision of bringing AI everywhere. Let's go to the next slide.

When we double-click on that high-level vision, conforming to the same layers I just reviewed, this is the whole offering that Intel provides, and most of these libraries and frameworks are available in open source. Please don't get overwhelmed by it, because each is meant for a specific purpose; I will go over it at a high level without getting into each framework and each library. These frameworks span client, edge, cloud, and data center. Let's start from the bottom layer, which is the foundational software. The foundational software includes firmware, BIOS and simulation; the operating environment and kernel; virtualization; and orchestration for edge as well as cloud native. I'll point out a couple of important ones in each of these layers. In the operating environment and kernel, you obviously see our partner Red Hat prominently featured. In terms of virtualization, we support multiple different platforms, including Kubernetes and VMware. When it comes to languages, frameworks, tools, and libraries, we support a multitude of them, such as SYCL and oneAPI from Intel, as well as oneCCL. And then in the AI realm, we support a large number of libraries optimized by Intel, including NumPy, Python, PyTorch, TensorFlow, scikit-learn, BigDL, and so on. The topmost layer is solutions, services, and platforms. Intel runs several developer programs and resources; I encourage you to go to the Intel website and take a look at them. In terms of platform services and solutions, multiple platforms, services, and solutions are supported. In terms of AI, Intel has created offerings such as OpenVINO, as well as SigOpt along with cnvrg.io. Let's go to the next slide.

So there is always a debate between specialized AI models and large foundational models, and there is of course a cost associated with it, which I will cover on the next slide. The advantages of going with a large foundational model are incredible all-in-one, out-of-the-box versatility for text, programming, and natural language tasks such as plain-text summarization, and it provides surprisingly compelling outcomes. But there are certain challenges. The challenge is that they are very big, over 100 billion parameters, to provide that versatility, and therefore they're very expensive. The next slide goes into more detail on cost, but it's roughly $4 million to train and $3 million per month for inference. Large models also hallucinate, lack explainability, raise intellectual property issues, and are frozen in time, because they're trained on a snapshot of data taken at a particular point in time.
Whereas if you come to the domain-specific models, the advantages are that they're 10 to 100x smaller while maintaining or improving accuracy. They're economical on general-purpose compute; like Sherard mentioned, for many of them you can actually leverage CPUs and significantly reduce your cost instead of using GPUs. In terms of correctness, source attribution as well as explainability is available, and they can utilize private or enterprise data: you can fine-tune them and do RAG on them to make them more specific to your business, and they can be continuously updated with new information. There are several challenges as well, such as a reduced range of tasks, because they are not as big and not as versatile, and they require few-shot fine-tuning as well as indexing. So let's go to the next slide with this.

I talked about the cost. Although this whole area of LLMs has seen a significant revolution, and there are advantages, there are limitations as well, and one of the biggest limitations is actually the cost. If you look at the training cost, GPT-3 cost about $1.65 million; that's 3,640 petaflop/s-days, if trained on Google TPU v3. GPT-4's training cost was about $40 million; that's 450,000 petaflop/s-days, over 7,600 GPUs running for a year. Now if you come to the inferencing cost, ChatGPT's inference cost is about $40 million per month to process prompts for 100 million active users, and as you know, they reached that number pretty quickly. The Bing AI chatbot cost is about $4 billion to serve responses to all of the Bing users. So the cost is humongous for these large language models. And with that, let's go to the next slide, please.

As I've discussed before, the specialized models actually enable scale, and they're extremely useful in multiple verticals. I will cover some of the verticals and their applications here. For example, in the education vertical there could be use cases such as a teacher assistant, a student study buddy, or a parent chat portal. In the health vertical, it could be drug discovery, a doctor assistant, and a patient family chatbot. In the finance vertical, it could be algorithmic trading, which is becoming very popular, a customer portfolio assistant, and risk or credit assessment. In the retail vertical, it could be product promotion, a customer interface and sentiment tool, and an image shopping aid. In the government sector, it could be a government services assistant, document search and summarization, and live language translation. In the energy sector, it is energy consumption forecasting, which is extremely important, and then operational performance as well as an energy trading assistant. In the automotive sector, it could be autonomous car development, which has caught the imagination of the public, a multi-language in-car aid, and supply chain optimization; supply chain optimization is applicable to multiple industries, including retail. In manufacturing, it could be factory automation and predictive maintenance; predictive maintenance is a very important field, whether for shop-floor equipment, trucks, airplanes, or other vehicles. In agriculture, it could be precision agriculture. And in telco, it could be personalized customer service, network automation, and operational performance.
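Before handing off, it is worth sanity-checking the inference economics with a quick back-of-envelope calculation based on the figures quoted above. These are the talk's estimates, not official numbers, and the 10x reduction is only a rough, all-else-equal illustration of the specialized-model argument.

```python
# Back-of-envelope arithmetic using the inference figures quoted in the talk
# (estimates from the presentation, not official numbers).
monthly_inference_cost = 40_000_000   # ~$40M/month to serve ChatGPT prompts
monthly_active_users = 100_000_000    # ~100M active users

cost_per_user = monthly_inference_cost / monthly_active_users
print(f"~${cost_per_user:.2f} per active user per month")  # ~$0.40

# A domain-specific model 10-100x smaller would, very roughly and all else
# being equal, shrink that serving bill in proportion.
print(f"~${cost_per_user / 10:.2f} per user with a 10x smaller model")
```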
With this, let's go to the next slide, and I would like to hand it over to Marcus to discuss the hackathon and its results. Thank you, everybody, for your attention today. Have a good rest of your day.

Thank you so much, Sherard and Sid. That was a pretty impressive presentation, and I enjoyed every second of it. Thank you again for joining us today. I will let you go, literally kick you out of the stream; I always wanted to do that with general managers. So thank you again. Goodbye. And while I remove everybody else, I'll add my old friend Jeff back. Jeff and I basically do the same things: we do technical marketing, we talk to people, we look brilliant on screen, and we have the honor of talking not only about our technologies but also a little bit about the past and the future. The past, because we're talking about the CodeShift hackathon and all the amazing submissions we got; and the future, because we're going to talk about the winners of the CodeShift hackathon. So with that, I'll add your amazing presentation to the stage, Jeff. All right, looks like it's up.

Okay, thank you very much, Marcus. You know, I heard Sid and Sherard talk about the future of AI, and I think what you heard a lot is the word developer, in creating applications. I think it was Sherard who said that app intelligence is the new differentiator, and I thought that was really interesting. So I had the luxury and pleasure of planning and executing the CodeShift hackathon. The goal of this hackathon was to bring together developers from across the world to demonstrate their skills and their creativity, using the Red Hat cloud-native platform as well as Intel technologies. The hackathon explored how you build cloud-native apps using Red Hat's OpenShift platform, which includes things like Java runtime support with Quarkus and Spring Boot, event-driven architectures with Apache Kafka, integration with Camel, SSO, all these things. And most importantly, developers were given access to build, deploy, and scale intelligent applications using Intel OpenVINO and analytics tools. So it was really cool to hear Sherard and Sid talk about, I wouldn't say in theory, but about all the things that Red Hat and Intel provide together from a platform perspective to give to developers. Now, what did they do with that? How did they integrate those models into these applications for the cloud? It was really exciting, first off, to organize this and then to see what developers created.

So first I want to talk a little bit more about the hackathon itself, a couple of things by the numbers. Developers had from September 26th through November 3rd to create an application. You'll notice that's not a lot of time, and you'll see that what they did was quite impressive for the short period they had. We had almost 500 registrants across 57 countries. In the end, there were 30 submissions that met the criteria, which far exceeded our expectations. We had developers who were full-stack, back-end, front-end; we had AI/ML developers; we had people with skills around OpenShift and Kubernetes; we had Java developers, all across the board. We even had some people from university who learned all of this for the hackathon, which is flat-out astounding. So we are giving away $70,000 in prize money: $35,000 to the overall winner, $15,000 for second place.
Third place gets $10,000, and we had two runners-up who will get $5,000 apiece. Our criteria were pretty straightforward. We wanted the apps to be innovative: how innovative were they in terms of creating solutions and products? Someone actually created a developer tool, which was really neat. What did the UI/UX look like? Not every app has to look beautiful; some do, some don't, but UI/UX was a part of it. Potential impact: like I mentioned, impact to end users, or it could be impact to developers. And lastly, how well did they integrate the Red Hat and Intel products, like OpenShift and OpenVINO from Intel? The judges were from both Red Hat and Intel, and the submission criteria were pretty straightforward: it had to be an open source application, and we asked for a five-minute demo, kind of in their own words, plus supporting documentation, architectural diagrams and such.

That being said, we took those 30 submissions and down-selected to about 10 finalists. Really tough to choose; there were some great, great applications. Ultimately we took those 10 and got down to our five finalists, each of which, as you see here, wins prize money. Everything from AutoDocs AI, which used AI to transform source code into documentation; someone built an open marketplace for buyers and sellers; someone created Pertas, a performance-testing SaaS solution using virtual threads with Java 21; Vino Pharmacy, which utilized AI/ML to create a whole online pharmacy; and Open City Hub, which is smart city management. These are in no particular order. So these were our finalists.

Let me move a little quicker here. Our runners-up were Open City Hub and Pertas, the performance-testing solution; they both win $5,000. Open City Hub was developed by Kunan Singh. Once again, this is a smart city application with things like traffic management, pollution management, and health management, a really neat microservices-based cloud-native architecture. It was deployed on OpenShift and used Kafka, Redis, and RabbitMQ. And there's a link at the bottom here; you don't have this presentation, so please go to codeshift.devpost.com. That site (thank you so much, Marcus) has all the submissions, not just the winners, every single one of them. So it's a great site. Pertas, developed by Himanshu Gupta, is an opinionated way of doing performance testing using virtual threads. Really neat; like I said, we thought this was so innovative and useful, not necessarily from an end-consumer perspective, but from a developer perspective, for doing automated testing. Also deployed on OpenShift, using Kafka, Redis, and RabbitMQ as well.

Next up is the third-place finalist, who will win $10,000. I talked a little bit about this one: AutoDocs AI, developed by Atabayo Omolumo; hopefully I got that right, Atabayo. Once again, this used AI to transform code in GitHub into readable, human-friendly documentation. It integrates not only with GitHub code but also with large language model APIs to create the documentation. They used OpenShift Knative, so it's a serverless application on top of that, and obviously it integrated with GitHub. So really cool technology.
I almost want to take a step back here: you'll see a commonality among these finalists, in that they used all the tools in the Red Hat OpenShift application platform at their disposal, everything from the Java frameworks to serverless and all these great things. So that was really cool to see. The top two and three were quite amazing. Second place was OpenShift Hub, a really cool application, which wins $15,000. It was developed by Vishal Vats. I'm not going to do it justice; I really implore you to go to codeshift.devpost.com to look at it. Vishal created a marketplace that essentially connects buyers and sellers, but it's much more than that. He included services not only for creating an e-commerce site, but there's also a browser extension that, for example, uses AI to classify and visualize images you may see in a web browser, and he integrated a WhatsApp bot as well. So really cool stuff. I tried to put a scattering of screenshots here so you can see what this application does. One thing we liked about this application was how it also had analytics for the seller, showing the distribution of what users were buying and which categories they were looking at, and it even offered ideas around strategizing what to do and sell next. So excellent job, Vishal. It was almost a shame this one came in second, because the last one was so good, to be honest.

With that being said, the winner of the CodeShift hackathon is Vino Pharmacy. Thank you so much for that, ooh, and a little drumroll at the end. This application, like I said, they all were really good, was developed by Motha Kumar; I don't know Motha Kumar's full name yet, so I'm going to have to talk to him and dig more into this. Really, really cool application. I tried to do a GIF on this one so you can see what's going on here. It's an online pharmacy for doctors, patients, and administrators. It saves the doctors' information in a blockchain. It offers a lot of functionality: it basically brings the patient and the doctor together. They can do things like video chat and text chat; patients can upload images of their issues, or use the camera, and it uses AI to try to classify them. It allows doctors to create prescriptions and share them via a barcode. Lots of great stuff here; once again, I encourage you to please go look at it.

With these top two, and probably three, winners, we're going to do a series of blog posts to really dig into their experience using the platform and how they integrated these models. This one in particular used OpenVINO extensively; you just heard Sid talk about it. This is a great example of what developers can do: how do you integrate those models in? He used models around medicine names and wound recognition, which was really cool. Deployed on OpenShift, it utilized Quarkus, which is a Java framework, and it utilized blockchain. So we thought it was just a really, really great application.

So thank you to a lot of people. Thank you first to the participants. DevPost is a great hackathon hosting site; please look at it if you're interested in running hackathons. The Red Hat developer team offered us a free sandbox that anyone can get: the sandbox is a hosted OpenShift platform.
It's easy to get things like Kafka, open data science, and all the Intel products there, and easy to use Quarkus and those kinds of things. So please, if you're interested in playing around with this platform, don't try to install it locally; go to developers.redhat.com and sign up for a sandbox environment. It's a great way to get started. And lastly, thank you to all the judges who spent time going through these 30 great applications. So congratulations to the winners. You will see more hackathons from Red Hat, and hopefully Intel, in the future. And thank you very much.

Thank you, Jeff. And again, a big thank you to Sid and Sherard for spending the time and walking us through the combined offering. With that said, I think we have 14 minutes left before we continue on the various different, what is it called, stages? Yeah, I think so. We have three stages of content waiting for you at the top of the hour. So go grab a coffee, but be back on time. We'll be waiting for you; I'll be waiting for you, especially on stage two. And as you can see, I've switched over to this beautiful Quarkus hat. We're going to talk about LLMs and Quarkus, but there's much more to come. So thank you so much for now, and see you in like 14 minutes. Bye.