Hi everyone. How are you all doing? Good. All right, I see people are still filing in, so while they're doing that, let's do something. Do me a favor and let's all do the hugging face, right? We're not just the name, we're an emoji. Are you ready? Three, two, one. Beautiful. Thank you.

So, I'm super excited to tell you why and how you can build your own AI using open source and Hugging Face. Jeff Boudier, I lead product. I only have 10 minutes, so it's going to go super fast. If you have questions, you can email me at jeff@hf.co. All right, let's get started.

First, I want to tell you a little bit about Hugging Face, things that you may not already know. Then I want to give you some good reasons why you should use open source to build your own AI, and then tell you how.

So, let's start with six things you may not already know about Hugging Face. The first one is our mission: to democratize good machine learning. And by good machine learning, we mean machine learning that is built with open source, built from community-first principles, and built from ethics-first principles.

And speaking of ethics first, let's start with that. Number two: our approach to ethical AI is to provide the community with tools they can use in their day-to-day to practice ethical AI. A great resource for you is hf.co/ethics, where our Society and Ethics team gathers the best resources and tools available, so you can detect biases in your own datasets or draft ethical charters in your work.

Number three: Hugging Face is one of the most prolific and popular open source developers today — number 21 in the world by GitHub star ranking. You probably know the Transformers library, which is how people access transformer models.
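To make that concrete, here is roughly what accessing a model through the Transformers library looks like — a minimal sketch, assuming `transformers` is installed; the checkpoint name is just one common example, and the model downloads on first use:

```python
def summarize(text: str) -> str:
    """Sketch of a typical Transformers `pipeline` call.

    Assumes `pip install transformers`; the checkpoint name below is an
    illustrative choice, and the weights download on the first call.
    """
    from transformers import pipeline  # lazy import: heavy dependency

    summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
    # `pipeline` returns a list of dicts, one per input
    return summarizer(text, max_length=60)[0]["summary_text"]
```

The same one-liner pattern — `pipeline("<task>")` — covers the other tasks listed on hf.co/tasks, like object detection or speech-to-text.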
But there is also a vast ecosystem of purpose-built libraries: accelerate models with Optimum, train models very efficiently with PEFT, distribute your compute workloads with Accelerate. So there are lots of tools out there for you to build your own AI with open source.

Number four: we host over a million repositories — models, datasets, and AI applications — and over 400,000 of those are openly accessible models. And it's not just natural language processing to summarize or classify text, et cetera. Today these cover every modality and every discipline of machine learning. And if you go to hf.co/tasks, you get direct access to answers like: what models do I need to detect objects in images, or what models do I need to transcribe speech into text? All of this is under hf.co/tasks.

Number six, and the last one: all of these models generate an enormous amount of activity and usage on the Hugging Face platform — over 10 million downloads of transformer models every single day, again covering every modality of machine learning, from NLP to vision, audio, and more.

All right. So those were six things about Hugging Face you may not already know. Now I want to give you five reasons why you should use open source when building your AI applications and features.

The first reason is that the open source community is amazing, and nothing is catching up to its progress. This is just a few weeks of the top open large language models on some of the most used industry benchmarks, and in just those few weeks there has been amazing progress on all of them — fine-tunes from Llama 2 to Mistral, new optimization techniques like DPO. That's a huge impact in very little time. So when you build with open source, you build technology that's future-proof, and you're not locked into a particular vendor.
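To give a feel for why a library like PEFT makes training so efficient, here is a back-of-the-envelope sketch of LoRA, one of the techniques it implements: instead of updating a full d×d weight matrix, you train two small low-rank matrices. The numbers below are illustrative, not from the talk:

```python
# Back-of-the-envelope: trainable parameters for a full update of a
# d x d weight matrix vs. a rank-r LoRA update. Illustrative numbers only.
def full_update_params(d: int) -> int:
    # Updating W directly touches every entry of the d x d matrix.
    return d * d

def lora_params(d: int, r: int) -> int:
    # LoRA learns delta_W = B @ A, with A of shape (r, d) and B of shape (d, r).
    return 2 * d * r

d, r = 4096, 8                   # a typical hidden size, a small rank
full = full_update_params(d)     # 16_777_216 trainable parameters
lora = lora_params(d, r)         # 65_536 trainable parameters
print(f"LoRA trains {lora / full:.2%} of the full matrix")  # prints "0.39%"
```

Training a fraction of a percent of the weights is what lets fine-tuning run on modest hardware, which is exactly the efficiency race the open source community keeps winning.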
The second reason is that building with open source allows you to control your costs, because one domain where nothing beats open source is research on efficiency — how to run a model the most efficient way possible on a laptop, on a server, on a CPU, on a GPU, et cetera. These are just four posts from the Hugging Face blog, all about optimization techniques to run Stable Diffusion cheaper — cheaper on GPU, cheaper on CPU, et cetera.

Number three: you should build your AI with open source because then you own the model and you can host it yourself. That's the only way to have truly secure applications, where you're not sending your customers' information or your source code over the internet to a third party. Of course, we offer our own option for model hosting on Hugging Face, but we also build partnerships and collaborations with all the main cloud providers, so you can easily take models from Hugging Face and deploy them within their secure environments. Today we have collaborations with AWS with SageMaker, Azure, Cloudflare, DGX Cloud with NVIDIA, IBM watsonx, and even on premise with your Dell systems.

All right, number four: you should build your AI with open source because these are models you can actually control — and I mean actually version control. Every one of those million-plus repositories hosted on Hugging Face is, under the hood, a Git repository, meaning every change is tracked. You know who made what change when, and there's a unique commit ID for every change that is made. It's publicly visible for public models, and visible to your team for private models.

And number five: you should build with open source because these are models you can trust but also verify. There is a whole gradient of what openness means in terms of model release.
And on the most open extreme of that gradient, BigCode is a great example: it's not just the model that's openly available, but also the dataset that was used to train it, the training code, and the evaluation, so that everything is reproducible. And when somebody asks you, "Was this part of the training dataset? Is this why I'm getting this kind of result?", you can actually inspect and respond. You can actually change how things are.

So those are the five reasons I wanted to give you for building your AI applications and features with open source. It's future-proof, and you're not locked into a vendor. You can control your costs and reduce them. You can host your own models in your own infrastructure for security and compliance. You can control the versions of your models — they're not going to change under your feet. And, for the most open models out there, you can actually trust and verify.

Great. So now, how do you do that? Here are a few things you can do to build your own AI with open source and the tools provided by Hugging Face.

First, let's talk about large language models, because it's the topic of the day. Ever since Llama was released by Meta, there's been a Cambrian explosion of open models. Today, over 3,000 are hosted and tracked on the Hugging Face Hub, and here the blue ones are those whose licenses permit commercial use — so you should always consider the license. But how do you sort through all those models? Well, we have a great tool for that: the Open LLM Leaderboard. You can find it on Hugging Face, it's free, and we constantly evaluate these thousands of models on all the industry benchmarks I talked about earlier. You can also easily filter by the kind of model — is it a chat model? — by the kind of license, and by size: do you want a 13-billion-parameter model, a 7-billion, et cetera. A great tool for you to get started. Now you've got a model.
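As an aside, the kind of filtering the leaderboard lets you do boils down to a few predicates over model metadata. A toy sketch — the entries below are made-up stand-ins, not real leaderboard data:

```python
# Toy sketch of leaderboard-style filtering. The model entries are
# illustrative stand-ins, not real leaderboard data.
models = [
    {"name": "model-a", "license": "apache-2.0",   "params_b": 7,  "kind": "chat"},
    {"name": "model-b", "license": "research-only", "params_b": 13, "kind": "base"},
    {"name": "model-c", "license": "apache-2.0",   "params_b": 13, "kind": "chat"},
]

def pick(models, license_tag=None, max_params_b=None, kind=None):
    """Keep only models matching every filter that was provided."""
    out = models
    if license_tag is not None:
        out = [m for m in out if m["license"] == license_tag]
    if max_params_b is not None:
        out = [m for m in out if m["params_b"] <= max_params_b]
    if kind is not None:
        out = [m for m in out if m["kind"] == kind]
    return [m["name"] for m in out]

print(pick(models, license_tag="apache-2.0", kind="chat"))  # ['model-a', 'model-c']
```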
What do you do? Well, you need to deploy it, and we have a great tool for that: Text Generation Inference, our production solution to give you maximum throughput for your models in production. It works out of the box with Mistral, Llama 2, Falcon, and more, and it's optimized for various hardware — including, of course, NVIDIA GPUs and AMD GPUs, and we have solutions for AWS Inferentia 2 as well as Habana Gaudi 2.

All right, so now you know how to deploy a model. But what if you don't want to deal with hardware at all? Well, we have a solution for that too: Inference Endpoints. Today, on Hugging Face Inference Endpoints, you already have, two clicks away, the latest open release by Mistral, called Mixtral — an 8x7-billion mixture-of-experts model — available for 13 bucks an hour. And if you don't want to use one of those canonical models, you can deploy any model, including the ones you have trained and are hosting on Hugging Face, by just selecting the model, the cloud, the region, and the instance.

Great. So now you have an endpoint, you have an API. What else can you do with it? Well, you can offer your users and customers an interface to interact with the model. And guess what, we've got that too. HuggingChat, on hf.co/chat, is a free app for you to interact with the latest open models. I mentioned Mixtral is in there; there's Falcon 180B, there's Code Llama. You can chat with all those models for free using HuggingChat. But what people may not know is that the HuggingChat application is itself an open source project and repository that you can access, fork, and replicate — it's called chat-ui on our GitHub.

All right. So now you have an application, but you want it to be smarter — you want it to understand your actual context and knowledge. And that's where retrieval-augmented generation comes into play. So you have an LLM.
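Before the walkthrough continues, here is the retrieval-augmented generation idea in a toy sketch — the word-overlap scoring below stands in for a real embeddings model, and the documents are made up for illustration:

```python
# Toy RAG sketch: retrieve the most relevant document for a question,
# then ground the LLM prompt with it. A real system would score with an
# embeddings model, not the word-overlap heuristic used here.
def tokens(text: str) -> set[str]:
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(question: str, docs: list[str]) -> str:
    # Pick the document sharing the most words with the question.
    return max(docs, key=lambda d: len(tokens(question) & tokens(d)))

def build_prompt(question: str, context: str) -> str:
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "Support hours are 9am to 5pm on weekdays.",
]
q = "What are your support hours?"
prompt = build_prompt(q, retrieve(q, docs))  # this prompt goes to the deployed LLM
```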
The LLM is deployed with Text Generation Inference on a Hugging Face Inference Endpoint. And now you use the little cousin of TGI, Text Embeddings Inference, to take one of those great, efficient embeddings models and deploy it on your own endpoint, so that when your user asks a question, you go find the right information in your internal database and give that as context to ground the response from the LLM. That's retrieval-augmented generation in a nutshell, and all the pieces of the puzzle are there.

Developers. We want to make AI more accessible, and part of that is putting AI in the hands of developers. We're doing that in multiple ways. On the front end, there is the huggingface.js library, so that you can call any model hosted on Hugging Face from the browser. And then there's Transformers.js, where you can actually run the model in the browser — which is really, really cool — with lots of new models available. And talking about developers, let's talk about backend development: we built a new framework called Candle, all Rust-based, for developers to build machine learning natively in Rust.

All right. So now you have a great app, it's deployed, but you want to show it to some people. That's where Hugging Face Spaces comes into play — a very easy way to build user-facing demos of your machine learning models. This is a very popular Space called IllusionDiffusion, for guided image generation. And every week, we feature the most popular Spaces on Hugging Face; there are tens of thousands of them.

So that was my quick how-to for building AI with open source and Hugging Face. It was quite fast, but I can go even faster. What actually is Hugging Face? It's a leading open platform for AI builders, democratizing good machine learning. But what does that actually mean? There are three pillars: models, datasets, and spaces. Models are the thing that actually does the AI.
We host hundreds of thousands of models, including ones from major companies like Google, Microsoft, and Meta, from research institutions like Stanford, and from open source communities like EleutherAI. Then there are datasets: openly accessible data used to train the models. And finally Spaces, which let you easily demo and showcase models — a recent example being IllusionDiffusion. All of these pillars are combined with open source libraries like Transformers, Diffusers, and Accelerate, making it easy to build, share, and use the latest AI models. All of this is free, but if you need more compute, or help deploying your own models, we have that too. So if you want to get involved, go to hf.co. See, I could have gone even faster.

Thank you so much. That was Hugging Face. If all this sounds daunting to you, we have our great Expert Support service to guide you along the way. Thank you, Gabriele. All yours. Thank you, Jeff.