All right. So here we see the Ampere motherboard. Yes, this is the second-generation Ampere motherboard, which supports up to 128 cores, and we have 8 DIMMs per socket, 16 DIMMs in total, and an OCP 2.0 mezzanine slot. So we have all these PCIe slots for graphics cards, and then we have 10G LAN on board and the management port. Ampere, as you know, is low power, and it's being used in multiple data centers right now. All right. And you also have some, what are these? These are add-on cards from Broadcom. These are all OCP 3.0 types. We have 1G, 10G, 25G, and we have them in PCIe form factors too. These are HBA cards from Broadcom, and these are Broadcom RAID cards. All right. So these work with this? Yes, they work perfectly fine with this. And then you can put some extremely fast GPUs on there, right? Yes, we can put those GPU cards in. We are compatible with NVIDIA too. So if you want more details, I think we can go see the HPC side, which will be more interesting for you. Thank you. After you. So this is signed by Mr. Jensen from NVIDIA. This is the CPU-GPU, the Grace Hopper, which was recently announced. So this is a very interesting ARM cloud server accelerator to bring forward the AI of this GPU, right? Yes. So this is a CPU plus GPU plus memory. It has 96 GB of HBM3 memory built in. This is the future of fast computing. Nice. And somehow it has fast connectors here? It has NVLink connectors, yes. And then they have their 72-core ARM processor, single socket, with a GPU. And then they can have two? No, we have four. This is a 2U form-factor server. If you go down, if you come back and look in the mirror, you'll see four trays: one tray, two trays. In the mirror you can see all four trays. So those are all removable trays? Yes. And if you come here, you have another two NVIDIA ARM CPUs. So that one is a CPU-GPU, and this is a CPU-CPU. All right. So 72, 72. Yes. 144 ARM cores. ARM cores multiplied by four. All right.
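The core-count arithmetic in the exchange above ("72, 72 … 144 ARM cores … multiplied by four") can be sketched out; the per-chassis total is an extrapolation from the "multiplied by four" remark about the four trays, not a quoted spec:

```python
# Core-count math for the Grace systems described in the tour.
GRACE_CORES = 72            # ARM cores per Grace CPU
cpus_per_superchip = 2      # the CPU-CPU board pairs two Grace CPUs
trays = 4                   # four trays visible in the chassis

cores_per_superchip = GRACE_CORES * cpus_per_superchip
print(cores_per_superchip)            # 144 ARM cores per superchip
print(cores_per_superchip * trays)    # 576 cores across the four trays
```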
It's a lot of performance in there. It's a lot of performance there, yes. So we manage to do all the cooling with these fans here. It's like an airplane taking off? Yes. When you power it on, it really sounds like a Concorde. It's just going to take off. When we start it, it runs at full speed, but the RPM will slowly reduce according to how much heat is being generated. And they talk about 960 gigabytes of memory capacity per module. And somehow all this is connected and you have 900 gigabytes per second over NVLink. So somehow the whole thing gets lots of memory bandwidth? Yes. The whole thing gets a whole lot of memory bandwidth. And in addition to that, they have the NVIDIA BlueField-2 and BlueField-3 compatible PCIe LAN cards. Nice. And here we have another Jensen Huang signature. Yes. So he said this is the future of AI. So these, what am I looking at here? This is really big stuff. Yes. This is big stuff. This is the HGX H100. If you want the details, they're here. It supports eight H100 SXM5 GPUs. These are modular designs. These are linked with NVLink. This platform is on two Intel Sapphire Rapids sockets. And AMD, we also have AMD on this one. It should be coming sometime. It's ready. It's already up on the website. So 128 cores on the previous one, half a dozen GPUs here. Is it sentient, people ask? Yes. Is it going to take over, or do we humans still have control, right? It's a joke in my chat. It just depends, you know. There are good people. There are good people and there are bad people. We are all good people here, right? We are all born good people. We are all born good people but, you know, it's very... What do we see here? The inference specialist. These are inferencing servers. We use NVIDIA and Xilinx V70 cards to do inferencing. So this is one of the most dense servers. It's 2U. We can put in 16 single-slot cards, as you can see. So these cards get inserted into this one. It's cooled by 1, 2, 3, 4, 5, 6 fans. And this is the max you can get in a 2U server.
You can get max 16 slots. The highest core density with AMD EPYC. Yes. So these are the latest AMD Genoa CPUs. There are four nodes. So it's a 2U 4-node. Each node is dual socket, so in total you have 8 CPUs. Nice. All right, so... I'm guessing, as I understand from the NVIDIA keynote, there's a huge demand for this. It's growing so fast. People are all talking about AI, deep learning, inferencing, chat models. So yes, that's the future. So the computing and the GPU processing is so big that it can all be done in one box. If you come here, I'll show you more of our ARM servers. These are the latest ones. These are AmpereOne. Oh wow. It's already working. It's already ready? Yes, it's already ready. It was recently announced. And they're all AmpereOne around here? They're all AmpereOne, yes. The maximum cores, I think, will go up to 192 cores. So what is special about the AmpereOne? Low power. It's built on 5 nanometer. This one is built on 5 nanometer. High core count, low power. And it has the most performance per watt in the world. Yes, the most performance per watt in the world. We can say that. And it comes with 3.5-inch drives and 2.5-inch drives. So we can see it's all NVMe. We have this in 1U single socket. Nice. And we also have it in 1U dual socket. So to understand the performance increase with the AmpereOne, I need to ask Ampere, right? Yes, I think we might have some NDA issues. I think it's better if you go and ask Ampere regarding the performance. But what we can say is, they have some white papers, so I think that's good. And then around here, it's always good to have a lot of memory. So a lot of memory channels. You get 16. Is that what it is? Yes, if you have a single socket, it's 16. If you have dual socket, it can go up to 32 DIMMs. So this is a single socket, so it is 16. And if it's a dual socket, we have it in 32. All right. So a lot of RAM. Lots of RAM. Lots of PCIe slots. And these accommodate GPUs too? Or not this one? It can, it can accommodate GPUs too.
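The DIMM scaling just described (16 DIMMs on a single socket, 32 on dual socket) is simple per-socket multiplication; a quick sketch, where the 64 GB DIMM size is a hypothetical example, not a quoted spec:

```python
# DIMM-count rule of thumb from the tour: 16 DIMMs per socket on these
# Ampere boards, so slot count and capacity scale with socket count.
DIMMS_PER_SOCKET = 16

def max_memory_gb(sockets: int, dimm_gb: int = 64) -> int:
    # dimm_gb is an assumed module size for illustration only
    return sockets * DIMMS_PER_SOCKET * dimm_gb

print(DIMMS_PER_SOCKET * 1)   # 16 DIMM slots, single socket
print(DIMMS_PER_SOCKET * 2)   # 32 DIMM slots, dual socket
print(max_memory_gb(2))       # 2048 GB with hypothetical 64 GB DIMMs
```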
But we need to see which GPU. The PCIe slots are not specifically for GPUs, so we'd need to go and check which GPUs: maybe some single-slot GPUs, dual-slot GPUs, AMD, NVIDIA. Maybe some are more optimized for GPUs and others more optimized for web serving. We have other servers which have been optimized as GPU servers. Nice. That's awesome. So you show them here. It looks like it's ready. Is it ready? It is ready. It is ready. It has already been announced on our website, so people can just go and buy them. You can go to gigabyte.com for more information. All right. People buy. People send the money and they can get them. Yes. Are any of these using the Ampere? No. No? So we have some. Oh yeah. We have one. This is Ampere Altra Max. What's special about this one? This is liquid cooling. Direct-attached liquid cooling. So liquid cooling. And this is 2U 4-node. It's for high performance. And these are liquid-cooled servers. So you can come around the back. There is a tank here. That's where the liquid cooling is done. So it gets directly attached to these. We work with CoolIT. Wow. So a lot of liquid goes through there. Is this water or something different? It's a chemical. It's not water. It's a special coolant that's good at carrying the heat away. Right. And then you can put in all these SSDs. Yes. These are all SATA/SAS SSDs. It's so cool. Yeah. And these are OCP type. OCP type. So this is an Ampere server. These are 2U 2-node. This is OCP, the one which Facebook is using. Facebook. Yes. Open Compute Project. Companies, or is it mostly enterprise? We have all sorts of customers. We do have all sorts of customers. So big tech companies also just buy from Gigabyte? They don't make their own design? We have partners. We do. We have different kinds of business models. Sometimes we work with partners, sometimes we work directly with the customers. Maybe they have specific demands, if they have a big order. You help to customize exactly. Yes. Exactly what they want.
Yes. It depends upon the application. It depends upon the data center. It depends upon what they need. So we can do some customization for them. And here's... there's also another Ampere. So this is an Ampere HGX. HGX, with the Ampere. Yeah. This is a single socket. Oh. Sorry. Oh. Is it stable? Yeah. Is this thing full? Yeah. But otherwise it's good, right? Yes. Yeah. So right there. Yeah. Somehow there's a GPU connecting. So, a big GPU. The GPU goes into the next box. Those SXM modules go in here. So this is where the module sits. It's empty right now, but this is where the module goes in. All right. It just goes under right there. It goes under right there. And it will be the same for AmpereOne, I guess. Yes. In the near future. It should be ready. But we still have... Oh, yeah. Sorry. If you can please. I hope still. Yeah. Still one. Hopefully if you can connect on this. Yeah. All right. It's very nice. If you don't mind, right here. I saw something interesting there. What are people looking at here? There they are. What's happening in there? Is this direct liquid cooling? Immersion cooling. Immersion cooling, yes. So this is an OCP type, but we also have the standard rack type. So I think there's some liquid there. So I think it's better if we take that picture. So these are... there's a little bit of liquid there. And then... these are servers that are completely dipped in liquid. It's completely under liquid. Yes. The whole thing. Yes. Wow. Is that a very special design, or is it more and more common, or what? This is usually a smaller tank. It depends upon how big your power supply is, the power to the data center. So it depends upon that, how big you want to deploy. This is mostly for proof of concept, so it's a smaller size. And around here you're showing also how you make little PCs that go in buses and trains and stuff, or something else? It could be. It could also be using an Ampere, or maybe not. But that's more like what you showed in the beginning, right? Yes.
This is not my part, so I might not know much. I will have to hand you over to another salesperson or a colleague who can help you more with the other parts. Thank you so much. Thank you. Hello. I'm MrBeast. No, I'm not MrBeast actually. But if I were MrBeast and I were sending you a bunch of money, I would use Wise. Wise is a really smart way to send money around the world, with tiny little fees. Check out my video, a seven-minute video where I try to explain some more. It works in hundreds of countries. Every time you go to a different country, use your Wise card, or use your Android Pay or Apple Pay, to do all your payments with a tiny little conversion fee. If you have customers in different countries, they can send you money to local bank accounts in the US and Europe. All over the world, you can get local bank account details. The transfer fees are tiny. Don't use PayPal anymore. Don't use Western Union. Don't use your bank to send money, because, surprisingly, maybe you wouldn't know, but they take fees that are gigantic, that are pretty big. Just use Wise. It's smart.