Hi. I'm Henk van Bremen. I work for ADLink Technology. I'm the general manager for the department that does computer modules. And here we see the world's most powerful Arm desktop, no? Right. So we started working something like two or three years ago, together with Arm UK, on a project targeting automotive. Now, one of the interesting things about the project is that they were using, or they planned to use, a processor called the Ampere Altra. That's one of the fastest general-purpose Arm processors on the market. We were very happy with it, because at that time Ampere was actually concentrating more on cloud vendors; they didn't have much of an eye for embedded. But together with Arm, we were able to convince them to turn this into a kind of embedded product. So what we made, well, the result is a desktop for Neoverse software development. But this thing is built in a special way, because it's based on the COM-HPC form factor. COM-HPC is a computer-on-module standard used in embedded edge computing. Having that module, we can now take it out and put it in different kinds of applications, because the desktop, of course, is not our main business. We're in the business of building edge computers. So why is the Ampere Altra special? Because it is Arm SystemReady. There are three levels of SystemReady: the lowest one is for IoT and the highest one is actually server level. So if you have a SystemReady system, what do you have? Basically, you've got a system with standard UEFI firmware, like a BIOS. So on these systems we can install off-the-shelf ARM64 distributions. This one is even certified for Ubuntu, but we can just take Debian, we can take CentOS, whatever; it runs on the system because it has a very high level of abstraction. You could put it on a USB stick and click and install. Yes, absolutely. Just like x86.
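The "USB stick, click and install" workflow described here can be sketched in a few commands. This is a minimal sketch, not ADLink's documented procedure: the ISO filename and the `/dev/sdX` device name are placeholders, which is why the destructive step is left commented out.

```shell
# On a SystemReady (UEFI) Arm machine, a stock arm64 distro ISO boots the
# same way it would on x86. First confirm the architecture and firmware:
uname -m                                   # prints "aarch64" on an Altra system
ls /sys/firmware/efi 2>/dev/null || echo "not booted via UEFI"

# Then write the ISO to a USB stick and boot from it. Check `lsblk` first;
# /dev/sdX below is a placeholder and dd will overwrite whatever it points at.
# sudo dd if=ubuntu-22.04-live-server-arm64.iso of=/dev/sdX bs=4M status=progress
```

The point made in the interview is that no board-specific image is needed: SystemReady's UEFI abstraction lets the generic installer find the hardware on its own.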
So the importance is not only that you can install it easily, but that you can leverage the whole ecosystem that has been built up: the middleware, the applications that run on these distributions. At the Linux level, this offering is now almost on par with an x86 offering. So how many cores are in this one? This one is top of the line; we did that because I also want to show some power consumption. This is 128 cores, the highest at the moment. They start at 32, then 64, 80, 96 and 128 cores. So we see all the cores there? Yes, you can see all 128 cores. All right. What you can see here is how much power it draws. Of course, it's not doing a lot right now; it's around 70 watts at rest. And we've got s-tui here, so we can stress it. You see the cores get loaded, and then you can go back to the power meter and see how much it's actually drawing. So the whole argument for the Ampere solution is that it offers the best performance per watt in the world? Absolutely. It's around 30 to 40 percent lower. And especially for embedded applications: a lot of these applications are going to be mobile, and if you have mobility, then battery life is extremely important. Another thing: we used to deal with 50 or 60 watts in an enclosure, and that's already a very difficult level at which to build cooling solutions that are fanless. Once you go higher, I mean to what Intel and AMD are offering in a comparable class, you go to 250 watts. That is almost impossible to cool without fans. Here, you can even see... well, of course we're using liquid cooling on this thing, but the SoC temperature doesn't go over 50 degrees. If you have a good, efficient solution, these things are easy to cool, because it doesn't really draw more than 120 watts. I mean, what you saw is 130, 140, but that's of course the whole system. And what's the solution here for the cooling? Well, it's just an off-the-shelf gaming liquid cooler.
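The load test shown on screen can be reproduced with standard repo packages. Assuming the monitor in the video is s-tui (a terminal stress/monitoring tool) with stress-ng as the load generator, a sketch looks like this; the load-generating lines are commented so the snippet is safe to paste:

```shell
# How many cores do we have to load? (128 on the Altra Max shown here.)
nproc

# Saturate every core for 30 seconds, then watch power/temperature settle:
# stress-ng --cpu "$(nproc)" --timeout 30s

# Interactive per-core view of load, frequency and temperature:
# sudo s-tui
```

Package names and availability are assumptions about the setup in the video; both tools are in the Debian/Ubuntu archives, which matches the "off-the-shelf distribution" point above.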
There is a standard solution on the market, so people who buy development kits can test cooling themselves. This is just a gaming cooler. What's happening down there? That's where the power is. It's a power supply? Yeah, it's a power supply, and there's a lot of cabling underneath. So people can buy it like this from you? Yes, we sell it in two ways. We sell it as a system; systems are typically used by people who want to do software development. Because if you want to develop for NXP, you don't want to compile your stuff on an NXP processor; you want something very, very fast. So these are ideal systems for development, even for different kinds of Arm chips, as long as they're 64-bit. Then we have boxes, we'll get to that later, and those are purely board-level: you only get the boards. These are typically for people who plan to build embedded solutions. What these guys will do is start from the standard carrier board and then develop their own carrier boards. If you go into medical, they will put on DSPs, they will put on FPGAs, all the stuff you cannot buy as an off-the-shelf solution. So these people have to make their own carriers with their specific, how do I say, IP and their specific knowledge built in. And then they click the COM-HPC module on top of it. We take care of the complexity of memory, PCI Express, all the high-speed stuff; that's on the module. Typically, with projects of 1,000 or 2,000 pieces, you cannot make single-board computers; it's just way too expensive. So in this kind of volume, 1,000 to 5,000, these are ideal solutions, because it lowers the total cost of ownership for customers. I've been going to Linaro Connect since maybe 12 years ago. And all this time there were 300 to 400 engineers, some of the best Linux and Arm engineers in the world. Correct. And they all used Intel laptops for their work, all these years.
And they were all clamoring for a solution like this. So this is perfect for them, right? Among the people who bought the systems are the main OS distributions: Red Hat is working on it, SUSE is working on it. Because when you develop on Arm, you want to be on Arm. Yeah, of course. You don't want to leave that environment. Even remote compiling is available, but to have something on your desk, especially something noise-free... You don't hear anything from these systems. And it's doing that at 120, 130 watts. And it's at full load right now? Yes, it's at full load and 50 degrees on the SoC. It's quiet, not like an airplane; some server solutions are very, very noisy. Now, about what I said regarding SystemReady: in the back we've got a proof of concept. So we're running your video. But the most interesting thing is we're not running this on Linux. Let's see what we have here. We're running this on Windows 11. There's Windows here. So did you find it difficult to install? Actually, no, it was quite easy. We just used UUP dump, downloaded the standard ISO for Windows 11 on Arm, and installed it straight onto the system. Just like with a USB stick on x86. Because I saw a guy on YouTube who was having trouble getting this done. Right, right. That's because we have two versions of the Altra: the current generation and the newer generation, which is called the Altra Max. And we don't have an AMI BIOS; we have EDK2, which is open-source UEFI-level firmware. So when he received the system, Ampere was very proactive and shipped the highest-end chip, but we were not ready with the firmware for it at that moment. Now we are. So this is 32? How many cores in this one? This is 32 cores. All right. And this is considering that you're doing this without Microsoft having joined in to support it or anything, right?
Right now we're doing HLK testing, to see how compatible this thing really is. I mean, we already know there are some drawbacks, right? So Windows 11 on Arm, how is it being used today? It's being used on Qualcomm processors for notebooks. So there are hardly any mainstream or desktop-level drivers available for this kind of system, because there's no market for that right now. So even USB, Ethernet and everything, you have to figure out? USB is working. But for example, the onboard Ethernet we're using is just a normal Ethernet controller, and there's no driver for it, because it's not used in the current Windows 11 on Arm ecosystem, which is purely notebook-based. So if we want Ethernet, right now we just put a USB dongle on the back. All right. The most important thing is that these things are also for AI, so they get paired with NVIDIA cards. We can put two of these big cards in the system. There are two x16 slots here, with three slots of spacing on each, so you can fit two triple-slot cards. How well does it work with the NVIDIA cards? It works. On Linux it works very well: CUDA is fully supported, and Ampere is doing a lot of work there. I think they work closely with NVIDIA to qualify these things, so YOLO, CUDA, everything is working. Now we're looking at Windows, of course, because we would also like that to work, and that's where we'd probably need some help from NVIDIA to get the drivers working. Because for the last two or three years, Apple has been doing really cool laptops, MacBooks. Yes. And there's always this rumor that maybe some day they will do a Mac Pro, a big desktop. And they sometimes sell those for $5,000, right? There's a fellow YouTuber, Jeff Geerling. He said this system should be built by Apple, not by us.
Because desktop systems based on Arm are going to be a trend, that's for sure. I mean, not just for the average person, but... For example, if you're a developer and you need to compile, this is going to be way faster per watt than any other solution. And the fastest Arm in the world. And then I wonder how many other high-demand applications could be accelerated on this right now. Well, there's AI inferencing. And as I said, we did a project with Arm UK; this system was actually built for Arm UK. They have an initiative called SOAFEE. SOAFEE is a containerized cloud-native infrastructure for automotive. That means they put simple tasks in each container, specifically for automotive. Of course, Arm is very interested in that, to get people onto the Neoverse platform. Why are these things so good in the cloud? Because they have a lot of cores, and their performance is very predictable. If you go to x86, the core count is not that high, but they use a lot of tricks to get the performance up to par, and that makes such a system very unpredictable. These systems are very deterministic per core: you always know what performance you get per core. On an x86 system, that is not so clear. For containerized computing, and computing for automotive, you need more predictability, so there they fit very, very well. So they're developing maybe self-driving software, or some kind of smart automotive solutions for the future. In the end, the car will consist of two systems. One is the automotive processor that is actually handling the machine; there we're talking about lower-level Arm parts. The other is the awareness: the camera input, the processing, and acting on that. That's the big brain. The small brain is directly interfacing with the engine, controlling the engine.
And the big brain is what you see and what you hear, how you process it, and how you accelerate. But also, as I understand it, Arm is really fascinating in the way they build SoCs with accelerators for many different things. And I guess the day Apple does a Mac Pro, they will include video encoding, GPU, everything on the SoC. This one, no: it purely delivers cores and memory. If you want acceleration, you slot it in here: a GPU, or maybe some kind of video-encoding accelerator card. Yes. And that's the way it could be done. Yes, it's already being done like that, because that is what the SOAFEE Arm UK project is about: they add these cards for inferencing and things like that. And who knows what happens in the future with AmpereOne and all this stuff. Maybe you're going to see... because it's going crazy with the cloud, right? So this is very compatible with the whole cloud market. Another interesting thing is that there's going to be a very disruptive development in the embedded industry, in all industries: in two years' time, you won't be able to ship any products anymore without a secure software stack. That is going to be very, very disruptive in the market. People who have built up their software chain, in medical or wherever, have to rethink how they're going to implement software, because it has to be secure. We see that most of the security is going to happen in containers. Containers also let you easily transfer applications, even between x86 and Arm, because they're fairly independent of the hardware. So this system already has that stack. We work together with Foundries.io. Foundries.io comes from Linaro, right? So it comes directly from, how do I say, Arm UK, Linaro, Foundries.io. Foundries.io has something they call a factory, where you can hook in your devices.
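The container portability mentioned above (the same application moving between x86 and Arm) rests on the fact that container tooling tags images with a platform string derived from the kernel architecture. A sketch of that mapping, with the Docker commands shown only as comments since image names and a registry would be placeholders:

```shell
# Map the kernel architecture to the platform string container tools use.
arch="$(uname -m)"
case "$arch" in
  x86_64)        platform="linux/amd64" ;;
  aarch64|arm64) platform="linux/arm64" ;;
  *)             platform="linux/$arch" ;;
esac
echo "$platform"

# A multi-arch image built once can then run natively on either side, e.g.:
# docker buildx build --platform linux/amd64,linux/arm64 -t registry.example/app .
# docker run --rm --platform "$platform" registry.example/app
```

This is why the interview treats containers as the bridge: the image tag hides the architecture, and each host pulls the variant matching its own platform string.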
They will take care of updating. We are also in the middle, taking care of updated software, and you get OTA delivery of your software. Real security means updating; it's just like an Android phone. And this is going to be very disruptive, and that's where Arm can dive in, I think. That's the moment. And Foundries.io, in theory, can provide security updates forever: extremely fast, extremely secure, safe to update, so you never have issues with security. The most important thing is that it's an agnostic system. It's not only for Arm; it's also for x86. It's not tied to a vendor. We cannot provide security updates by ourselves; our scale is not that big. You need something like an operating system that hosts applications and that is agnostic. Windows is that, more or less; Linux is that. So you need an organization that's doing it, and Foundries.io is at the moment one of the only ones with a solution where everybody can join in. So as I understand it, this Computex was so cool, with the opening keynote by Jensen Huang, and his company is worth a trillion dollars now. And it sounds like there's so much demand for AI acceleration; the industry is going completely bananas. We're here in Taiwan, and probably some of your neighbors are making a bunch of this stuff, right? Micro is in the back here. So this is fully compatible with this whole rapidly growing market of cloud hardware, right? I don't even like to use the word; I just call it a box with a GPU card. I mean, we've been shipping boxes with GPU cards for ages. Now the software level is going to change, because they're going to be used for a different purpose. But basically, on a hardware level, it's just a box with a GPU card. It's just a gaming system. But the industry, as I understand it, the demand...
I don't know if we can draw it on the wall here, but demand is really going up, to the point where I don't know if they can deliver against it. There's so much demand, right? And this is fully compatible with all these cloud server systems, right? So when people develop for any of the cloud servers, it should work here? It can work here. But honestly speaking, Amazon already has an infrastructure for Ampere where, without buying one of these, you can already do some work on Ampere cores in the cloud. But I feel nothing beats having one on the table like this. The more professional you are, the closer you want to be to your hardware. Of course. You want to have one, right? And who knows, because the way you work is with the industrial side, but also... Well, mostly industrial. These systems are kind of promotional items. First of all, we want to bring awareness to the market that Arm is available right now: you can have it on the desktop, and you can build it into your embedded systems. It's not something in the future anymore. It is there, and it's just as convenient as we're used to from x86, and that's why I said this is where the SystemReady stuff comes in, because most distributions are supported off the shelf. So this is for awareness, and then we've got the other boxes, which are pure board level; those are for the real developers. How do you see demand in the market for Arm servers? I'll tell you a typical application. This is automotive. Don't think that if BMW decides to use Arm in their cars, they're going to come to ADLink; come on, the scale is totally different. But at the moment there are already a lot of companies making test equipment for AI, and that means they go out on the road with a car, they have a big box in the back, and they record the roads. They take the data back to the laboratory, and then they start doing simulations and so on.
Okay, for this test equipment, the scale is maybe 2,000, 3,000, 4,000 pieces per year. This is where Arm comes in, and where these systems come in, because these guys are going to build something with a massive amount of SSDs or whatever. We already see that; we're already talking with customers like that about using an Ampere Altra in the embedded market. And when people develop, let's say, self-driving software for a Tesla, which uses an Arm SoC... Yeah, they could use it. As long as you're on Arm, everything is compatible, right? It's interoperable. Your software is better off being developed here. As long as it's AArch64, you're good with these things. All right, cool. Do you think it would be in your interest to bring this to the consumer market, to make it even bigger? We have to be reasonable. I mean, we're a company that has a model, and our model is low volume, high mix. That is a business model. If you really want to go into high volume, you basically have to change your organization. I would say somebody is going to step in within a year or so and build a system that is way cheaper than we build it. By the way, we build it with two PCBs because we have a different purpose: we want to be able to take the module out and put it into embedded applications, where customers build specific carrier boards for specific applications. That is the purpose of this one, and of course that makes the system a little more expensive. So it's very easy to undercut it on price. But the thing is, somebody had to stick their neck out for this development, and we did that, with the help of Arm UK. Still, we said, yeah, this is the right direction. How long did it take you to get here? Probably two years. Two years? Yes. And you have an amazing team here. It's the weekend, so they're not here right now. Yeah, we just moved to the new building. I mean, you saw it, right? We've got one of our own.
With how many floors? Ten floors. Ten floors, and all of it is ADLink? It's all ADLink. The computer-module people, how to say it, we've got the software guys in the back here. All right. And upstairs we've got data acquisition, machine automation, other departments. R&D is spread all over; they mostly sit together with the business units in their departments. So if people buy 10 or 100 of these, they get a discount? Yeah, sure, sure. We're selling them. You saw it, right? We have a web shop called I-Pi, on ipi.wiki. It's not even branded ADLink, but it's there to sell development kits. We do it for smaller things, and we do it for this one. You pay by credit card and we ship it in two weeks or so. Easy. No need to talk to sales, no need to have stories about why you want to buy things. You just use your credit card, just like on AliExpress. Hello, I'm MrBeast. No, I'm not MrBeast actually. But if I were MrBeast and I were sending you a bunch of money, I would use Wise. Wise is a really smart way to send money around the world, with tiny little fees. Check out my seven-minute video where I try to explain some more. It works in hundreds of countries. Every time you go to a different country, use your Wise card, or Google Pay or Apple Pay, to make all your payments with a tiny little conversion fee. If you have customers in different countries, they can send you money to local bank accounts in the US and Europe; all over the world you can get local bank account details, and transfers carry tiny little fees. Don't use PayPal anymore, don't use Western Union, don't use your bank to send money, because it's surprising, but they take fees that are pretty gigantic. Just use Wise. It's smart.