Welcome to Cloud Native Live. I'm so glad you're here. Here we dive into the code behind Cloud Native. I'm your host today, Whitney Lee, and I'm a CNCF Ambassador and a Developer Advocate at VMware. Every week, we bring new presenters to showcase how to work with Cloud Native technologies. We'll build things, we'll break things, and we'll answer your questions. Today, we're going to save all the questions to the end and have a discussion after the presentation. Here with us now, we have Jason Andrews. He's here to teach us about how to get more with multi-arch clouds. This is an official live stream of the CNCF, and as such, it's subject to the CNCF Code of Conduct. Please don't add anything to the chat that would be in violation of that Code of Conduct. Basically, just be nice to each other, be nice to the presenter, be nice to me, you got this, we can do it. Friends who are joining us live, please say hello in the chat and say where you're from. I love how we're a global community. I just think it's the coolest thing ever. If you have any questions, please post them to the chat and we'll get to them at the end. Then, with that, I'll hand it over to Jason Andrews to kick off today's presentation. Jason, tell us about yourself, please. Yeah, I'm Jason Andrews. I work at ARM in a developer systems organization, and we do lots of activities to help software developers work on the ARM architecture. Awesome, that's super cool. Thank you so much for sharing your time and your expertise with us today. Yeah, no problem, glad to be here. Thanks everyone for joining. Cool, are you ready for me to share your screen? Yes, let's go ahead and we can get started. Excellent. Okay.
All right, everybody, I'm just gonna start off today with a couple quick slides, just to give you a little bit of an orientation of what things are and what we're gonna talk about, and then we're gonna spend the majority of our time with some hands-on activities around multi-architecture infrastructure, primarily with Kubernetes today. That's what we're gonna talk about. So as I mentioned, I'm coming from ARM in our developer ecosystem group, and one of the reasons we're here to talk to you today about multi-architecture is that the ARM architecture is ramping up and gaining a lot of traction in different areas of cloud computing. So you can see different cloud service providers have been adding ARM instances and virtual machines and a lot of managed services, which all run on ARM. So as a developer, you might wanna take advantage of those. So yeah, I guess the first point is ARM is growing. The architecture is spreading in many different kinds of places. And then you might ask yourself, well, why? I mean, what's the benefit for me? So as a developer, there's probably three main things that you wanna think about here. One is performance. The second one is gonna be reduced cost. And the third one is sustainability. So lots of projects are looking into the ARM architecture as a way to really benefit from better price performance as well as better sustainability in the resources that they're using. So that's really, I would say, the main reason why people look into the ARM architecture. Price performance is a big one. It's always better to run fast and cost less. So that might be a reason why you look into the architecture. So the good news about the ARM architecture is lots of things already work. So lots of software already works. In fact, I would say most software already works.
But as a developer, if you've been working in a single architecture for a long time, this might be a little bit new to you: how to migrate and how to think about having multiple architectures in your environment. So that's what we're gonna talk about today. So I'd really encourage you, if you haven't spent a lot of time with the ARM architecture, or any time, the best way to get started is to just get a machine. There's lots of ways you can get a virtual machine, whether that be on your laptop or in the cloud, there's different places you can do that. And then I would just ask you to just kind of experiment with your projects, your dependencies, what kind of things work. Maybe you find something that doesn't and just kind of dig in. I mean, for most people, that's the best way to get started. Just set up a VM, dig in there and figure out what's the same and what's different about running on the ARM architecture. In terms of some other things you need to think about: dependencies. You've got to take a quick scan, look over your libraries, frameworks, runtimes, different things that you use in your projects and see if they're supported on ARM. In most cases, most software is already working. I mean, all the Linux distributions, a lot of the tools you use probably are already there. It's not a big deal. But typically what I encourage people to do is just kind of make a little bit of a checklist or investigate some of your issues. You might see a container that doesn't work on ARM. Okay, I'll just make a note of that and keep that in my pocket as I continue forward in my journey. The other thing to point out is newer is better. That's not always the case in software development, but because the ARM architecture is newer in server and cloud, you probably want to use the newest software possible; that will typically help you. A couple other tips before we get into the hands-on part.
If you're running things that are interpreted languages, it's usually very straightforward. Everything is typically the same. If you're doing things like Java, you might find that the performance improves significantly on newer versions. So I'd encourage you to try newer versions if possible. Another little bit of a pitfall is sometimes you'll find shared objects in jar files that are architecture specific. They're compiled objects, and you've got to work with those because they won't run on the wrong architecture. Containers, tons of containers are already multi-architecture. Not always, but a lot of them are. And then on the right side, the compiled applications. You're going to look for things that are not real portable in C and C++, could be intrinsics, things like that. Other languages like Go work great, and again newer versions are better. So starting from 1.18 and newer, that's going to give great performance. If you have older versions than that, they might work, but the performance isn't going to be as good. So yeah, take a look at what you have, work through your dependencies, keep a checklist of that. And yeah, that's a good way to get started on ARM. So in terms of some scenarios, I mean when developers want to try out ARM and check into the price performance benefits, typically they'll run into a variety of different things. I just made a little table here with some kind of things that you might come up against. I mean, they're fairly specific, but it gives you a flavor for what you might face. So if you're doing something like a Node.js application, you might find, hey, it just works. I moved it over. I can't tell anything different about the architecture and its underlying instructions, works great. If you're in C++ applications, you might come across intrinsics, which are architecture specific. Typically these are SIMD-type instructions and there are tools and libraries that help you to migrate those over.
So sse2neon and SIMD Everywhere are examples of tools you can use that are very minimal in terms of your source code changes and allow you to run those existing intrinsics in your code. You might find some dependencies that don't work on the ARM architecture. So you might find something in Python or a Python package that's not available. So again, either you have to work around it, or the best way is to ask the maintainers if they can add ARM support. I mentioned some performance things. So yeah, newer tools are better. Sometimes the crypto extensions of the ARM architecture haven't been implemented in the best-performing way on older versions of tools. So take a look at that. Containers: again, if you find containers that don't support multi-architecture, ask the maintainers if they can do that. A couple other architecture-specific things. So large system extensions is a feature of the ARM architecture that will normally just work and everything is fine. But if you had, say, an older C++ compiler, you might have some issues with that. Anyway, so you kind of get the flavor that most stuff is there. You might hit some hiccups here and there but definitely reach out if you need help. And I think you'll have a pretty smooth time. So what we're gonna do today is get into the hands-on and we're gonna have a little bit of multi-architecture discussion running in Kubernetes. So we have an application that we're going to build. We're gonna build a container for that. It's a Go application. We're gonna put it into a container repository. And then we'll work with AWS EKS and we'll have a mixed node cluster where we have a couple of nodes which are AMD64. And we'll have a couple of nodes which are ARM64. And I'll give you a little flavor of how containers map onto those nodes and can start up different kinds of deployments and see how that works. So that's what we'll go over next. Okay, let me jump out of the PowerPoint here and we will get started.
So essentially what I'm gonna do is first, let's just go over the application itself a little bit. So it's super simple. I mean, it's just a Go application that is gonna respond. And I probably wouldn't even use the browser. I'll just do a curl and hit the application and then you'll see it prints a hello message and it'll print the node that it's on and the pod and the platform. So really this is the key thing. This is the architecture value. So if this application and its container lands on an AMD64 node, that's what'll be printed. And if it's running on an ARM64 node then you'll see that is printed as well. So that's the basic application we're gonna go through. Now, if you're not familiar with multi-architecture containers, this is kind of a prerequisite, I would say, to get into Kubernetes. So you probably want to learn that a little bit. Now, what I've done here with the simple Go application is I have a single Dockerfile which can actually build a few different ways. And I would encourage you to look into multi-architecture containers. But essentially what we're doing here is we're passing arguments for the target architecture and then also here for the runtime container, which is gonna be Alpine in this case. So it's copying in the Go source and then it's using this variable to compile the application for a specific architecture. Now, whatever machine you're on when you do this, if you don't specify the GOARCH variable it's gonna be whatever the native architecture is of the machine. Now, things are getting a little more complicated as people start picking up Apple Silicon because you will be picking up ARM64 as that native thing. So it depends which way you go, and if you just say FROM golang, that's gonna be the native architecture of your machine. Now, the runtime container, which is the second one down here, the second stage, that's the one that needs to run on the architecture we're gonna schedule it on.
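The demo's actual source isn't shown on screen here, but a minimal sketch of an application like the one described might look like this. The `greeting` function name and the `POD_NAME` environment variable (which you could set via the Kubernetes downward API) are assumptions for illustration, not the demo's real code:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"runtime"
)

// greeting builds the response described in the talk: a hello message with
// the node (hostname), the pod name, and the platform (OS/architecture).
// POD_NAME is an assumed env var, e.g. injected via the downward API.
func greeting() string {
	node, _ := os.Hostname()
	pod := os.Getenv("POD_NAME")
	return fmt.Sprintf("Hello from node %s, pod %s, platform %s/%s\n",
		node, pod, runtime.GOOS, runtime.GOARCH)
}

func main() {
	// Print once so the binary is easy to smoke-test locally; serve HTTP
	// only when SERVE is set, so a bare run doesn't block.
	fmt.Print(greeting())
	if os.Getenv("SERVE") != "" {
		http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprint(w, greeting())
		})
		http.ListenAndServe(":8080", nil)
	}
}
```

The key line is `runtime.GOARCH`: the same source, compiled for each target, reports `amd64` or `arm64` depending on where the pod lands.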
So that's why you see there's a variable there which is going to place whether that's AMD64 or ARM64. Now, in terms of building the container there's a couple different ways to do it. If you just use docker build, I have an example here of a script which is just docker build and then I tag the containers with either colon ARM64 or colon AMD64. And then I can push those to Docker Hub or whatever container repository you wanna use. So this is probably, if you have a traditional container, you haven't thought about multi-architecture before, you just build it on an AMD64 machine, that's what you get, right? You get an AMD64 container; it can only run on AMD64 nodes. So, let's just say for example, I can build this application both ways. I'll do a docker build, just a straight build, and I'll pass in the architecture in each case, and then I'll pass in that runtime container, because Alpine has a slightly different prefix on ARM, and then I'll get two containers there. So what it ends up as when I do that, I have here in my Docker Hub account, you'll see I have two different containers. One has the ARM64 tag and one has the AMD64 tag and I built those independently, just showing you how I did it. Okay, so that's a way that you can just build containers that are architecture specific. That's kind of the historical way. Now the new way to do this is by way of buildx. So buildx is an ability that Docker has to build with a single invocation: you just call docker buildx and then you specify the platforms that you wanna support, in this case, AMD64 and ARM64, and then it will build that as a multi-architecture container. So if I jump back here on Docker Hub, this is the one I have, go-archx. Okay, so for the tag I just put 1.0, but if I open it you'll see there's two dropdowns here. There's an AMD64 and then let's see. Can we zoom in on that please? Sorry to interrupt. Can we zoom in? Yeah, no problem. Yeah, great, thanks. Okay. Yeah, great. So this one is a multi-architecture container.
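The Dockerfile pattern described above might look roughly like this sketch. The Go version, Alpine tag, and binary name are illustrative assumptions; `TARGETARCH` is the build argument buildx sets automatically (and that you can pass with `--build-arg` on a plain docker build):

```dockerfile
# Global ARG so the runtime base image can be swapped per architecture
# if needed (e.g. a different prefix, as mentioned in the talk).
ARG RUNTIME_IMAGE=alpine:3.18

# Build stage: cross-compile the Go binary for the target architecture.
FROM golang:1.21 AS build
ARG TARGETARCH
WORKDIR /src
COPY . .
# GOARCH selects the instruction set; if left unset, the compiler
# defaults to the build machine's native architecture.
RUN CGO_ENABLED=0 GOOS=linux GOARCH=$TARGETARCH go build -o /hello .

# Runtime stage: this image must match the architecture of the node
# the container is scheduled on.
FROM ${RUNTIME_IMAGE}
COPY --from=build /hello /hello
EXPOSE 8080
ENTRYPOINT ["/hello"]
```

With a file like this, the same Dockerfile serves both the per-architecture builds and the single buildx multi-architecture build.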
So if I do the dropdown on the OS architecture there, you'll see there's two entries. One is AMD64 and one is ARM64. Now the beauty of the multi-architecture container is that whatever machine you're on when you go to run the container, the container runtime will go to find it and it'll say, oh, I'm on an ARM64 machine, I see that this image supports ARM64, and it'll go get the right image and then just automatically run it on that machine. Okay, so this is a way just to keep everything simple and support two architectures in a single image. Okay, now there's one more variant that I'm going to show here, which is the docker manifest command. I believe this is still experimental in Docker, but it's been there a while so I'm not totally sure, but what docker manifest does is it takes your individual images that I described at first for each architecture and then it kind of merges those together into a multi-architecture image. So if I went on an ARM machine and I built the ARM image, stuck it on Docker Hub, and I went on an AMD64 machine, built that image, stuck it on Docker Hub, then I can merge them together and then I end up in the exact same place that I would as if I used buildx. And this is handy because in a lot of cases you don't want to go through a lot of the hoops related to buildx; you'd rather just use two separate machines to build for each architecture and kind of link them up at the end using this manifest command. So all that gives you a little bit of background on multi-architecture images, what you're going to need if you go into a mixed node Kubernetes cluster that we're going to do next. So hopefully that's clear. I have some examples of how that works. So I'm not going to run the commands for the Docker builds right now; I just showed you they are on Docker Hub, and then let's get into the Kubernetes part and see how this works in terms of running the application in our cluster. So in terms of the cluster that we have, I mentioned it's going to be EKS.
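Putting the two approaches side by side, the command flow might look like this sketch. The repo name is a placeholder (substitute your own Docker Hub account), and the docker commands assume you're logged in with push access:

```shell
# Hypothetical repo name -- substitute your own Docker Hub account/repo.
REPO=myuser/go-hello

# Historical way: one architecture-tagged image per build, each built
# on (or cross-compiled for) its own architecture, then pushed.
docker build --build-arg TARGETARCH=amd64 -t $REPO:amd64 .
docker build --build-arg TARGETARCH=arm64 -t $REPO:arm64 .
docker push $REPO:amd64
docker push $REPO:arm64

# docker manifest then merges the two single-arch images into one
# multi-arch manifest list -- the same end state buildx produces.
docker manifest create $REPO:1.0 $REPO:amd64 $REPO:arm64
docker manifest push $REPO:1.0

# New way: one buildx invocation builds and pushes both platforms.
docker buildx build --platform linux/amd64,linux/arm64 -t $REPO:1.0 --push .
```

The manifest route is convenient for CI setups with one native builder per architecture; buildx is convenient when a single machine (with QEMU emulation) does it all.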
So what I found is it's super easy to create a cluster with eksctl. So that's what I used here with two different node groups. So there's one node group which is the AMD64 architecture with two nodes, and then there's one which is a C6g, that's the Graviton processor. And so that's going to be with the ARM64 architecture. So we're going to have two nodes in each one. And then I ran this with eksctl with this YAML file and that sets up the cluster. Okay. So what we can do is let me just run get nodes there. And then you'll see here's my cluster, I have four nodes running. Okay. So now what we want to do is think about maybe some mixed node scenarios. So let's just go with the AMD64. So I have a YAML file here which is going to be specifically only AMD64. Okay. So it has a container which is the one I just showed you on Docker Hub and it has that AMD64 at the end. So this image is only going to run on AMD64. I guess the other interesting point is the node selector down here. So by matching this node selector to AMD64, when I apply this YAML file it's going to get that container and it's going to run it on one of the two AMD64 nodes here. So let's just do apply -f and then the AMD. And so this will create the deployment and let's do a get pods. And then you'll see now I'm running one AMD64 deployment and I believe I have one replica right here. So that's running on that node. Now I have a little bit of a script here, or I can just do a curl probably, and hit that endpoint. So I think I showed on the first picture there's a load balancer in front of these nodes. I can hit that and then you'll see it prints the CPU platform is Linux AMD64. So that's the only thing currently running in my cluster. So every time I do a curl I'm just going to return AMD64 every time. Now that might be kind of your legacy situation, or you may have some containers that only support AMD64.
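The eksctl config for a mixed cluster like the one described might look like the sketch below. The cluster name, region, and instance sizes are illustrative assumptions; the key point is the two node groups, one per architecture:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: multi-arch-demo   # assumed name
  region: us-west-2       # assumed region
nodeGroups:
  - name: amd64-nodes
    instanceType: m5.large      # x86_64 (AMD64) instances
    desiredCapacity: 2
  - name: arm64-nodes
    instanceType: c6g.large     # Graviton (ARM64) instances
    desiredCapacity: 2
```

A file like this would be applied with `eksctl create cluster -f cluster.yaml`, after which `kubectl get nodes` shows the four nodes.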
So that's how you can use the node selector and schedule those on your cluster and yeah, everything is fine. The same thing can be done for the ARM64 architecture. So I have an equivalent file which is pretty much exactly the same except the container now has the ARM64 tag here at the end and then the node selector is for ARM64. Okay, so if I take that one and then let's do an apply -f with ARM64, away it goes, and then let's go back and get the pods. Okay, so now I have another one running. So I've got one of each, right? Cause I also had one replica there. So yeah, the node selector can be used to take architecture-specific containers, put them on the right nodes, and then you can use your cluster that way. Now, eventually over time, what you may want to get to is the multi-architecture deployment. So in that case, we're going to do this one. So what I've done here is I changed the image to be the go-archx one. So that's the image that I built with buildx. It supports both architectures, had that tag of 1.0, and that's pretty much it. So in this case, what we have is six replicas here and if we apply this one, it's going to schedule six replicas with some mix of AMD64 and ARM64 and the right nodes will pick up the right container images and then it will just run. So let's apply this one. That'll be created. And then let's do get pods here. Oops. Okay, so now what we have is the six pods running that I started and then those two older ones. So you see one AMD, one ARM and then six multi-arch. Okay, so now I have a script here, let's check. We're just basically going to go hit the endpoint kind of in a loop and then let's see what happens. So if we hit that, you'll see you'll get some ARM64, looks like we're getting lots of ARM64 there, and you'll see then some AMD64 is mixing in.
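The two deployment flavors just described might look roughly like the sketch below. The image names are placeholders, not the actual demo repo; `kubernetes.io/arch` is the standard node label that nodeSelector matches against:

```yaml
# Architecture-specific deployment: a single-arch image pinned to
# matching nodes with a nodeSelector on the kubernetes.io/arch label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-arm64
spec:
  replicas: 1
  selector:
    matchLabels: { app: hello-arm64 }
  template:
    metadata:
      labels: { app: hello-arm64 }
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64
      containers:
        - name: hello
          image: myuser/go-hello:arm64   # placeholder image
---
# Multi-arch deployment: one manifest-list image, no nodeSelector;
# each node's container runtime pulls the variant matching itself.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-multiarch
spec:
  replicas: 6
  selector:
    matchLabels: { app: hello-multiarch }
  template:
    metadata:
      labels: { app: hello-multiarch }
    spec:
      containers:
        - name: hello
          image: myuser/go-hello:1.0     # placeholder multi-arch image
```

The contrast is the point: the single-arch deployment needs the nodeSelector to land correctly, while the multi-arch one can be scheduled anywhere in the cluster.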
So as you go through, it'll schedule, it'll go to whichever nodes respond to that and you'll see the architecture is printed, and you'll see some of these are coming from the ARM deployment. Some of them come from the multi-arch deployment. Didn't see the AMD... here's one up here, AMD64, the AMD deployment. Okay, so that gives a pretty good feel for how to work with the multi-architecture cluster. We have two different node groups. Yeah, in the end, if we have containers that can run on either one, they can just run identically and they can be used on either architecture. All right, so by doing this really, what you can do is maybe make a migration that's over time, where you start off with your AMD64 containers. You can add in ARM containers, see how that works. You use the node selector to schedule those, and then over time, if you want to phase out some of the AMD64s for that price performance benefit, you can certainly do that. But it gives you a way to kind of ramp up without all or nothing and just try pieces of whatever your bigger solution is. Okay, so I think that pretty much covers it. We went over the image creation, how to build it, a mixed architecture cluster with two different node groups, and how we can schedule either the individual architecture containers or the multi-arch on the cluster. Okay, so maybe we'll wrap it up there. We can take any questions if we have them. Amazing, we have a lot of questions. It's gonna be a good discussion, I think, yeah. For one question, just to lay foundations: like you said, it has price performance and sustainability benefits, it seems so amazing. Like are there any drawbacks that we should be aware of? Yeah, so in terms of drawbacks, I mean, it takes a little work, okay? So, in order to get the price performance advantage and sustainability, you kind of have to do something.
I mean, in a lot of cases, let's say new hardware comes and it's faster and it's cheaper and you can take your exact same software and just move it onto the new hardware, no work. I save money, it goes faster, that's ideal. In this case, it's not quite that easy because we have to deal with a new architecture sometimes. So, yeah, there's a little bit of a minus in that you might have to build your containers for multi-architecture, and it'll probably complicate your CI/CD a little bit as you ramp this up, but hopefully the initial investment you make to get onto the ARM architecture is gonna be worth it in terms of the price performance and sustainability. Cool, and then for people who are total noobs like me and they wanna play with this, you said that there are ways, but what's a super easy way for a total noob to set up a multi-architecture, at least an ARM architecture situation to play? Yeah, it depends kind of what you have access to, I guess. I mean, there's lots of ARM development boards. I mean, even if you're using a Mac with Apple Silicon, you can even use Docker Desktop, which now has a built-in Kubernetes feature, and if you do that, you'll basically have a one-node Kubernetes solution on your laptop that's ARM64, right? So you don't have to do anything special. There's other ARM development boards that you can use and install various Kubernetes distributions and so on. Cloud is probably the easiest. I mean, all the major cloud service providers now have ARM. The one I probably would highlight if you're super new is Oracle Cloud. So they have a free tier, which gives you access to ARM servers. These are from Ampere, so it's an ARM server and basically you get four virtual processors and I think it's 24 gig RAM, always free. So you can just start up even more than one VM. You could have a dual-core VM, two of those with plenty of memory, and you could build your own cluster very easily.
So yeah, just check, all your cloud service providers have ARM instances now. Super cool. So I'm gonna get to audience questions. We have a question from Alexander. Alexander had a lot of great questions. What workloads are okay for ARM architecture? Could it be Java, native, or others? Yeah, all of those. Yeah, so there's no real limitations in terms of workloads. Yeah, and all of those things should definitely work. They can be interpreted languages, can be Java, can be compiled things, C, C++, Go, Rust, yeah, all the popular languages work. Excellent. And then we have, this one is more for me, I think. It's a, oops, I lost the one that's for me, but what kinds of workloads would be best fit to deploy in these multi-arch deployments? Could we see inconsistent results from each type? That's a great question. Yeah, you definitely can. I mean, it depends a lot. The workloads depend a lot on the underlying hardware and there's different instance types. So when the ARM architecture kind of first came into the cloud, AWS was really the pioneer. They've been deploying ARM instances since 2018, basically. And so they're on the third generation of Graviton now. And it depends on the workload. So in this case, the kinds of workloads where the price performance is better are web serving or others that are really kind of scale-out. And they're not, you know, performance intensive in terms of, you know, floating point or other things, because yeah, they're able to respond very quickly with lots of traffic. And that was really where your best benefit was obtained, I would say in most cases. But over time, you know, performance keeps getting better and better, so even if you have more computationally intensive workloads, you know, you get into like the C7g family from AWS, those are gonna have very good performance as well. So again, it's really just kind of finding the instances that have the right hardware for your workload.
And depending what it is, you'll get a good result. Excellent. And this is the one that was for me, and it's a compliment to you. Being able to see it in real time makes so much sense, which I totally agree with. Will there be a recording to watch this later? I'm unfamiliar with the LinkedIn side, but I know that this is also streaming on YouTube. CNCF has a YouTube channel. So if you go there, go to the live tab, you'll be able to see a recording of this. Thank you. And then let's go for: what other ARM-based architectures are possible? Okay, so the question isn't totally clear in that the ARM architecture has many versions, okay? So we're actually on version nine at the moment. And it started 33 years ago. So that's kind of a wide time span. But for the most part, what you're gonna find in cloud service providers and generally all other hardware, you'll find a recent version of the architecture where everything is 64-bit. And they're based on, typically, like the ARM Neoverse N1 processor or the V1 processor. So these are kind of server-class processors that are running. Now there's lots of other ARM things that are targeted more for edge computing or embedded, or if anybody knows Raspberry Pi, that's kind of one of the early versions of the ARM version eight architecture that's still out there. Tons of people use Raspberry Pis. There's 32-bit versions of the architecture that go back in time. So yeah, there's a lot of derivatives over time, but what you'll find now in kind of server and cloud is the new 64-bit architecture with high core count and good performance. Excellent. So I just wanna make a quick note that, for the demo that you showed today, if people wanna get their own hands dirty with it, they're able to do that on GitHub with this URL. And I put it up during the stream too. So get to play with it. It looks amazing. We have a question from Kyle. How well does this integrate with parallel processing?
Yeah, I'm not totally sure about that. I mean, I don't think there's anything different about the architecture that is anything different in terms of parallel processing. So most applications, you can write applications with multiple threads running on multiple cores and so on. You can build a cluster with lots of nodes. Yeah, I wouldn't say there's anything special about what we're doing here today that's any different than whatever you do with parallel processing today. Excellent. We have questions rolling in as fast as we can answer them. I really enjoy all of this conversation. So thank you. Yeah. What about developers who prefer keeping an ARM-based physical box on the desk? What are the options if you want one in your hands? Yeah, that's a good question. So there's a variety of things I mentioned. You could have laptops with the ARM architecture. So that's available now. In fact, there's Windows laptops. There's even a Windows DevKit, which came out, but probably I'm thinking that's not super interesting to people here. So Linux is a possibility. There's a bunch of boards and different kinds of things. Oh, maybe I have one. I don't know if you can see the camera. How fun. Okay, here's one, which is a Khadas. I can see that, a Khadas Edge 2. Ooh, nice. Cool. Yeah, so this is an eight-core ARM board. I think it's Cortex-A76 with pretty good size memory. You can get large memory in there and you can run Linux and do that. So yeah, that's definitely a possibility. So I think if you start looking around, you'll see lots of different ARM hardware. You probably just didn't notice it or pay attention before. Cool. I have a question. So you went through like how to make a multi-architecture container image. Is there any reason you wouldn't want to make your container image work in a multi-architecture way? No, I don't think so. I mean, I would definitely recommend it because it gives you more options, right?
Then you can try a different type of hardware, add different nodes to your cluster, see the performance you're getting, look at the price and make those decisions. But the only reason you wouldn't is if you had some software that couldn't run on the ARM architecture. Maybe you have a dependency of some kind or some container that you can't control or change. I mean, the one place I have run into some trouble is a lot of applications have layers of containers and some of them you don't know where they came from. I can't trace them back to the origin in order to build it for ARM. That does happen sometimes, but not a lot. And that's actually kind of related to Kunal's question, which is: do jars need to be compiled to be architecture specific? No, Java itself is independent, but sometimes you can do native code in Java, right? So you can mix Java with native code and there's JNI as an interface to call that native code. So that's where you could run into a problem. You could think, oh, I just have a Java application, but really you've got some native code embedded in there you might not have known about. And then how do you end up finding out? Just by the error messages? By failure, yeah, by failure, for sure. Yeah. Let me get back to the questions. What is LSE, large system extension? Okay, so yeah, large system extension was an enhancement to the ARM architecture somewhere in version eight that makes it more efficient for handling atomics, right? So the original, as ARM grew up, the number of cores, like in your phones, for example, those are all ARM cores and most people probably have eight cores in their phone. And when you're doing atomic instructions and locking and so on, the way that it worked was fine for eight cores, but then somebody built a server with 64 cores and then 128 cores, and now I don't know what the largest ARM server is, 192 cores maybe. And the atomic instructions weren't great so they were improved in the architecture.
And so when you compile your application, these things are built into your libraries typically, like your underlying libraries in Linux or whatever your runtime is. And if those are updated to use these better atomics, you can get much faster performance. So when ARM was newer, you'd see a developer move over to ARM, they would run their application and they'd say, oh, it's slower than I expected. And the cause could have been either their libraries weren't updated or their compiler wasn't updated and they ended up with some kind of older atomics which had slower performance. I'm not quite sure of this question and maybe you can help me figure it out. Docker images can have multiple tags, like version and architecture. So I guess I don't understand the question. Two different tags simultaneously. Do you, does that make sense to you? A little bit. I don't know, is my screen still sharing or no? It's not, but oh, no, it's not. I don't have a way to turn it on. But if you share it, I can, okay. That's fine. I mean, yeah, in Docker, you have the ability to use tags to specify whatever you want them to mean. I mean, some people might put the version as their tag, like version one, version two, or Golang 1.18, where the 1.18 could be the tag, or you can put the architecture there. I mean, that's pretty much totally up to you as a container creator how you use the tags. Now, regardless of what you do, if you look in Docker Hub, I showed that pull-down, it will record the architecture that the images are for kind of independently of the tags. So yeah, a lot of people use the tags just to kind of denote architecture, but it doesn't have to be, it's totally flexible. Cool. And what are the long-term plans around ARM64, cloud and on-prem? Oh, long-term plans, yeah. Maybe short-term plans too. Do you know of any future plans for ARM? Oh, sure, yeah, there's lots of stuff happening all the time.
In fact, cloud service providers are constantly announcing new hardware. There were some events recently: Google Cloud Next was a few weeks ago, and they announced new ARM instances. They already had some instances in Google Cloud, and now they announced some new instance types with better, faster hardware and better price performance. The message is pretty much the same: the hardware gets better, price performance improves, and there are always new instances coming from cloud service providers and new servers being built as well. If you just look at server hardware, you can buy off-the-shelf servers with the ARM architecture. They constantly get better, faster, cheaper. Nice. I don't know the HPC acronym, but can multi-arch work with HPC applications? What's HPC, please? Yeah, HPC is high-performance computing. Okay. So yes, HPC definitely works with ARM. In fact, for a couple of years, I don't think it's true anymore, but the Fugaku supercomputer, which held the performance crown as the fastest supercomputer, was based on the ARM architecture. So it's definitely used in HPC applications. Everything works the same; you just need to build your software for the ARM architecture and build containers, just how I showed. And HPC software is typically very large-scale applications, like a weather forecasting application or something that requires big hardware and large software. Excellent. What vendors produce production-ready Java, JRE and JDK, for ARM64? Oh, that's a good one. There's a variety. You can use OpenJDK; it's perfectly fine, I use that a lot. The other one, which is pretty good, is Corretto, from AWS. They have their own Java distribution, and because they're definitely interested in getting people onto the Graviton processors, which are based on ARM, they provide that as well.
So that gives you an easy distribution. So again, that one is easy to access, and it's open source. I would say OpenJDK and AWS Corretto are probably the most common. Excellent. And then I have one more question for you. Those of you watching, if you do have any more questions, now's your chance to get those in. Right now we have one question, which is: what are the best practices when preparing for ARM64 cloud deployment, and are there any anti-patterns? Yeah, so I would definitely say your best way forward is to start small and incremental, okay? Like I said at the beginning, if you can just get a VM going and then start experimenting, just in a manual sense, with how you build your software and how you create your containers, you're going to easily spot when things go wrong if you do that. The pitfall is kind of like, okay, everything looks like it works, my CI/CD system is good and my cluster has ARM nodes, so you just push the giant go button and expect the whole thing to just magically work. That's probably a bad idea, because you'll be digging through log files and wondering why all sorts of stuff is failing, and you'll probably spend a lot of time at it. There are actually lots of stories, if you just search the web, about companies that have migrated to ARM64, and they give talks regularly at industry conferences. And that's usually what they say: I started small with one VM, I made an inventory, tried stuff out, made a few tweaks here and there. It took me a few weeks or whatever to work through my entire application, but once it was up and going and I looked back, I didn't really change much, just a few minor things here and there, with good results. That's wise. And then the second part of the question: are you aware of any anti-patterns? No, not really. It's just basic software development stuff we're talking about here. Change one thing at a time and make forward progress. Absolutely.
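Kunal's JAR question from earlier lends itself to a quick illustration. A JAR is just a ZIP archive, so you can scan it for bundled native libraries, the hidden JNI dependencies Jason described, before you migrate. This is a minimal sketch in Python; the JAR contents and library name are made up for the example:

```python
import io
import zipfile

# File extensions that indicate architecture-specific native code inside a JAR.
NATIVE_SUFFIXES = (".so", ".dll", ".dylib", ".jnilib")

def native_libs_in_jar(jar_bytes: bytes) -> list[str]:
    """Return the names of any native-library entries inside a JAR."""
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        return [name for name in jar.namelist() if name.endswith(NATIVE_SUFFIXES)]

# Build a tiny stand-in JAR in memory (illustrative, not a real application).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("com/example/App.class", b"\xca\xfe\xba\xbe")
    jar.writestr("native/linux-x86-64/libcrypto-helper.so", b"\x7fELF")

print(native_libs_in_jar(buf.getvalue()))
# → ['native/linux-x86-64/libcrypto-helper.so']
```

A non-empty result means the "pure Java" application carries architecture-specific code, so you would need an aarch64 build of that library before moving to ARM, rather than discovering it through an `UnsatisfiedLinkError` at runtime.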
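On the Docker tags question: the reason Docker Hub can record architecture independently of tags is the manifest list, which is roughly what `docker manifest inspect <image>` prints. Parsing an abridged sample shows one tag fronting several per-architecture images; the digests below are placeholders, not real image digests:

```python
import json

# Abridged manifest list of the kind a registry stores behind a single tag.
# The mediaType is the real Docker manifest-list type; the digests are fake.
manifest_list = json.loads("""
{
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {"digest": "sha256:1111", "platform": {"architecture": "amd64", "os": "linux"}},
    {"digest": "sha256:2222", "platform": {"architecture": "arm64", "os": "linux"}}
  ]
}
""")

# One tag, several images: the client picks the entry matching its own platform.
architectures = sorted(m["platform"]["architecture"] for m in manifest_list["manifests"])
print(architectures)  # → ['amd64', 'arm64']
```

So an architecture suffix in the tag name is purely a human convention; the registry's own record of "which architectures does this tag cover" lives in this structure.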
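One small thing that trips people up during the "start small on one VM" experiments Jason recommends: the architecture string the kernel reports (`aarch64`, `x86_64`) differs from the one Docker platform strings use (`arm64`, `amd64`). A hypothetical helper to translate between them, for scripts that build `--platform` arguments:

```python
import platform

# uname and Python report "aarch64"/"x86_64"; Docker platform strings use
# "arm64"/"amd64". Helper name and table are illustrative, not a standard API.
DOCKER_ARCH = {
    "x86_64": "amd64",
    "amd64": "amd64",
    "aarch64": "arm64",
    "arm64": "arm64",  # macOS on Apple silicon already reports "arm64"
}

def docker_platform(machine: str = "") -> str:
    """Map a machine string (default: this host) to linux/<docker-arch>."""
    machine = machine or platform.machine()
    return "linux/" + DOCKER_ARCH.get(machine.lower(), machine)

print(docker_platform("aarch64"))  # → linux/arm64
print(docker_platform("x86_64"))   # → linux/amd64
```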
So no more questions came in, which means it's about time to say goodbye. Is there anything you'd like to say in closing before I say the closing script? No, just thanks, everybody, for coming. Definitely give the ARM architecture a look. I just want to point out one more reference: if you want to learn more, we have a website called learn.arm.com. Learn.arm.com, it's very easy; it's a place where you can learn. Excellent. That's right. If you go there, you'll see lots more materials. And yeah, thanks again. Yeah, thanks for a great presentation. Thank you so much for sharing your time and your expertise with us, Jason. Thanks everyone also for joining today's episode of Cloud Native Live. It was great learning about multi-architecture clouds with Jason, and learning how to build multi-architecture images was super cool. I loved, as always, the questions and the interaction from the audience. Y'all are the best. Here at Cloud Native Live, we bring you the latest cloud native code, usually on Tuesdays and Wednesdays. There wasn't one yesterday, but we'll have one again next week, at the same time, noon US Eastern. Thank you so, so much again for joining us today. Thanks to those who watch the recording, and we'll see you again soon. Bye.