Good morning. Welcome to this session. I am Mohamed Atef, and I am representing NCI. As you can see, this is a joint presentation by the National Computational Infrastructure in Australia and Mellanox Technologies. The title is Chasing the Rainbow: In Pursuit of a High Performance OpenStack Cloud.

Before I head to the main presentation, I'll give you a brief overview of NCI. What is NCI? NCI is the National Computational Infrastructure, and it is Australia's most highly integrated e-infrastructure environment. We have a petaflop system, which was the first in the Southern Hemisphere. We have around 25 petabytes of Lustre storage, the largest deployment of Lustre in the Southern Hemisphere, and we also have a high-performance cloud. In short, this is what we provide. We are located in Canberra, Australia, and for those of you who don't know, Canberra is the capital city of Australia; most people assume Sydney is the capital.

NCI supports a broad spectrum of research: pure sciences, strategic sciences, applied sciences, and industry. Name a field, and there is probably a researcher doing some sort of batch processing, cloud computing, or big-data analysis at NCI. Today NCI has around 2,500 active users who actually log on to our systems; that figure counts people who have submitted jobs in the past four months. We have around 500 projects, we have eight out of 22 science academy fellows, and our name appears in around 1,500 academic papers per year.

NCI is divided into two basic components. I am part of services and technologies; we look after the infrastructure. The other group is research, engagement, and innovation; these are the people optimising HPC applications, working on virtual environments, and curating data collections.

Coming to the cloud, which is what we are here for: NCI has been doing cloud computing since 2009, a long way back. We started with a VMware cloud in 2009. We were doing web services at that point in time: mission critical, dual-site redundant, with live migration via VMware. OpenStack was, I think, not even born then, which is why we were using VMware.

Then 2010 brought our first HPC in virtualization. We came up with the DCC cluster, which stands for Data Compute Cluster; again a cluster, so it was redundant, and we ran it partially virtualized under VMware. A few components were bare metal, and a few virtual machines were part of the cluster. It used 10-gigabit Ethernet and Intel Xeon Westmere CPUs, and it was basically used for workloads that were not typically suited to an HPC environment. We run a supercomputer, and there was a workload which required installing an Oracle database inside a compute node. Technically, if you ask any HPC facility whether you can do that, they are going to refuse, flatly. Because we had a virtualized environment, we were more agile: we installed the Oracle database inside a compute node and let people do their processing.

This was also the first time in Australia, I believe, that anyone oversubscribed an HPC facility. On DCC, the virtualized cluster, we used oversubscription of CPUs, the reason being that these workloads were not exactly HPC workloads; they were actually high throughput.
People were not saying, my application must finish in 20 minutes. They said, I'm fine if my application finishes in 30 minutes as long as I can run 10 more applications alongside it, because these were embarrassingly parallel applications. So we experimented with oversubscription of CPUs, and it worked brilliantly. This cluster was also one of the first in the world with native Lustre mounts for the virtual machines, and it used oneSIS diskless boot for the VMs as well, so we were not using local disks; the virtual machines booted diskless.

Then, also in 2010, we had an unnamed cluster. I don't know why we didn't name it; probably we didn't care, we just called it "cloud". This was our first experimental EC2-compatible cloud. We first experimented with Eucalyptus; you might have heard of it, it's just like OpenStack, EC2-compatible. We just couldn't get Eucalyptus stable enough, and within two months we saw the light and moved to OpenStack. Believe me, in two weeks' time we were in production with OpenStack. Our experience with OpenStack in 2010 was quite good, contrary to most accounts; you would hear from people that it was not stable, but for us it was quite stable.

Then in 2012 we had another, Red Hat-based cloud. Surprisingly, in Monday's keynote I spotted NCI's logo while the CTO of Red Hat was presenting. So yes, we partnered with Red Hat. It was an enterprise-grade cloud, predominantly for virtual machines, nothing to do with HPC. The best outcome for that OpenStack cloud is that its uptime is 100%; it never really went down.

In 2013 we partnered with Nectar, which is basically an Australian consortium for research virtualization, and we took the Nectar research cloud into production. NCI is one of the eight nodes in Australia that form this Nectar federation. The main essence of the federation is that Australian researchers should be able to launch their virtual machines in a frictionless environment for conducting their research. Our node consists of Intel Sandy Bridge, 3,200 cores with hyperthreading, and we were one of the first clouds in the world to use a full fat-tree 56-gigabit Ethernet fabric provided by Mellanox. Yes, when we were going into production we were questioned by a number of people: why go with such an expensive network stack and such expensive CPUs? Most people at that time thought we should go with 10-gigabit Ethernet plus AMDs. But once we went into production, people saw why we opted for the expensive Intel processors and the full fat-tree 56-gigabit Ethernet from Mellanox.

The whole cloud was based on the experience from our DCC cluster. Each compute node has SSDs in RAID 0, so the local storage is very fast. We have access to half a petabyte of Ceph storage, which is again part of the same fabric; it is not coming over 10-gigabit links, it is part of the 56-gigabit full fat tree. There is one issue with this particular cloud: we cannot provide Lustre on it, due to the security model. I will talk about that later on.

In 2013 we also launched our flagship cloud. We called it Tenjin, the god of scholarship in Japanese; we are going with Japanese themes nowadays at NCI.
Tenjin is the same hardware as Nectar, a full fat tree, but we have divided it into two parts. One part is a high-density zone, where we use oversubscription of CPUs; this is for web services and bursty workloads. The other zone is a high-performance computing zone with a one-to-one ratio: no oversubscription of CPU, memory, or anything else. It is based on RDO with Neutron, and we are on CentOS 7.x, whatever the latest CentOS is; we just keep updating it. The architecture is built for big data. The main differentiating factor from Nectar is that this cloud has access to Lustre, because here we can control who gets access to it. Plus, we also have SR-IOV, single root I/O virtualization, from Mellanox; I'll talk about that later as well.

We are also experimenting with a bare-metal InfiniBand cloud, so no Ethernet involved. It is based on Icehouse, which is fairly old, heavily modified by NCI and Mellanox; we work quite closely with Mellanox on this one. If we can get native IB working, we will simply move Tenjin to InfiniBand. We call it InfiniCloud. And we are conducting some experiments with containers: we have used Docker on our cloud, and we are soon going to try Singularity, another container concept which I believe is much better suited to HPC.

Systems connectivity: this is a very, very important slide. It shows why NCI is building an OpenStack cloud despite having one of the biggest clusters in the world. We have Raijin, and then its file system, shown in red: around 7.6 petabytes of /short storage at 150 gigabytes per second, the fastest file system in the Southern Hemisphere. But we cannot export it; in fact, we do not export this file system anywhere else. It is sandboxed, for the supercomputer only. Then NCI also has /g/data, which we call global data. As you can see, these are again Lustre-based, around 25 petabytes of usable storage connected via 56-gigabit InfiniBand. These three file systems, or four or five as we continue to grow them, are mounted across our supercomputer as well as our OpenStack cloud, and they are backed up to our mass-data tape store.

Why? We want to give people end-to-end solutions. If you are working on a project which requires pre-processing of your compute jobs, you can do that on the cloud, then submit your job to the supercomputer, get the results, hop back onto the cloud, do your post-processing, and visualize it, everything at NCI. You don't have to shunt your data here and there. That is what I mean by an end-to-end data lifecycle: you get your data into NCI as a one-time operation, or you can even generate data at NCI; you do your data management and job pre-processing, submit the job to HPC, get the results, and visualize them, using the MATLABs of the world on our cloud or a virtual desktop interface.

A few other examples of why we use OpenStack at NCI: we are part of the Earth System Grid Federation, the IPCC project, which I believe won the Nobel Peace Prize in 2007. People have generated their data on our supercomputer, and now they are exporting it to researchers around the world using our cloud. It also enables researchers to do web-based analytics on the data.
This is predominantly for the climate sciences. Another interesting thing is our environmental-data virtual laboratories; one is called the Climate and Weather Science Laboratory. It is a virtual desktop environment, and again, because /g/data is mounted across the supercomputer and the cloud, you can pre-process your jobs and do remote job submission; you don't even have to log in to our supercomputer. Virtual desktops are one of the most utilized features at NCI, and people use them with a number of their tools. We have a lot of virtual laboratories in production, covering geophysics, geodesy, climate, a water laboratory, and more are being added.

One of the first uses of cloud at NCI was a project on water inundation due to a tsunami. Basically, if there is a tsunami or an earthquake around New Zealand, say, we can simulate it using the OpenStack cloud and learn about water inundation in the Sydney area or other coastal areas.

This is the architecture of Tenjin. Everything is based around the IB switching fabric, so I'm not going to go into the details, but yes, we can mount Lustre, we have Ceph, we have other things, and everything is centred on our switching fabric, the Mellanox 56-gigabit network. The virtual labs and similar workloads can live on the AMDs of the world over 10-gigabit Ethernet.

Why is NCI using a high-performance cloud? These are statistics from our supercomputer. They are a year or a year and a half old, but the picture holds true every quarter. This covers only 43 days, in which we catered for around 621,000 jobs; that is a lot of jobs on a supercomputer. In the upper right you can see the number of cores, and the red line is jobs requesting 16 cores or fewer. In terms of CPU hours, of real workload, they are nothing; we are dominated by 256-CPU jobs, 512-CPU jobs, 4,096-CPU jobs. Most people have gone parallel in HPC; they don't really want to run, I shouldn't say, puny jobs that fit on a single node, because that doesn't do justice to a supercomputer. Having said that, if you look at the number of jobs rather than the CPU hours they consume, single-CPU jobs are over 300,000. Out of 620,000 jobs, 300,000, half of them, are single-CPU jobs, essentially pre-processing or post-processing. Raijin, like any supercomputer, is basically a batch system: if you use it for post-processing, you might wait one, two, or three hours for your job to be scheduled. Interactive analysis is possible on a supercomputer, but it is still painful. This is one of the reasons we have gone with a high-performance cloud.

For us, the high-performance cloud is there to complement the NCI supercomputer. Single-node jobs are no fun for a supercomputer; we invest a lot of money in it and want its network utilized the way it should be. We want virtual laboratories, remote job submission, web services, and on-demand access to GPUs, and we have done all of that. And there are workloads which are not best suited to Lustre. You probably all know Lustre; it's a parallel file system, but one problem is that it doesn't play well with jobs or applications writing tiny I/O. IOPS-hungry applications do not scale on Lustre, and this is where we use Ceph and experiment with other file systems; the small-write sketch just below shows the kind of I/O pattern I mean.
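To make the tiny-I/O point concrete, here is a minimal sketch (not from the talk) of the small-random-write pattern that IOPS-hungry applications generate; the file name and sizes are made-up illustrations. Running it on a Lustre mount versus a local SSD would show the kind of gap being described.

```c
/* Tiny-I/O sketch: many small random writes, the pattern that hurts Lustre.
 * File name and sizes are hypothetical, for illustration only. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLOCK   4096                      /* 4 KiB "tiny" write          */
#define NWRITES 10000                     /* number of writes to issue   */
#define SPAN    (1024L * 1024 * 1024)     /* spread over a 1 GiB range   */

int main(void)
{
    char *buf;
    if (posix_memalign((void **)&buf, BLOCK, BLOCK)) return 1;
    memset(buf, 'x', BLOCK);

    int fd = open("iops_test.dat", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NWRITES; i++) {
        off_t off = (off_t)(rand() % (SPAN / BLOCK)) * BLOCK;  /* random 4 KiB slot */
        if (pwrite(fd, buf, BLOCK, off) != BLOCK) { perror("pwrite"); return 1; }
    }
    fsync(fd);                            /* force the writes out of the page cache */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d x %d-byte writes: %.0f IOPS\n", NWRITES, BLOCK, NWRITES / secs);
    close(fd);
    free(buf);
    return 0;
}
```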
Then there are pipelines which are not suited to the supercomputer at all, like the Oracle-database-inside-a-compute-node requirement: never going to happen on a supercomputer, but on the cloud, why not? We want to do cloud bursting, more on this later, the idea being that once our supercomputer runs out of capacity, we should be able to seamlessly migrate single-CPU jobs onto a cloud. We are in fact experimenting with this on Amazon as well: if somebody has submitted a job at NCI, there are a lot of jobs, and we are out of capacity, we should seamlessly migrate their job onto Amazon, they get the job processed, and the results are seamlessly transferred back onto the NCI data store. There are student courses. RDMA is essential; you could not do this on 10-gigabit Ethernet, although now you can, because most of these providers are offering RDMA. And the last thing: everything should be centred at NCI, so you don't need to shunt your data back and forth.

Now a few of the experimental results, which I will go through quickly. What I have done is compare Raijin, our supercomputer, with Tenjin, our cloud based on 56-gigabit Ethernet; with Tenjin again, but this time with containers, no virtual machines; and with a 10-gigabit cloud. I'm not going to name that cloud, but you can probably all guess.

First, point-to-point latency, using the OSU (Ohio State University) benchmarks. Point-to-point means there are two virtual machines, 16 CPUs each, but we use only one process per virtual machine, send data across, and measure the latency; lower is better. The bottom line, the green one, is Raijin; that is the line we want to beat. The highest latency came from, I forgot to remove the label, AWS. The difference is so huge that I had to use a log scale to show these results. The main takeaway is that Tenjin with RoCE is quite close to Raijin, that is, to native InfiniBand, which is a very, very good thing for us.

We experimented further with bandwidth. Again, 10-gigabit Ethernet is nowhere near. But the blue line in the middle, Tenjin with RoCE, climbs after 32K and matches the bandwidth of native InfiniBand. Why after 32K? Because we use jumbo frames on our cloud, whereas on Raijin the MTU is set at 1500, since that side is latency-driven; the cloud is for data processing, so we opted for the larger MTU. (A minimal ping-pong sketch in the spirit of these tests follows below.)

Then we ran a single-node job, a bioinformatics workload, bioinformatics being the next big thing for HPC and for science. Tenjin and Raijin compete quite closely here when there is no network involved; in fact, we beat Raijin on the Butterfly benchmark because of our SSDs, whereas Raijin has normal hard drives for that. This was quite an interesting result.

Then we moved on to the NAS Parallel Benchmarks. These are small kernels with different communication patterns; for example, CG uses neighbourhood communication, and FT, Fourier transforms, uses all-to-all communication. Here I am comparing 32 processes and 64 processes, and the results are normalized with respect to 32 processes on the 10-gigabit cloud, shown in gray.
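First, though, the ping-pong sketch promised above: a minimal MPI latency microbenchmark in the spirit of the OSU point-to-point tests (a sketch, not the real osu_latency code). The launch lines in the comment are illustrative Open MPI transport selections; the exact BTL and PML names depend on your build, so check ompi_info.

```c
/* Minimal MPI ping-pong, OSU-style. Run across two nodes/VMs, e.g.:
 *   mpirun -np 2 --mca btl tcp,self ./pingpong   (force plain TCP)
 *   mpirun -np 2 --mca pml yalla    ./pingpong   (Mellanox MXM, if built in)
 * Transport names vary with the Open MPI build; these are examples only. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define REPS 1000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int size = 1; size <= 1 << 20; size <<= 1) {   /* 1 B .. 1 MiB */
        char *buf = malloc(size);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++) {
            if (rank == 0) {            /* rank 0 sends, then waits for the echo */
                MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {     /* rank 1 echoes everything back */
                MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double usec = (MPI_Wtime() - t0) / (2.0 * REPS) * 1e6;  /* one-way time */
        if (rank == 0)
            printf("%8d bytes  %10.2f us one-way\n", size, usec);
        free(buf);
    }
    MPI_Finalize();
    return 0;
}
```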
Coming back to the NAS results: the 10-gigabit cloud does scale from the first gray bar to the second, but not by much, even though we doubled the process count from 32 to 64; it should have scaled well, and it is not scaling. Tenjin is nowhere near Raijin, but it scales very, very well; it is on the order of four times faster than the 10-gigabit cloud, which technically makes sense, 10 gigabits versus 56. Another interesting thing is that when we used containers, we got results matching Raijin. So if you use containers with RoCE, you get almost native performance, which was a very interesting result for us. I'm going to skip over these slides; they are communication profiles.

We then ran a molecular dynamics code, NAMD. Everybody loves to run NAMD because it scales well and it is a real application. I decided not to compare it against the 10-gigabit cloud because that one was using AMDs, and even with Intel it was just very slow; there is no point in showing that those clouds are slow. The main essence is that Raijin scales quite well; we scaled it up to 128 processes. Tenjin, using RoCE, in fact it is not using RoCE here, I'll come to that, scales quite well up to 64; it continues to scale beyond that, but the linearity goes away, so from 64 to 128 processes it is not scaling as well, though it is still scaling. RoCE with containers, on the other hand, scales well; I couldn't run the 128-core experiments because I ran out of test kit. We will continue these experiments, but one interesting thing we found was that the best NAMD results for Tenjin, on the Mellanox interconnect, came not with RoCE and not with the Mellanox MXM or yalla interfaces, but with the TCP interface. The takeaway, which we are going to examine in detail, is that when you run something on the cloud, you have to be very sure which underlying network transport, what Open MPI calls the byte transfer layer, or BTL, you use. For NAMD we got the best results with the TCP BTL, whereas on the NAS Parallel Benchmarks we got the best performance from the yalla interface, and for the bandwidth tests the best performance came from RoCE. So you have to make sure which BTL you want to use.

One more interesting thing is NUMA. There is an in-house application belonging to one of our guys; he is the developer of the code, and he told us: my code is not scaling, and it is not because of any network interface, it is because of the lack of NUMA awareness, please fix it. I think NUMA awareness is now being made available with the Liberty release; I have not tested it, we are on Kilo, but this is actually hurting us. (There is a small affinity sketch just after this passage.)

NCI is committed to HPC; we want researchers to get on with their research. Raijin, our supercomputer, is heavily, I shouldn't say oversubscribed, but very, very busy, so there are people who just don't get time. So we have released some open-source tools which behave like Raijin: your job submission script for Raijin will also run on these clusters if you spin up your own cluster on the Nectar cloud. We also worked with Intel and Amazon and released Raijin-in-a-box using Spot Instances. Spot Instances are a fantastic concept where you simply bid for your virtual machines: rather than paying a dollar for your 16-CPU HPC virtual machine, you can say, I'm only going to bid 29 cents.
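And here is the affinity sketch promised a moment ago: a minimal look at NUMA placement from inside a (virtual) machine, assuming Linux with libnuma installed (build with -lnuma). If the hypervisor does not expose the topology to the guest, everything shows up as a single node, which is exactly the scaling problem just described.

```c
/* Report and pin NUMA placement from inside a guest. Assumes libnuma. */
#define _GNU_SOURCE
#include <numa.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        printf("no NUMA support visible in this guest\n");
        return 0;
    }
    printf("NUMA nodes visible: %d\n", numa_max_node() + 1);
    printf("this thread is on CPU %d, node %d\n",
           sched_getcpu(), numa_node_of_cpu(sched_getcpu()));

    /* Pin the thread to node 0 and prefer its memory for allocations. */
    numa_run_on_node(0);
    numa_set_preferred(0);
    printf("after pinning: CPU %d, node %d\n",
           sched_getcpu(), numa_node_of_cpu(sched_getcpu()));
    return 0;
}
```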
Back to Spot Instances: if the data centre is not under much load, they make the VM available at your bid; if they run out of capacity, they give you two minutes and then kill your VM. In those two minutes you might be able to checkpoint your jobs. It is a fantastic concept. Have a look at our YouTube video as well, on how to build a supercomputer using AWS Spot Instances; if you are short of money, it is one of the best ways to get into HPC.

Another interesting thing we want to do: instead of bringing HPC to the cloud, why not bring the cloud to the HPC cluster? It is controversial, but it is possible. We can run containers and Singularity inside our HPC cluster; we don't have to spin anything up through the OpenStack cloud. It is just an interesting idea.

So my conclusion: yes, HPC, but we are still chasing the rainbow. I don't see my rainbow yet, because there are certain aspects where we still cannot match the performance of a supercomputer. Parallel jobs run well on the cloud, but is it actually HPC? No, it is not, because in HPC people often ask me: where did my cycle go? On the cloud I cannot tell you where your cycle went, because the hypervisor just ate it. It is, however, good for high-throughput computing: if you have a lot of jobs and don't care exactly how fast each one finishes, it is perfect. Another takeaway is single-node performance: it is on par with bare metal, no big surprise there. We also want hardware performance counters on virtual machines; I think KVM has made this available now, starting from Linux kernel 3.3. We have not tested it, but hardware performance counters are a must if you want to go into HPC; people will simply say, I'm not going to run this job on a virtualized cloud, because I cannot characterize my job. Bare-metal provisioning and these other points, technically I am just ranting. But one thing we found important: locality-aware scheduling is missing in OpenStack. Come on, guys, it's 2016 now; locality-aware scheduling is a must. It should be network-aware, it should be NUMA-aware. We just cannot live with the puny, strange greedy algorithm that OpenStack uses by default. People are working on this; I saw a couple of very interesting presentations. So that's pretty much from my side.

Thanks, Mohamed, for sharing your experience and the NCI benchmarks. I'm going to start my part of the presentation, which will be short, with a quiz, because I want to share some of my revelations about cloud. This is the worldwide unit shipment of an electronic gadget that you all know. Who can guess what it is? Come on. Almost: it's the digital still camera. Shipments peaked around 2008 and began a steep decline around 2011 and 2012. And who can tell me what happened in 2007? Yes: the first iPhone shipped in 2007, which coincided with the decline of the digital still camera. So we are seeing a transition, and this is just one example, from special-purpose hardware gadgets, appliances, and solutions to, in this case, a common mobile computing platform, whether iPhone or Android. And on the iPhone you can run multiple software applications that deliver the functionality of the special-purpose hardware gadgets you used to have.
And they can actually do a much better job. For example, I can load up Yelp, find a restaurant I like, call them, get directions, order delivery, and so on, all within the same device. Over the years since the first iPhone was released, its computing power has increased about 84 times. So the common computing platform is becoming good enough to run the majority of your workload; maybe not all of it, but the majority. This is a similar transition, from a special-purpose-built environment to a more common computing environment like the OpenStack cloud.

If you went to the Monday keynote session, Gartner talked about bimodal IT, and this effect is being seen in the HPC space as well. The HPC cluster is really designed to provide high-performance computing and normally accommodates a single tenant: multi-user, but single tenant. It is built for large distributed workloads, sometimes tightly coupled, where you have to use RDMA to communicate between the compute nodes. And in terms of data isolation and protection, which is important for genomics and bioinformatics workloads, the HPC cluster is really poor. The cloud is built for a different purpose: for agility, quick access to computing resources, ease of use, experimentation with your ideas, and elasticity of the resource pool. It can accommodate multiple tenants and normally runs more generic, loosely coupled workloads, but it has really good data isolation and protection. It is similar to the digital SLR camera, which some avid photographers will definitely never get rid of, versus the iPhone, which has a good-enough camera but is a common compute platform that lets you do many things very quickly.

When we look at workloads, it is really not black and white, web services in the cloud and HPC workloads in the HPC cluster; we see the world more in fifty shades of gray. The new generation of scientific research computing has many types of workloads with different requirements on data isolation, workload affinity, data access, and so on. And the key message of my part, if you remember nothing else I say, is this: no matter where you draw the line, whether horizontally or by workload size, Mellanox provides you the highest-performance network for your data access and workload communication.

So how do we do that? Moving to the cloud is not necessarily always, but normally, associated with virtualization, so that you can run virtual machines, utilize your resources much better, and gain agility. But virtualization brings a few kinds of performance penalty. Compute virtualization introduces a hypervisor: with bare metal you are driving down a very fast freeway, and the hypervisor adds a toll booth, so you are slowed down, especially in I/O performance. With network virtualization and software-defined networking, one very popular way of doing SDN is the overlay style: you add an additional packet header, the VXLAN or NVGRE header, to differentiate between tenants. And not all NICs and adapters can recognize this new packet format and handle it properly.
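For concreteness, the packet format in question looks like this: the 8-byte VXLAN header from RFC 7348 which, together with an outer Ethernet/IP/UDP wrapper (UDP destination port 4789), encapsulates every tenant frame. A small self-contained illustration:

```c
/* The VXLAN header prepended to every tenant frame (RFC 7348).
 * The inner Ethernet frame travels inside outer IP/UDP plus these 8 bytes;
 * the 24-bit VNI is what keeps tenants apart. */
#include <stdint.h>
#include <stdio.h>

struct vxlan_hdr {
    uint8_t flags;          /* 0x08 = "VNI present" bit set */
    uint8_t reserved1[3];
    uint8_t vni[3];         /* 24-bit VXLAN Network Identifier */
    uint8_t reserved2;
};

int main(void)
{
    struct vxlan_hdr h = { .flags = 0x08, .vni = { 0x00, 0x30, 0x39 } };
    uint32_t vni = ((uint32_t)h.vni[0] << 16) | (h.vni[1] << 8) | h.vni[2];
    printf("VXLAN header: %zu bytes, VNI %u\n", sizeof h, vni); /* 8 bytes, VNI 12345 */
    return 0;
}
```

Eight extra bytes sound harmless, but a NIC that cannot parse past them loses its usual offloads for the inner frame, which is exactly the penalty described next.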
When the NIC cannot parse that format, the tedious packet-processing tasks land on the CPU: checksum calculation, RSS, large-packet offload, and so on. The other penalty is inefficient transport-layer protocols for some workloads. TCP/IP was really designed, it started, for wide-area networks, for very lossy and slow links, so it has to accommodate packet loss at the link layer. As a result it has a lot of built-in mechanisms, handshakes, sequence-number checks, and so on, that must be done in the CPU. But there is a new generation of protocols that can eliminate these inefficiencies.

So how do we overcome all this? We use SR-IOV, single root I/O virtualization, to overcome the compute virtualization penalty: it lets you bypass the hypervisor and OVS and gives the application in the virtual machine direct access to the NIC itself. For the SDN and network virtualization penalty, we have VXLAN offload; actually, it is not restricted to VXLAN, it also supports NVGRE and, in the future, Geneve. If you look at the benchmarks we ran with multiple SDN partners, the light green bar shows bare-metal VLAN performance in terms of throughput and CPU load. The moment you turn on VXLAN without VXLAN offload, you drop to the red bar: significantly lower throughput at much higher CPU utilization. The dark green bar shows that when you turn on VXLAN offload, you basically go back to bare-metal performance. This becomes increasingly important as the interconnect speed goes up from 10 to 25 to 40 to 100 gigabits. And to overcome the transport-protocol inefficiencies, we use RDMA. I don't need to explain RDMA to the HPC community: it does kernel bypass and protocol offload, and all the data I/O operations are done in the NIC itself without CPU involvement. As a result, we run data I/O on both InfiniBand and Ethernet networks at 100 Gbps.

For OpenStack, we want to enable you to run at your optimal, extreme performance no matter what you choose. If you want InfiniBand, we have OpenStack over InfiniBand, and the ML2 plugin has been available since the Havana release. We do the translation from the MAC addresses used in the OpenStack scenario to GUIDs, and we map VLANs to PKeys, partition keys. InfiniBand is inherently software-defined: the control plane and data plane are always separated. In terms of Ethernet, not all Ethernet switches were born equal. Some of the dominant players in Ethernet switching chips, such as Broadcom, came from the campus and carrier Ethernet space, where some loss is acceptable because the upper-level protocols can handle it. The Mellanox Ethernet line actually came from InfiniBand, from the data centre scenario, where the requirements on performance and packet loss are very stringent. As a result, our latest generation of Ethernet switching silicon and switches, called Spectrum, supporting 10, 25, 40, 50, up to 100 gigabits, has zero packet loss at any packet size, down to the smallest 64-byte packets. And we have low and consistent latency, as you can see on the bottom: our latency is consistent from 64 bytes up to 9K jumbo frames, as opposed to the Tomahawk chip, which shows about 10 times the latency once it gets to jumbo frames. And we also have very good microburst absorption capability, which matters for data-analytics workloads where you get incast traffic.
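As a small companion to the SR-IOV and RDMA points above, here is a minimal libibverbs sketch, assuming the verbs library is installed (build with -libverbs). It only lists the RDMA-capable devices a VM can see, for instance through an SR-IOV virtual function, and moves no data.

```c
/* List RDMA-capable devices (InfiniBand or RoCE) visible to this host/VM. */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int n;
    struct ibv_device **list = ibv_get_device_list(&n);
    if (!list || n == 0) {
        printf("no RDMA devices visible (no SR-IOV VF passed through?)\n");
        return 0;
    }
    for (int i = 0; i < n; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx) continue;
        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0)   /* port numbers start at 1 */
            printf("%s: port state %d, active MTU code %d\n",
                   ibv_get_device_name(list[i]), port.state, port.active_mtu);
        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}
```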
On microbursts, we can accommodate almost 10 times more than the other, Broadcom-based switches. So with that, I want to close. If you have further questions, come and visit us at our booth, next to the rock band. And I'll invite Mohamed up in case there are any questions. Three, two, one. OK, thank you, everyone.

OK, a question. I had a question: in building this purpose-built cloud, you basically tailored the cloud for performance. What were the trade-offs in that? That's my first question. And the second question is about the resiliency aspects: I think you showed a figure of 100% for the last two years, but that was for another cloud, not for Tenjin. Can you address those two aspects? Thank you.

In terms of uptime, Tenjin has an uptime of 99.8%. It went down for a while, but that was due to a power failure, and we fixed it. The other question was what we had to sacrifice: absolutely nothing, apart from spending a bit more money. That's it. Thank you. Thank you, everyone. Thank you for coming to our session.