Yeah, morning everyone. My name is Harry Lee, and today I'll be talking about Kubernetes on the edge with K3s for a smart metering use case. Just a quick introduction about myself: I'm the co-founder and CTO at Media AI. I've been a DevOps enthusiast for as long as I can remember. Nowadays, I help organizations adopt DevOps practices and implement cloud-native technologies by helping them navigate the cloud-native landscape. I've been working with Kubernetes for more than five years now. I'm a Certified Kubernetes Administrator as well as a Certified Kubernetes Security Specialist, and I've been a CNCF ambassador based in South Africa for three years now. I also founded the cloud-native computing meetup in Germany, South Africa, four years ago. Currently we're 980 members strong and still growing; you can find us on the CNCF community groups page. I love connecting with cloud-native enthusiasts, so if you'd like to connect with me, follow me on Twitter as well as LinkedIn. A quick word on Media AI and what we do. We are an MLOps consultancy. We specialize in ML and DevOps practices, helping data science teams build and deploy machine learning solutions reliably and efficiently, effectively helping them deliver business value as fast as possible. And of course, we do everything in a cloud-native way. If you're interested in finding out more about us, check us out at media.ai. We also post regularly on our LinkedIn page, so if you want to know more about MLOps, please follow us there. Cool. So for this talk, I'd like to take you on a journey of how we helped one of our clients build a smart metering solution. I'll be focusing on three crucial aspects: the product we're trying to build, the challenges we faced while building it, and the design that ultimately resulted in our final delivered solution.
I'll start off by giving you some context and background on what our client is trying to build. Then I'll go into some of the considerations and limitations around it, and these become requirements, of course. And naturally, requirements always have a way of becoming challenges. So I'll highlight the four main challenges we faced while building this product: provisioning, networking, high availability, and certificate management. These may seem trivial in a cloud environment, but in an on-premise edge setting they proved to be a little more involved. The design aspect of the talk will be continuously updated as I go through the challenges, and I'll end off by showing you the final design that we implemented. So why are we building this? In South Africa, we have been facing an energy crisis for the past 15 years, where our demand for electricity is far greater than the supply. To tackle this problem, our national power utility, Eskom, implemented something that we endearingly call load shedding. Basically, it's just scheduled rolling blackouts on the national grid when the load increases. We sometimes only get a few days' notice before they routinely switch off the power to your house, and recently we've only been getting a few hours' notice before they switch it off. So you can imagine the economic impact this has on businesses within South Africa. The output of these businesses decreases, right? Businesses that rely heavily on electricity, such as manufacturers and the mining industry, get impacted a lot. And the limited supply of electricity drives up the electricity cost to a point where small businesses actually go out of business. So our client sees this as an opportunity for a smart energy management system.
The aim is to target big energy consumers, such as office blocks and industrial factories, and as I mentioned, the mining industry specifically. The idea is to use IoT devices to measure energy usage, do some calculation on that to estimate the costs, and then optimize electricity usage with some form of automation. A very good use case of this is switching off appliances at night when no one is using them, or recommending the best time, such as off-peak hours, to run heavy machinery. So why did our client decide to build this and not just buy something off the shelf? IoT is, after all, well tried and tested. One thing to note is that our client is actually in the hardware manufacturing business. Their expertise is in building low-cost energy measurement devices. They foresee this as an additional service offering they can provide to their existing customers to reduce their energy spending, and by doing this, they envision they can capture more of the value chain. So how does it actually work? Our client's customer could be, say, an office block with multiple offices. There are appliances in each office, such as air conditioners or water heaters, or maybe a lab full of motors. The idea is to attach IoT devices, designed and built by our client and installed on site on the power supplies of these appliances, so that energy consumption can be measured. The data from these IoT devices is then sent to an on-premise gateway, which is also designed and built by our client. The gateway provides a secure 6LoWPAN network for communication between the IoT devices, and the data is then transmitted from the IoT devices through the gateway to a central data aggregation platform via the MQTT protocol. The IoT devices provide sensor data, so telemetry data, consisting of measurements such as voltage, current, and power.
The gateway just provides a consistent way to communicate that data to the central data aggregation platform. The central data aggregation platform provides compute that hosts several custom services built by separate development teams, written in different languages. These services perform various tasks, ranging from data aggregation and compression to data analysis, as well as an IoT platform that provides web services allowing a user to control the IoT devices. Through the central data aggregation platform, the customer is then able to monitor the IoT devices on site and perform operational tasks, such as switching the IoT devices on and off, as well as generating detailed usage reports. As part of the service offering, the central data aggregation platform can also be connected to the cloud to provide advanced data analytics, such as forecasting the energy costs for the current month, and long-term data storage. But this is not a requirement for the solution to work, since some customers may not want their data to be shared with the cloud, as they see this as a potential security and governance concern. As you can see, there are quite a few components that require different expertise, and because of this, multiple teams were brought into this project: a team focusing on building the IoT devices, a team building the gateway, a team building the IoT platform, and a team doing the advanced data analytics in the cloud. For this talk, I'll be focusing on the design and implementation details of the central data aggregation platform. So what are some considerations? Since we're building a solution primarily for companies and industrial plants, it's common for these plants to be located in rural areas. So there are several considerations around infrastructure limitations.
First of all, intermittent network connectivity between the site and the public internet is to be expected. Since these plants and factories are out in rural areas, they may rely on something like GSM, 3G, or 4G mobile cellular towers for internet connectivity, and we know that's not the most reliable. This means our solution still needs to work even when we're offline. And since we're dependent on mobile networks, data transfer costs must be considered; it becomes quite expensive if we don't keep them in check. This means we wouldn't be able to continuously stream data into the cloud. Our client also wants the flexibility of moving to another cloud provider in the future with minimal friction, so they don't want the solution to be too coupled to a cloud provider's managed services. On top of this, they also want us to use as many open-source technologies as possible. And this is music to our ears, because this screams cloud native. Yay. We also anticipated that there may be customers who don't want to be connected to the cloud at all, so they only want the operational components of the solution. In these cases, we would treat it as an air-gapped installation. So let's zoom in a bit into the site and talk about some considerations there. Similarly, inside the customer's premises, there may be intermittent network connectivity. Our solution needs to ensure that the IoT devices and applications are able to buffer messages when the connection is lost, so that they can continue transmitting when the connection is restored. And to reduce setup costs, we would leverage the customer's existing network infrastructure, which means we wouldn't have full control over it. During power outages due to load shedding, our solution needs to be resilient and continue to work when the power comes back.
And since this is an on-premise solution and the customer may not have a server room, the servers are prone to things like hardware failures due to environmental factors: dust damage, water damage, et cetera. Human error is also quite common, such as someone accidentally unplugging the power. By adding multiple servers scattered around the premises, we reduce the risk of a single point of failure, and this ensures that the IoT devices can continuously send data to the central data aggregation platform. Of course, for customers that have a high risk appetite, don't mind the downtime, and want to save costs, we should still provide the ability to install just a single server. Cool, so why Kubernetes? Once we had some clarity about the requirements, the first thing that came to mind was: let's use Kubernetes for this. Of course, this is not a "we have a hammer and everything is a nail" type of situation; Kubernetes does bring a lot of benefits to what we're trying to build. Here are the main reasons why we decided to go with Kubernetes. When the power comes back after these outages, Kubernetes pods will continue to run from the previous state, since Kubernetes is built for resilience. High availability needs no introduction: we just need to make sure the Kubernetes control plane has a quorum, and Kubernetes will do its thing even if you lose a node or two. Since there are multiple development teams working on this product using different languages, we needed interoperability and isolation. Of course, this means containers, and what better way to orchestrate containers than with Kubernetes. What's also worth mentioning is that the advanced data analytics platform in the cloud is also built on Kubernetes, so we'd have Kubernetes in the cloud and Kubernetes on the edge. Last but not least, the ecosystem. And the CNCF is our friend here.
With the vast landscape of projects surrounding Kubernetes, we can incorporate a lot of CNCF projects that we know natively support Kubernetes, and we wouldn't have to worry about compatibility. So now that we've decided we're going to use Kubernetes, with the vast number of Kubernetes distributions out there, which one should we choose? Let's take a step back and look at what we have to work with here. The server that our client has procured is actually an Intel NUC mini PC, with four CPU cores, eight gigabytes of RAM, and 500 gigabytes of SSD storage. So as you can see, we're running on limited resources here. We need a Kubernetes distribution that's small in footprint and can run on-premise and on the edge. After extensive research, aka Googling and testing, we came down to two options: KubeEdge and K3s. Both are great projects backed by the CNCF with great community support, and both are built for edge use cases. It was a difficult choice, but we ended up choosing K3s. So why didn't we go with KubeEdge? The most compelling reason for shortlisting KubeEdge in the first place was that the architecture diagram on the front page of its GitHub repository was nearly identical to the architecture we were trying to build. It's got both cloud and edge components, a built-in MQTT broker using Eclipse Mosquitto, and the out-of-the-box ability to sync IoT device statuses to the cloud. So why not KubeEdge? If you recall, one of the requirements is to allow customers to have an on-premise-only installation, so having a mandatory cloud counterpart doesn't work. KubeEdge has a way to do away with the cloud counterpart, but it proved to be more difficult than it's worth. And since we're not the only team working on this project, we had to consider the other teams involved.
Because of this, we're not in a position to dictate how other teams build their solutions. The built-in MQTT broker using Eclipse Mosquitto is not something our core messaging team wants; they had already decided they're going to use RabbitMQ. And as for the ability to sync IoT device statuses to the cloud, our IoT platform team had already decided to build that themselves as a Java application. So this opinionated architecture for setting up an IoT platform isn't really what we were looking for, since we have so many custom layers. We needed a Kubernetes distribution flexible enough that we could customize it to our needs. We ended up choosing K3s because it doesn't have an opinionated architecture for an IoT use case. What we also like about K3s is that it supports something like a static pod, but for Helm charts: it's got a Helm controller that automatically installs HelmChart custom resources for you, and Helm charts are the de facto standard we're going to use to install applications within Kubernetes. It also supports a single-node setup out of the box, giving you the ability to run the control plane and the worker node together, which is great for the use case where a customer only wants a single server. Great. So now we've decided we want to use K3s. What's next? We have to install it on the server. So how do we actually install Kubernetes on bare metal? This brings us to our first challenge: provisioning Kubernetes. You would think it's not that difficult, right? I mean, from the K3s homepage, you just run a single line and K3s will be installed. The real challenge is getting it installed on a fleet of these bare-metal servers. First, we need an operating system. We chose Ubuntu 20.04, just because our client's engineers are more comfortable with Ubuntu as a distribution, but any distribution that supports K3s will work here.
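As a rough sketch of the Helm controller mentioned above: a manifest like the following, dropped into K3s's auto-deploy directory, gets installed automatically, much like a static pod. The chart name, repository, and values here are illustrative assumptions, not details from the talk.

```yaml
# Illustrative K3s HelmChart custom resource. Files placed in
# /var/lib/rancher/k3s/server/manifests/ are picked up by the embedded
# Helm controller and installed on boot, with no manual `helm install`.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: rabbitmq
  namespace: kube-system                     # HelmChart CRs conventionally live here
spec:
  repo: https://charts.bitnami.com/bitnami   # assumed chart repository
  chart: rabbitmq
  targetNamespace: messaging                 # where the release actually lands
  valuesContent: |-
    replicaCount: 3
```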
Now it's time to get this operating system installed on the fleet of bare-metal servers. We didn't reinvent the wheel here; we decided to just go with a normal PXE network booting approach. This is a way to automatically install operating systems on servers over a private network. We set up a private network with a switch and plugged in an extra server, which we named the provisioner, to provide the PXE network booting functionality, and off we go. Next, we need to do some configuration management, because now we have to install K3s. For this, we chose Ansible as our configuration management tool, just because our team is more familiar with it, but any tool such as Chef or Puppet will work. We installed Ansible onto the provisioner server, which is already configured and attached to the bare-metal servers, and we just ran the playbook from there. The tasks we run are listed here: updating and installing operating system packages, enabling AppArmor, applying some security settings, and installing extra packages such as CA certificates. And, of course, installing K3s itself. For this, we're actually leveraging a very popular open-source Ansible role by PyratLabs. It's fantastic: it's got sensible out-of-the-box defaults, and you can get K3s up and running with minimal configuration. Once we had K3s installed, we also installed some platform services, including services that support logging and monitoring, such as Prometheus, Grafana, and Fluent Bit. And last but not least, we needed to generate client- and customer-specific configurations, such as the device list for the IoT devices and some IP addresses that we need to pre-configure, and we copied them onto the bare-metal servers. Great. So let's look at where we are at the end of this stage. We've got a Kubernetes cluster, and we've installed some services.
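The provisioning tasks above could be sketched as a minimal Ansible playbook. This is a simplified illustration, assuming the PyratLabs role (published on Ansible Galaxy as xanmanning.k3s); the inventory group name, package list, and version pin are hypothetical.

```yaml
# Simplified playbook, run from the provisioner node over the private network.
- hosts: k3s_nodes          # hypothetical inventory group of bare-metal servers
  become: true
  tasks:
    - name: Update and install OS packages (AppArmor, CA certificates)
      apt:
        name: [apparmor, ca-certificates]
        state: present
        update_cache: true

    - name: Install K3s using the PyratLabs Ansible role
      include_role:
        name: xanmanning.k3s
      vars:
        k3s_release_version: v1.21.5+k3s1   # assumed version pin
```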
We've broadly categorized all of the application services into two categories: workload services and platform services. The workload services consist of custom applications our teams have built that serve specific functionalities. There we've got the IoT platform, which is the brain of the entire operation; it allows end users to interact with the IoT devices, view the current energy usage, et cetera. The message service is responsible for data subscription, compression, aggregation, and routing. The automation service is actually just a group of cron jobs and jobs triggered by the other services. And last, we've got the manager service, which is used to bootstrap the entire cluster and to ensure that the server is in a healthy state. The platform services include peripheral services, such as Prometheus and some operators we have built. The operators' job is just to make sure the services in the workload category are running in a healthy state. There are also optional platform services, such as the EFK stack, which is Elasticsearch, Fluent Bit, and Kibana, as well as other dashboards, such as Grafana and the Kubernetes dashboard. The reason these are optional is the limited computing resources we have on our hands. If you recall, we're using 8 GB of RAM, but for customers willing to purchase a much bigger server, say with 16 GB of RAM, we would optionally install these for them. Great, now we have all of the components. Let's see how we should deal with the ingress traffic coming into our cluster. This brings us to our second challenge: networking. Let's revisit how we intend data to flow from the IoT devices into our Kubernetes cluster. The IoT devices send data to the gateway over the 6LoWPAN network.
The gateway, which is connected to the internal network, then aggregates and sends the data to the RabbitMQ message broker within our cluster. Similarly, the customer wants to be able to log into the IoT platform dashboard to interact with the IoT devices. But here comes the first challenge: how do we actually find out where the cluster is? This seems like a problem for service discovery. So let's talk about how we would do service discovery. The straightforward answer is to just assign a static IP to the bare-metal server, right? We go to the router, find an IP address that's not part of the DHCP range, and reserve it by the server's MAC address. That's great and all, but what if we have a cluster with multiple nodes? Do we do the same thing on the router, going in and configuring a static IP address for each node? That's fine for a three-node cluster, but what about a five-node or seven-node cluster? It becomes quite hectic, as you now have to send this list to the gateway or the end user to make sure they can do service discovery. And this begs another question: how would failover work in this case? All right, what if we add a load balancer in front of the cluster? Because this is what we do in the cloud, right? You add a load balancer that the cloud provider provides, open up some node ports, point the load balancer at those node ports, and off you go. But because we're working on-premise at the edge, this doesn't really work that well, because now we have to add an extra server whose high availability we can't guarantee, so it's actually a single point of failure. And we'd also have to implement custom logic for service discovery on this load balancer. It's just too much work. So we came across something called MetalLB. In order for MetalLB to work, we basically bring the external load balancer's logic into our cluster.
First of all, we need to make sure that MetalLB has an IP address pool, so we need to find an IP address range that is not part of the DHCP range, which can be handed out to services of type LoadBalancer. MetalLB allows you to operate in two modes: BGP mode and layer 2 mode, which is ARP mode. We opted for layer 2 mode, because ARP is kind of the de facto standard for any Ethernet network, and remember, one of the requirements was that we couldn't make any assumptions about the customer's on-premise infrastructure. This may seem like a lot of jargon, so let's see how it actually works. Imagine we have a three-node K3s cluster: node one, node two, and node three. We've got a router running a DHCP server with a range of 192.168.1.10 all the way to .200. We know this DHCP range beforehand, and we decided to pick an IP address outside of it; let's say we use .201. We now configure MetalLB via Ansible to use .201 as the IP address it's going to grab. When the nodes are connected, each of them automatically gets assigned an IP address from the DHCP range: say node one gets .11, node two .12, and node three .13. Upon starting up, MetalLB sees that we've configured it to grab .201, so it does exactly that: it grabs .201 and assigns it to node one. Now we can let all of the external services know that they can use 192.168.1.201 to access our cluster. Cool, so when an external request comes in, the router sees it and sends it to node one. So what happens when node one fails? Because MetalLB is installed as a DaemonSet on all of the nodes, it will see that node one has failed and move that IP address over to node two.
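As a sketch, in newer MetalLB releases (v0.13 and later) the .201 address and layer 2 mode from the walkthrough above would be expressed with two custom resources like these; older releases expressed the same thing in a ConfigMap. The pool name is an assumption.

```yaml
# Pool of addresses MetalLB may hand out to LoadBalancer Services.
# .201 sits outside the router's DHCP range (.10 to .200).
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: site-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.201/32
---
# Announce the pool via ARP (layer 2 mode) rather than BGP.
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: site-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - site-pool
```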
So we still stick with .201, and we don't have to worry about telling the external services about a changing IP address. Great, so we've decided to use MetalLB to control all of our ingress traffic. K3s by default actually comes with its own service load balancer, called ServiceLB (previously known as Klipper LB), which we had to disable, as well as the ingress controller that comes with it by default, which is the Traefik ingress controller; we had to disable that too in order for MetalLB to work. Cool, so now we've addressed installing K3s onto the bare-metal servers, and we've addressed how we manage traffic coming into our cluster. This works fine for stateless applications: when node one fails, the pod moves to node two, everything carries on, everything is great. But would this work if we're failing over stateful applications? For stateful applications, K3s by default comes with Rancher's own local-path provisioner, but this only gives you the ability to create persistent volumes on the node where the pod is provisioned. So what happens if that node dies? The pod can move, but the persistent volume won't, because the persistent volume is allocated on the first node. We need a way to replicate or move that persistent volume to the second node, and for that we need a distributed persistent storage solution. So we started looking for one. The first thing we came across was Rook, just because we'd used Rook a few years back. It's great. It's based on Ceph, which is a highly scalable distributed storage solution for block storage, object storage, and a shared file system. It's feature-rich and built for resilience and scale. But it's a little too complex for our current use case, because especially for a single-node setup, you have a lot of overhead just to run Rook.
So we set out to look for an alternative persistent storage solution. After some more Googling, research, and testing, we shortlisted yet another two CNCF projects: Longhorn and OpenEBS. Again, both are great projects with great community support and a very good fit for our use case, because both are easy to install. Both support PV replication, which is what we want: we wanted to make sure the PV is actually replicated to another node before failover happens. Both support incremental snapshots, great for backup and restoration. But in the end, through our comparison, we still selected Longhorn. Why? The single most compelling reason is that it's under the Rancher umbrella, like K3s, which is also a Rancher-backed project, so it's officially supported. We figured that future upgrades and compatibility would be a lot smoother if we went with Longhorn. Great. So now we have a way to solve our stateful application failover, and we've added Longhorn into our platform services. So now we've installed K3s, we've managed traffic coming into our cluster, and we've managed storage in a distributed and persistent manner. How do we actually access these services securely? For this, we use something called cert-manager. But before we get into that, let's talk about how traffic flows through our cluster again. Our cluster currently hosts a few web services, such as the dashboards and the IoT platform itself, as well as the message broker, which uses RabbitMQ. The IoT devices, when communicating with the gateway, use the 6LoWPAN network. That's fine; the gateway and IoT teams have their own solution for secure communication, so that's outside of our jurisdiction. What we need to worry about is the communication between the gateway and our server, as well as between the user and the server. So we have to ensure that we use MQTTS, the secure version of MQTT, as well as HTTPS, the secure version of HTTP.
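Returning briefly to Longhorn: the PV replication described above is typically requested through a StorageClass. A sketch, with an assumed class name and replica count (the talk doesn't give these details):

```yaml
# StorageClass asking Longhorn to keep two replicas of each volume,
# so a PV survives the loss of the node it was created on.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replicated
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "2"
  staleReplicaTimeout: "30"   # minutes before a replica on a dead node is considered stale
```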
The reason we decided to go with cert-manager is that there's actually no other tool out there comparable to cert-manager inside the CNCF landscape; it's kind of the de facto tool for any certificate management within Kubernetes. And we'd had great successes with it in the past, so we didn't look too deeply into alternatives. So how do we manage this issuing of certificates? First of all, we needed a self-signed root CA, so we created one and made sure to store it in a very secure and secret location. From that, we create an intermediate CA per customer. From that, we create another intermediate CA, but on a per-site, or per-factory, level. We then move that intermediate CA into our cluster for cert-manager to use via a ClusterIssuer custom resource. Great. One thing to note is that the creation of the root CA as well as the issuing of the intermediate CAs are handled outside of the cluster, at an organization level. The tool we used for that is HashiCorp Vault, but we don't have to get into that. The most important thing is that the final intermediate CA gets into our cluster and is used by cert-manager. cert-manager, upon receiving the intermediate CA, will do its thing and start issuing leaf certificates. On the HTTPS side, the ones we need are, for example, the RabbitMQ management portal, which needs to be on HTTPS because that's how the end user accesses RabbitMQ; the Kubernetes dashboard, Prometheus dashboard, Grafana dashboard, et cetera, all use leaf certificates issued by cert-manager. Of course, our custom workloads also need to be protected by HTTPS. On the MQTTS front, we use mutual TLS: all of the clients communicating with RabbitMQ, including the gateways, need to have a client certificate, and cert-manager actually allows you to do that.
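A hedged sketch of the cert-manager resources being described: a CA ClusterIssuer fed by the site's intermediate CA, plus Certificate resources for a server endpoint (HTTPS) and a client (mutual TLS for MQTTS). The Secret names, DNS name, and namespaces are assumptions.

```yaml
# ClusterIssuer backed by the per-site intermediate CA key pair,
# which the provisioning automation loads into this Secret ahead of time.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: site-intermediate-ca
spec:
  ca:
    secretName: site-intermediate-ca-keypair
---
# Leaf certificate for an HTTPS endpoint, e.g. the RabbitMQ management portal.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: rabbitmq-management
  namespace: messaging
spec:
  secretName: rabbitmq-management-tls
  issuerRef:
    name: site-intermediate-ca
    kind: ClusterIssuer
  dnsNames:
    - rabbitmq.site.example       # assumed hostname
  usages:
    - server auth
---
# Client certificate for a gateway connecting over MQTTS (mutual TLS).
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: gateway-001-client
  namespace: messaging
spec:
  secretName: gateway-001-client-tls
  commonName: gateway-001
  issuerRef:
    name: site-intermediate-ca
    kind: ClusterIssuer
  usages:
    - client auth
```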
So part of the automation process tells cert-manager to issue a client certificate whenever we add an IoT device. And this brings us to our final implementation, which is just adding cert-manager into our platform services category. So now we've got MetalLB managing traffic coming into our cluster, cert-manager protecting that traffic, and Longhorn protecting the stateful applications that we have. And this is a slide of all of the versions that we've talked about in this talk; if you're interested, you can have a look at them.

In conclusion, we've talked about the product, what we're actually trying to build, and we've highlighted the considerations for an on-premise and edge use case. We've also talked about how we implemented the central data aggregation platform using Kubernetes and a few CNCF projects: K3s, Longhorn, and cert-manager. MetalLB is not a CNCF project yet. We've also outlined some of the design challenges along the way: the four main challenges of provisioning, networking, high availability, and certificate management in an on-premise edge use case. So yeah, that's the end of my talk. Thanks, everyone. If you can, please leave some feedback for me; that'll help me a lot with my next talk. And I'm happy to take any questions, if anyone has any.

Sorry, I can't hear you. Thank you. Great talk, by the way, one of the best. How did you deal with distributing dependencies to the IoT devices? Like when you needed to update Prometheus or something else, were you caching them on the provisioning node, or were you just pulling them down over a network call?

Yeah, so because some customers will have an air-gapped installation, over-the-air updates are kind of not possible.
So what we do is actually dispatch a technician who brings the Ansible scripts to the site and runs them to update or provision those IoT devices. And this includes updates as well as adding and removing devices, yeah. Cool, thank you.

Since you disabled Traefik, what did you use for ingress?

So we didn't use an ingress controller. Basically, all the services just have the LoadBalancer type, and MetalLB sees them and automatically assigns the same IP address to all the load balancers. The thing is, we just have to make sure the ports don't conflict, because we're operating at a lower layer, not at layer seven or anything. So we just have to make sure the ports don't conflict, yeah. Cool. Any other questions?

Yeah, just out of curiosity really, when you mentioned CNCF projects, was there a reason why you went down the Kubernetes road and not something like NATS, which is a high-performance messaging system that's quite suited for something like this? Just curious really.

So the reason we went for Kubernetes is actually because we have a lot of teams working on these projects, right? One thing we said up front is that everybody should just use containers, because it's a lot more portable: the people provisioning and installing it don't have to understand the dependencies, because you just wrap everything up in a container image and deploy it. So we wanted to standardize on containers, and we needed something to orchestrate them, and that was the main reason we used Kubernetes. That's the most compelling reason. Then of course there are all the other bells and whistles that come with it, such as high availability, resiliency, et cetera. But the main reason was the standardization that we needed, yeah.

How much data are you dealing with, with the PVs and replication and everything?
Sorry, can you please repeat the question? How much data are you dealing with while replicating the volumes and creating them?

So it's actually not a lot of data, just because we didn't need high-resolution data from those IoT devices. They only transmit once per minute, so we're talking about something like 23 kilobytes over a few hours. It's very, very small, just because we didn't need per-second transmission; that resolution doesn't really mean anything from a controlling perspective. The same goes for the tariff: when you look at how energy is measured and costed, it's done per hour, sometimes not even that, on a per-day basis. So you don't need high resolution. So yeah, that was it. Thank you, cool, thanks.

Hey, thanks. I don't know if you mentioned this, but what about multi-site aggregation of data? So you have customers that may have multiple physical locations or multiple clusters; did you aggregate that data at a certain level?

Yes, yes. So there's actually only a single Kubernetes cluster in the cloud that all of the customers as well as the sites connect to. It's kind of a hub-and-spoke sort of setup: one big cluster in the cloud and multiple smaller clusters all over the place on the edge. The reason for doing that is just that we didn't want the maintenance overhead of maintaining a lot of Kubernetes clusters in the cloud, so we just have one team managing that. And that's the setup, yeah. I don't want to talk too much about cloud, because this is an edge talk. Okay, that's it. Great, thanks everyone.
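(Editor's footnote to the earlier ingress question: the setup described, multiple LoadBalancer Services sharing one MetalLB IP on distinct ports, can be sketched like this. Service names, ports, and the sharing key are assumptions; MetalLB only co-locates Services whose allow-shared-ip annotations match and whose ports don't overlap.)

```yaml
# Two LoadBalancer Services sharing a single MetalLB-managed IP.
apiVersion: v1
kind: Service
metadata:
  name: iot-dashboard
  annotations:
    metallb.universe.tf/allow-shared-ip: "site-vip"   # assumed sharing key
spec:
  type: LoadBalancer
  selector:
    app: iot-dashboard
  ports:
    - port: 443          # HTTPS for the dashboard
      targetPort: 8443
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-mqtts
  annotations:
    metallb.universe.tf/allow-shared-ip: "site-vip"   # same key, same IP
spec:
  type: LoadBalancer
  selector:
    app: rabbitmq
  ports:
    - port: 8883         # MQTT over TLS; must not clash with other shared ports
      targetPort: 8883
```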