Great. Well, first off, thanks so much for coming to the session. I do have a request before I get started. Because of the confidentiality of this session, I wanted to ask that you all agree not to share any of the takeaways from this session with anyone else. I thought that would be easier than making you all sign an NDA. Is that fair? Okay, cool. Right, so my name is Lukonde Mwila, and I'm a developer advocate for Kubernetes at AWS. I'm part of the EKS team. I'm also a CNCF ambassador, and I'd love to connect with as many of you as possible. And if I don't get the chance to interact with you directly here, please feel free to reach out to me on Twitter or LinkedIn. The developer advocacy team that I'm a part of also has a dedicated YouTube channel called Containers from the Couch. Some of you might have heard of it, and we post a lot of Kubernetes and cloud native related content over there. I also have a channel of my own; if you want to search for that, you can just use my full name. Great, so I don't know about you, but one of my favorite things is to hear people's Kubernetes introduction stories, how they first came to work with the technology in a hands-on way, not just hearing about it. And I find that generally people fit into one of three categories or camps. There may be more. The first would be the group of people who hear about it from someone else. Maybe they read an article, maybe they watched a video, or worked closely with a team that was using it, and they heard about its capabilities, and that's what got them into it. So if Kubernetes were a destination, they willingly buy the bus ticket to get there. The second group are those that don't really have much of a choice. It's kind of like being told, hey, here's the bus ticket, I'm going to drop you off at the station. Whether you like it or not, you're going to use Kubernetes. The third group of folks, the "K8s chose me" camp, is a little bit of a conspiracy theory.
But I believe it, because I think I fall into this category. And with this one, some of you might relate: everything was a blur. So many things were hazy, but you just remember you woke up one day, you were surrounded by Kubernetes, and you were required to implement best practices in a short amount of time. I hear a couple of chuckles, so I think some of you can relate to that as well. So not so much of a conspiracy theory. Now, I think you'd also agree with me that when you're just getting started with Kubernetes, or really with almost any new piece of technology, you're excited and eager to become an expert. But when you look back on your journey, you'll quickly find that you were quick to come to certain conclusions. One of the conclusions we came to too quickly was: okay, this is basically just about figuring out YAML and we're pretty much set. That's half the battle. Time would eventually reveal that we were very wrong. Right, so a little bit more context. I was part of a team at a financial institution, and we had been on a couple of adventures together. We were the platform team responsible for migrating a particular microservice-based project into a Kubernetes environment. We were dealing with a number of microservices that were all going to run as stateless applications in the cluster's environment, but they needed to connect to a bunch of databases outside of that particular context. So we needed to find a way of storing the sensitive database credentials, and that's how we came across the concept of secrets. We were going to use these secrets in Kubernetes so that our applications could connect to the relevant databases. Now, if you'll indulge me just a little bit, I want to cater for people who may be completely new to this concept. For those of you who are very familiar with this, the more exciting stuff, as it were, will come a little bit later.
But just so everyone's on the same page and we're tracking together: secrets, in the context of Kubernetes, are one of two types of resources that are used to store application configuration data. The other resource type would be config maps. The primary difference between the two is that secrets are used to store small pieces of sensitive information, like database credentials, OAuth tokens, and TLS certificate data, among other types of data. And over here is just an example of a manifest. You'll see some usual suspects like apiVersion, kind, and metadata; those will be universal to your Kubernetes resources. The unique ones in this particular case would be type and data. Data you will also find in your config maps; these are the key-value pairs that point to the sensitive pieces of data that you have. And the type essentially dictates the kind of sensitive information that you'll be storing inside of your secret. Now, when we came across this concept, we had some short-lived joy. There was joy because we thought, okay, we now know the component that we need to make use of to store our database credentials so that the relevant applications can connect to the DBs. But then that bubble was burst really quickly when we found out that secrets are not so secret: they're not encrypted. Now, before we lost all our hair from pulling it out of our heads, we thought, okay, let's take a step back and understand how secrets are actually stored inside a cluster and the flow by which they get mounted into a pod. Secrets are stored inside the etcd data store, and they're stored there in an unencrypted format by default.
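To make that concrete, a minimal Secret manifest along the lines of the one on the slide might look like this. The name and values here are hypothetical, and note that the values are only base64-encoded, not encrypted:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # hypothetical name
type: Opaque                  # generic key-value secret
data:
  username: YWRtaW4=          # base64 of "admin" -- encoding, not encryption
  password: cGFzc3dvcmQxMjM=  # base64 of "password123"
```

Anyone with read access to this manifest can recover the values with a simple `base64 --decode`, which is exactly the problem being described here.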
As for the flow, the kubelet, which is an agent running on the node or host inside of your cluster, makes a request to the etcd data store through the API server and then mounts the secret on the relevant host in a temporary file system, specifically on the hosts where there are pods trying to consume these secrets. And then finally, our pods can mount these secrets either as environment variables or as volumes. So this was a good starting point for us to figure out how this flow worked. The next thing was to go back to the panic room and figure out, okay, what exactly are the red flags? What are the main risks and vulnerabilities that we need to be aware of? And we got it down to a couple of them. The first is probably the most obvious: the fact that secrets are stored in an unencrypted format in the etcd data store, which was obviously a very interesting conversation with the cloud security team. Again, maybe some of you can relate to that. The other thing was that we were used to working with Git, and a lot of our Kubernetes configurations were stored in Git repositories. But there was obviously a challenge now when it came to secrets, because there would be sensitive values in those particular manifests, which at most can only be base64-encoded, which isn't actually encryption. That was going to present a challenge for us, and I'll elaborate on that a little bit further on in the presentation. The next thing was that we were concerned about mounting our secrets as environment variables, because the values would be exposed to every single process inside of the pod, and there'd also be a very high chance of the sensitive values being logged at some point. So we considered the option of mounting our secrets as volumes instead. But even with that approach, we still had to consider, well, how safe are the volumes that these secrets are actually being stored on?
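As a sketch of the two consumption options just mentioned (the pod name, image, and secret name here are hypothetical), a single pod spec can illustrate both the environment variable approach and the volume approach:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders-api                               # hypothetical workload
spec:
  containers:
    - name: app
      image: registry.example.com/orders-api:1.0 # placeholder image
      env:
        # Option 1: environment variable -- visible to every process
        # in the container and easily leaked into logs.
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
      volumeMounts:
        # Option 2: volume mount -- the value lands in a tmpfs file instead.
        - name: db-creds
          mountPath: /etc/db-creds
          readOnly: true
  volumes:
    - name: db-creds
      secret:
        secretName: db-credentials
```

In practice you would pick one of the two; the volume route avoids spraying the value across the process environment, but it raises the question of how safe the backing volume itself is.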
So we'd still need to think about security pertaining to those specific volumes. And then lastly, there's root user exploitation. If you think back to the flow by which secrets get mounted onto a particular host, those secrets are stored in temporary file systems. These file systems are managed by the host, and so if at any point an attacker or malicious user gains privileged or root user access, they could easily get to those secrets inside the temporary file systems, and that could give them access to the API server as well. So leaving the panic room, this was pretty much the thinking of everyone: working with secrets in Kubernetes is not going to be as simple as we thought it would be. Nonetheless, this is the quest we'd chosen to go on, so we needed to figure out how we were going to address each one of these red flags that we had come across. In this process, the reasoning we followed was: let's take a step back and consider how you would keep a secret safe in general, even outside the context of technology. The main things you'd be concerned with are: where is the secret kept or stored to begin with? Who needs to know about the secret? From there, you start thinking, okay, great, so how is the secret then shared? In addition to that, how is it going to be consumed by the relevant parties or entities that need it? Then we also have to think about the case where something goes wrong, which is not a fun or interesting conversation to have, but you have to think about it nonetheless: how do we prevent our secret values from being easily interpreted if something does go wrong? And lastly, are there any additional guardrails that we can put in place to dictate how our secrets are used, to prevent things from going wrong?
Basically, how do we detect violations that put us at risk? So the first thing we had to consider is where we're going to keep our secrets. Where will secrets be stored? To begin with, for the platform team, the secrets were kept up here in our heads. Maybe some of you can attest to that; I won't ask you to raise your hand, you might get in trouble if you're seen. One of the reasons we delayed, though, was because we wanted the secrets strategy to fit within our GitOps approach, which I'll get to shortly. But because we were storing the secrets in our very secure heads, this did mean that there were some bad practices taking place. If you needed to share the secrets with someone else on the team, then let's just say there was sensitive information being shared over Slack and over Microsoft Teams. This isn't being recorded, is it? And of course, this was happening quite a lot across different environments. Now, I'm going to speak a little bit about GitOps, because this is what we had decided on for our change control process for our workloads and our infrastructure. For those of you who may be new to this concept, it's basically a paradigm or model that brings together Git and DevOps workflows and allows you to extend the source of truth for the desired state of your cluster's environment to a Git repository or a Helm repository. Specifically in the context of Kubernetes, you would have a GitOps operator like Argo CD or Flux or Fleet; I'm not sure if there are more tools out there. The role of this GitOps operator is that it's an application that's essentially watching the source of truth where your desired state is defined, which could be a Git repo or a Helm repository, and it will continuously compare that to the destination, which is the live state of your cluster.
In the case that it picks up on any deviations or differences, it will then attempt to reconcile that, either automatically or by simply notifying you, depending on how you configure it. And just real quick: in our case, we were using Argo CD, and I will speak about Argo CD applications a little bit later on, so I just want to make sure we're all tracking. An application, in the context of Argo CD, is a custom resource definition that defines a connection between a specific source, like a branch in a Git repository, and a namespace in your specific cluster. So just keep that in mind when I speak about them later on. Right, so now I'm going to speak a little bit about secrets in Git. We've got our GitOps approach, and we know the vulnerabilities that exist around secrets, and I want to elaborate on them a little bit further in case someone here may be thinking, well, I still don't get it. Again, the easiest one is the fact that secrets are not encrypted; your manifests are only going to mask those values with base64 encoding. And remember, Git is essentially a collaborative tool. You have a lot of people that will keep coming into the project, accessing that repository, and that lifecycle continues. So you're essentially exposing these sensitive values to a host of different people that will be able to clone these repositories and commit to them on and on. In addition to that, there's the challenge that you can't apply fine-grained access control to something like a subfolder in your Git repository. That means just about anyone that has access to that specific repository will be able to access the relevant files in there. And then there's the issue of commit history, which isn't actually an issue, because we love commit history. However, if you were to commit a secret into a particular repository and then realize, actually, this is risky, and remove it.
Someone that gains access to that repository later on can still use the commit history to check out an earlier point in time and get access to those sensitive values. Considering these different things, we basically came to the conclusion that we were not going to force Git to be something that it's not. We were going to use it for what it's actually made for, and instead we were going to use a managed secrets solution. And that's what we did. That was something that was going to give us encryption of all of our sensitive data at rest. We'd be able to apply the relevant fine-grained access control. We'd also be able to check the logs to see how the secrets were used, for auditing purposes, in case a red flag had come up, so to say, around how a certain secret was being used. Now, the next thing was to consider how we were going to get our secrets from this external source into our cluster's environment. And that's when we came across the particular tool that we used, the External Secrets Operator. So I'm going to elaborate on some of the components in the architecture of the External Secrets Operator, and I'll start with the secret store. The secret store is a namespaced resource that defines how your secrets are going to be accessed from the external source. It determines how communication is going to work with the relevant API for whatever external secrets manager you decide on. Now, the secret store needs to reference something that will allow it to authenticate and authorize successfully against the external API. One approach is to store your credentials in a Kubernetes secret, as you can see over there. But the problem with that is that it kind of takes us back to square one: we're trying to move out of that particular model, and how are we going to manage that secret?
Now, I would say, if you're just trying this out as a POC and you just want to see how the External Secrets Operator works, then sure, you can follow this approach, but I wouldn't advise it for production. Thankfully, we were doing this in an AWS environment, so instead we used IRSA, which is IAM Roles for Service Accounts, which allowed us to make use of temporary session tokens from an OIDC provider. That was bound to a specific service account, and the service account was the one that the External Secrets Operator was using. That service account would then assume a role in the AWS environment so that it could access the secrets in the external source. The next component that I want to cover is the external secret custom resource definition. So whereas the secret store defines how secrets will be fetched, the external secret deals with what will be fetched and what target secret will be created. This is where you would define the specific values that need to be fetched, without actually putting the sensitive values in there, of course, and this is where you would specify the secret that should be created. The external secret is then going to create what is known as a target secret. Now, some of you might have already picked up on this: what do we do about that target secret? The good news is that we don't have to keep it in our Git repository, so we're safe on that front. But it kind of takes us back to the issue of the lack of encryption in the etcd data store, because that secret is still going to end up over there. To address that, we used envelope encryption with KMS. And thankfully, with this particular approach in the AWS environment, the data keys that are used are not retained or managed from that context; they're only available to the user, and you can have a customer managed key for that as well.
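Putting those two components together, a sketch of a SecretStore that authenticates via an IRSA-bound service account, plus an ExternalSecret that produces a target secret, might look like this. The names, region, and Secrets Manager path are all hypothetical:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
  namespace: ecommerce
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-west-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa  # bound to an IAM role via IRSA
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: ecommerce
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: db-credentials             # the target Secret ESO will create
  data:
    - secretKey: password
      remoteRef:
        key: prod/ecommerce/db       # placeholder path in Secrets Manager
        property: password
```

Neither manifest contains a sensitive value, so both are safe to keep in Git; the target secret that ESO creates does end up in etcd, which is why the KMS envelope encryption just mentioned matters.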
The data is then stored locally, along with the data keys, which themselves get encrypted. And that'll be important for another component later on. I also want to mention that on that front, this was us partially taking care of what happens in the case that there's some exposure of our secrets: how do we prevent them from being easily interpreted? So that dealt with it in part. The next thing was who needs to know about the secret. For this, we had to consider how we were going to protect our secrets from two perspectives: users as well as workloads. It was helpful that with the particular model we went with, we were using Argo CD, and there were lots of cluster personas. Of course, there were the platform engineers, who would have access to the API server directly. But we also realized that there are some cluster personas that should never have direct access to the API server, like your application developers, like solution architects who wanted to have visibility of components inside the cluster's environment, as well as QA testers. In this case, Argo CD exposes a UI where you can see what's running inside your cluster's environment, but you can narrow that down to specific applications, and applications fit within a particular project. So we were able to apply role-based access control on Argo CD's front, and there were some cluster personas who would only have access to the cluster, as it were, by seeing what they need to see through Argo CD. And if you see this line right at the bottom here, it's just to represent the fact that none of them would have direct access to the API server. So this is one approach to protecting your secrets: some folks just never have access to the API server to begin with. But then we also have to consider workloads, which is a very important aspect, of course.
And the first layer, or the foundational layer, that you have to consider is namespaces. Sometimes namespaces get a bit of a bad rap because they don't have hard or strict isolation. For example, with network traffic, pods in different namespaces can talk to each other freely unless you apply network policies, or mTLS and authorization policies with a service mesh; by default, there's back-and-forth network traffic. However, in the case of secrets, namespaces actually are a good foundational layer for separating your workloads, because an application in namespace A, for example, is not going to be able to mount a secret that lives in namespace B. The next thing we had to consider, obviously, is that each workload should make use of its own distinct identity, which is provided to us by service accounts, and then binding each service account to a specific role, and in that role defining only the permissions that are necessary for that particular workload. Now, roles do not allow you to have explicit deny rules; you can only add allow rules. So that's one approach of relying on implicit deny, making sure that some workloads are not allowed to carry out certain operations against the API server. If you look at this diagram over here, you'll see that both the top and bottom workloads are not allowed to get, list, or watch secrets. And here's an example of a manifest with a role and a role binding. At the top over there, we have a role that is explicitly adding allow rules for get, list, and watch, but only on pods and services. Right below it is a role binding that references this same role and then adds a service account as the subject, binding that service account to that particular role. So this is just an example of that.
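The role and role binding just described might look roughly like this (names are hypothetical). Note that there is no rule mentioning secrets, so secret access is implicitly denied:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: ecommerce
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]  # no rule grants access to secrets
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: ecommerce
subjects:
  - kind: ServiceAccount
    name: orders-api-sa
    namespace: ecommerce
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```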
A really good exercise that you can run in addition to this is to actually use the kubectl auth can-i command and see whether or not a certain service account is able to get, list, or watch secrets. You can see over there, right beneath each one of those commands, there's a "no" being returned, because those permissions have not been added to the specific role that the service account is using. Here's another step we can take to protect our secrets from our workloads, and this will depend on the version of Kubernetes that you are running. If you're running a version before Kubernetes 1.24, then I highly advise that you disable the automatic mounting of the service account token. This is a token that gets loaded into a secret, and that token does not expire. The reason this is important is because, if you think back to the temporary file system where secrets get stored, should an attacker get access to that temporary file system, this is probably one of the first things they would go for, because that token is what's going to give them access to your API server. So you essentially want to lock that down by disabling this where it's not necessary. And if you do want to make use of a token approach, then you can consider looking into bound service account tokens, which, if I'm correct, function similar to the OIDC approach where you get temporary session tokens. So this is another angle from which you can protect your secrets. Right. So we've considered where to store our secrets, and now we're going to look at how the secret is shared and how it is consumed. I've already covered this in part, and this is just a map of what our GitOps flow looked like. You'll see right at the top over there ESO, which is just standing for External Secrets Operator.
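Coming back to the service account token point for a moment, disabling the automatic mount is a one-line change at the service account (or pod) level; the name here is hypothetical:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: orders-api-sa
  namespace: ecommerce
# Don't inject a long-lived API token into pods using this service account.
automountServiceAccountToken: false
```

And the RBAC side can be checked with something like `kubectl auth can-i get secrets --as=system:serviceaccount:ecommerce:orders-api-sa`, which should return `no` for a locked-down role.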
So we were using Argo CD to deploy the External Secrets Operator into our cluster's environment. As for the workload, that is what would have a specific secret store CRD and an external secret CRD. Remember, the secret store is what's used to define how the sensitive values are fetched, and the external secret defines what gets fetched. From that, the target secret would then be created and finally consumed by the pod. Next is consuming the actual secret. You'll see over here that we went with the approach of consuming our secrets using volume mounts. You heard me mention earlier, when we were considering the different red flags, that one of the things we were concerned about is how we actually secure our volumes. Similar to what we did with the etcd data store, we used KMS envelope encryption for that as well, to protect our volumes in the case that there was a breach of some kind. So this is the second part of how you would deal with the issue of preventing our secrets from being easily interpreted in the case of some kind of breach or exposure. Right, so the last leg, and probably one of my favorites: how do we manage violations that risk secret exposure? For this, I'm essentially advocating for dynamic admission controllers. First, I just want to go through what the API request path actually looks like, for people who may not be familiar with this. These are the different steps that take place when an API request is made, before something is actually committed to the state in the etcd data store. The reason I've put Argo CD on the far end over there is because that's what was managing our change control process. The first step is authentication and authorization; role-based access control would fit within the authorization module over there. And then next we've got mutating admission.
That's in the case that you actually want to make changes to a particular resource before it gets committed to the state. Then there's schema validation. And last over here we have validating admission, which is the aspect I'm going to be focusing on. This is where we can tap into the validating admission aspect via webhooks, using different tools. You could do this with Kyverno; you could also do this with OPA Gatekeeper, which is the one I'm going to focus on. You can essentially define different policies that are specific to your company or your context, which allow you to manage how secrets get handled in your cluster's environment, or how you want them to be used by the different workloads. Here's another diagram just to depict what that would look like. These requests, before they get committed to the state, would be intercepted and checked by OPA Gatekeeper, which would essentially enforce whether or not certain things can actually be committed to the state, depending on whether or not they align with the policies that you've defined. Another thing that I like about OPA Gatekeeper is that it also has an auditing feature. It's continuously comparing the resources in your cluster's state against your policies, and in the case that it finds something that goes against a policy, it will flag that as a violation, and you'll be able to quickly detect that issue as well. This is useful because maybe you're upgrading OPA Gatekeeper, or maybe you only implement Gatekeeper after you've already deployed workloads that would go against certain policies that you come up with later on. Right, so another thing that is useful, and this is not just specific to OPA Gatekeeper, is that you can do this with other policy enforcement tools. Other cool ones that I'd encourage you to check out would be Datree and Kyverno as well. Those are some of my favorites.
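A minimal sketch of the kind of Gatekeeper policy being described, one that denies roles granting get, list, or watch on secrets, could look like this. The template name and the Rego are my own illustration, not the exact policy from the talk:

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdenysecretaccess
spec:
  crd:
    spec:
      names:
        kind: K8sDenySecretAccess
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenysecretaccess

        violation[{"msg": msg}] {
          input.review.object.kind == "Role"
          rule := input.review.object.rules[_]
          rule.resources[_] == "secrets"
          forbidden := {"get", "list", "watch"}
          verb := rule.verbs[_]
          forbidden[verb]
          msg := "Roles should not have get, list, or watch permissions on secrets"
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenySecretAccess
metadata:
  name: deny-role-secret-access
spec:
  match:
    kinds:
      - apiGroups: ["rbac.authorization.k8s.io"]
        kinds: ["Role"]
```

The constraint's match section is where you could scope the policy to specific namespaces rather than the entire cluster.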
And you don't have to wait till you get to that API request phase for your cluster. You can shift this left: inside your CI stage, you can run tests based on your policies against the different Kubernetes manifests or resources that you're attempting to create. This would be a stage before your resources end up in the branch that your GitOps operator is actually watching. This is just a screenshot of one example from GitHub Actions, and you'll see I'm running a conftest command. You can also use the OPA CLI in the case of Gatekeeper. I have a specific policy that I'm referencing, and in this case I would be using, as it were, raw Rego files, Rego being the policy language, and testing them against the different resources that I want to merge into the branch that the GitOps operator is watching. If any violation is detected, at least I can see it here, and it never even ends up in that particular branch, so we don't have to wait till we get to the API request phase. Great, so what I'm going to do now is show you Argo CD with a couple of translations of what I was covering, so you can see what it looks like in reality. Is that big enough? Okay, cool, I see a couple of nods and thumbs up. Right, so first off, over here, this is just to show you that the External Secrets Operator is being deployed by Argo CD. Lots of resources; I'm not too concerned about you memorizing this, so I'm not going to get into the details. And then over here, this is an example of one application, which I'm calling the e-commerce app, and if I scroll down here, you will notice the e-commerce external secret, and you'll also see the e-commerce secret store.
So these are the particular resources that we're actually keeping inside of our Git repository, and none of them have any sensitive values. You can see over here, with the secret store, that the alternative approach, of course, would have been to actually store the credentials in a secret that we'd be referencing, and like I said, that's not something we'd advise as a best practice. Some of you might be thinking, well, what's going on over here? We see a sync failed and there's an error, and that brings us to the example that I actually want to focus on. Whoops. So this particular e-commerce role, I click on that, and if you scroll down, you'll see over here that it is giving permissions to get secrets. Now, this will obviously depend on your context and what your use case is, but in this case it's just to demonstrate. Oh, this is the same resource, so I'm going to come over here. Right, so here's our error, and I'm just going to zoom in a little bit more so it's clearer for everyone right at the back. Maybe that's a bit much. You'll notice here that this request was denied; you can see that our admission webhook, specifically with Gatekeeper, is at play. The message here is that roles should not have get, list, or watch permissions, and you can see that over there. And of course, you can define this for specific namespaces, so it doesn't have to be something that you do for your entire cluster. And lastly, I'm just going to show you an example of the auditing capabilities of OPA Gatekeeper. So if I scroll to the top, let me actually come out here. Come out here. Do that. We've got two constraints here, and these constraints represent the different policies. The first one over here is for preventing pods from mounting secrets as environment variables, and the second one over here is related to checking roles to see if they have elevated permissions for secret access, which is the one we just looked at.
So I'm going to click on this one here, scroll down, and zoom in slightly. You can see over there that there are two violations detected, and it lists the specific pods: mounting a secret as an environment variable is disallowed, and it points you to the specific pods. So this is useful, like I said, in the case that these things happen to have crept into your environment. Because OPA Gatekeeper is continuously watching your resources, it will be able to pick up on these things and notify you so that you can deal with them appropriately. All right, so that's the entire flow. I think I've got about four or five minutes left, so if there are any questions, I'll happily take those now. The silence may be an indication of clarity, or it may mean that everyone just went to sleep. All right, there are some questions here. And something I didn't mention is looking at it from the perspective of having measured risks, as it were. Because one could still argue that even following the External Secrets Operator approach is like leaving breadcrumbs, right? The sensitive values may not exist in those resources, but if the wrong person looked at your external secret resource, they'd be able to see references, essentially, though not the explicit values. But specifically to your question, I would definitely think it's safe enough to actually have those definitions in, in this case, OPA policies or Rego files, because the beauty of that is you're creating custom resources with OPA, and so you can be as specific as you want to be for your particular environment, whether that's checking annotations, checking labels, etc.
So I definitely think that's a great guardrail, and the beauty of it as well, as I demonstrated, is that you can have it both at the CI stage, by shifting left to actually test before things ever end up in the API request path, and protecting your runtime environment once the constraints are deployed. Yes, absolutely. You mentioned the envelope encryption; do I understand correctly that the application at runtime will use KMS with an IAM role to decrypt the secrets it receives? Yes, exactly, so there would have to be a specific role that has permissions to even fetch the key to begin with, before it can decrypt. All right, thank you. A question about etcd encryption: there is a default setting you can activate to have encryption at rest, so why did you use KMS in addition to that? Right, so that was because with KMS you can also manage the keys, so that was more about adherence to internal policies as well. We went entirely with the approach of: we're going to use KMS, and we want to be able to have customer managed keys. But I'm glad you mentioned that, so people know that it's not like there's no setting for etcd encryption you can just enable; that's definitely another approach. Is the mic working? Okay. You mentioned the Container Storage Interface, right? So at the time, that wasn't actually released yet, but again, I'm glad you mentioned it, because that's another excellent alternative: the Secrets Store CSI Driver, if I remember correctly. And another tool you can look into, if you haven't already heard of it, is Bitnami Sealed Secrets. Again, this is going to depend on your context; in our situation it was not really going to fly with the cloud security team, but there are certainly scenarios where options like that can be super useful. But the one you mentioned is an excellent alternative. Hi, so I wanted to ask: comparing, for example, the External Secrets Operator with
things like helm secrets, integrated into the deployment itself? With an operator, you have to rely on an additional component alongside the deployment; it can break, it can have sync issues. Isn't it better to have everything as part of a single deployment process, with the secrets integrated via helm secrets? With vals you can also reference, for example, a secret stored externally in AWS or GCP. So how would you compare those two options? Yeah, absolutely, and I like what you mentioned; I have a good friend who went with helm secrets instead for exactly that reason, because they were prioritizing the deployment strategy as well. So this talk was specifically based on what we did, and not necessarily exactly what you should do, but hopefully enough to give you good takeaways to consider. helm secrets is an excellent one, and helm secrets, if I'm correct, is actually based on Mozilla SOPS, so Mozilla SOPS is also a good alternative if you're looking at managing external secrets. So I think there's time for one more question, if there are any. There's one right at the back. In your external secret store you can easily end up with lots of secrets; how do you track them, how do you back them up, and how do you bootstrap them in case your store fails? Right, so let me just repeat that: let's say you have 2,000 secrets in your external secret store, just to have a number. If the store fails at any point, how do you have your values backed up? And if somebody makes changes to the store, I'm quite sure you want to track that. Yeah, right, that's a good one. Firstly, would those 2,000 secrets, for example, be for one specific namespace? Yeah? Okay, that's a lot of secrets for a single namespace. I'll make it 200, still too much to handle manually. Yeah, and that's still a lot. So I honestly haven't looked into that in detail; I think it's definitely something worth looking into, and I do not want to lie to you. So I'd say it's
probably worth considering the scaling capabilities in that case, and looking at what kind of synchronization would work for you, because the External Secrets Operator is continuously synchronizing based on what the external source holds. In the case that there's a crash, as it were, the synchronization covers that to some degree, because you know that when your relevant pods are back up, there's going to be a synchronization that takes place. But as for that scale, I'd have to look into it further for that many secrets. Thanks. I think that is it, but if any of you want to chat further, I'm happy to spend some time with you. Thank you so much for coming to the session.
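On the synchronization point from that last answer: the re-sync behavior is driven by the ExternalSecret's `refreshInterval`, which is also why the backing Kubernetes Secret gets recreated from the external store after a crash. A minimal sketch, with hypothetical names and a hypothetical Secrets Manager entry:

```yaml
# Sketch of an ExternalSecret; all names and the remote key
# are hypothetical, not taken from the talk's demo.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: orders-db-credentials     # hypothetical
  namespace: ecommerce            # hypothetical
spec:
  refreshInterval: 1h             # how often ESO re-syncs from the store
  secretStoreRef:
    name: aws-secrets-manager     # hypothetical SecretStore
    kind: SecretStore
  target:
    name: orders-db-credentials   # the Kubernetes Secret ESO creates
  data:
    - secretKey: password
      remoteRef:
        key: prod/orders/db       # hypothetical Secrets Manager entry
        property: password
```

Only the references live in Git; the sensitive values stay in the external store and are materialized into the cluster at sync time.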