Okay, so I realize that you've just had lunch and probably want to go to sleep now. I'll try to keep this as entertaining as possible, within the limits imposed by the fact that I'm talking about TPMs. My name's Matthew Garrett. I'm a security developer at CoreOS. We're currently not making any use of TPMs; this presentation was written while I was at Nebula. Since Nebula no longer exists, I'm not using their branding or logo or email address. It didn't seem particularly useful. So I'm going to start with a quick overview of what a TPM is, some of the ways we're already using TPMs in the cloud, and then cover some additional ways that TPMs can be made use of. This is almost entirely work that we either prototyped or had deployed in the Nebula product, which, again, you're no longer able to buy. But the majority of the code that was written is available publicly. There's some integration code that had to be written, but a lot of this is stuff you'll be able to integrate into your own solutions without too much difficulty if you'd like to. So, TPM stands for Trusted Platform Module. It's a small device that follows a specification written by the Trusted Computing Group, TCG, an industry body with contributions from multiple different hardware and software vendors. It's intended as a secure cryptographic device. That is, it's capable of doing things like generating cryptographic keys itself, and it can do this in such a way that the raw key never leaves the TPM. Now, there are two real ways of implementing this. The way it tends to be done on more general-purpose hardware security modules is that the key is stored within the hardware module itself. TPMs typically don't have enough storage space to contain a large number of keys, so instead the TPM will encrypt the key and you'll save an encrypted copy of the key onto the system hard drive. 
But the unencrypted, useful key is only ever present within the TPM. It's intended to be reasonably secure against physical attacks: if you attempt to dismantle a TPM in order to obtain key data, you'll probably destroy it in the process. But it's not intended to be fully tamper-resistant. Once you've got a key in the TPM, you can also ask the TPM to perform a cryptographic signature over some data, either data you've provided or some other types of data that I'm going to describe shortly. And something that's often overlooked: in order to be an effective cryptographic device, in order to be able to generate keys itself, a TPM has to be able to generate random numbers. And the random number generator that's part of the TPM is actually exposed as part of the specification, so you can ask a TPM to provide you with random numbers. When TPMs first hit the market, a lot of people were concerned that TPMs would be used to restrict people's ability to use their computers, that they would be used as a mechanism for locking systems down, ensuring that you had to be running specific software in order to, for instance, play a DVD or visit certain websites. This hasn't come to pass, primarily because TPMs are just expensive enough that low-end machines don't have them. They're also occasionally awkward to export to certain markets. So as a result, the industry hasn't attempted to use TPMs to lock anybody down. The mechanisms you would need to distribute keys in an effective way to lock people down also don't really exist, except in ways that are grossly privacy-violating. So the industry, again, hasn't gone in that direction. TPMs are basically harmless. Many people make the mistake of thinking that TPMs are cryptographic accelerators, that you can ask the TPM to do your cryptographic work instead of doing it on the CPU, and that this will be better because then you're not using CPU cycles. 
While that's technically true, in that you can ask the TPM to do these cryptographic operations instead, it's not particularly usefully true. Many TPMs are based on 8051-style microcontrollers, or things only slightly above that. They're often clocked in the small number of megahertz range. I've seen TPMs that can take 20 seconds to perform a single signing operation. So they're not things you want to offload your crypto to. And they're not secure against an attacker who has sufficient resources available to them and who has physical access. If somebody is willing to spend enough money to extract the key from a TPM, they will succeed in doing so. So they're not the same level of ultimate protection you'd get from some hardware security modules. If you're a bank, you probably shouldn't be using a TPM as your root of trust. So, on a system which has a TPM, you've got a stack that looks something like this. You have the TPM itself down at the bottom. Above that, you have a kernel driver, and the kernel driver, for the most part, doesn't know anything about the crypto operations the TPM can perform. Its only job is to take commands that are sent in the standardized format and put them onto the wire in the appropriate way. This depends on the kind of bus the TPM is attached to. Traditionally, PC-type systems have TPMs attached to the LPC bus; other systems may have them on an SPI bus or an I2C bus. There are various ways you can attach a TPM, and the kernel driver abstracts that. Above that, you have the TSS, the TCG Software Stack. This is a set of multiple internal components, but basically you don't need to care about them. The standard implementation of the TSS for Linux is called TrouSerS, which is an only slightly unfortunate name. And at the top of the TSS you have TSPI, the TSS Service Provider Interface, and applications communicate with the TSS via TSPI. 
And then commands go down to the TPM, and the TPM sends stuff back up. The hardware may choose to hide the TPM if it's not enabled in the firmware; systems often ship with the TPM disabled by default. If you want to deploy to a large number of systems with the TPM enabled, you're going to need to figure out how to turn those on. The kernel driver will typically be present and will typically auto-load. TrouSerS, if it's installed, will automatically start if there's a TPM there. And then it's just a matter of writing your applications. One thing TPMs have, one part of the base spec, is a set of things called platform configuration registers, or PCRs. There's a number of these; the exact number depends on which version of the specification the TPM implements, but it's at least 16. The idea of these PCRs is to provide a record of what was booted on the system. The first stage of the boot process hashes the next stage of the boot process and then writes that hash into one of the PCRs. The algorithm used is to take the existing contents of the PCR, take the new hash you've given it, append one to the other, and then take a hash of that. So here's an example. The power-on default is for the PCR to be full of zeros. Then you measure a component, in this case getting this value, you append them together, you take the SHA-1 of that, and that's the value that ends up in the PCR. The reason for doing it this way is that once the PCR is at a given value, it's effectively impossible to program it to a specific different value, unless you've managed to break SHA-1. So for the most part, once a system has said "I booted this code, I measured this code," there's no way to go back and claim it booted different code. 
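The extend operation I just described can be sketched in a few lines of Python. This is purely a software model of the hash chain, not code that talks to a real TPM, and the measured component names are made up for illustration:

```python
import hashlib

PCR_SIZE = 20  # TPM 1.2 PCRs hold a SHA-1 digest (20 bytes)

def extend(pcr: bytes, component: bytes) -> bytes:
    # new_pcr = SHA1(old_pcr || SHA1(component))
    return hashlib.sha1(pcr + hashlib.sha1(component).digest()).digest()

pcr = b"\x00" * PCR_SIZE           # power-on default: all zeros
pcr = extend(pcr, b"bootloader")   # first stage measures the next...
pcr = extend(pcr, b"kernel")       # ...and so on down the boot chain
```

Because each value depends on the entire history, you can't reach a chosen final value without replaying exactly the same sequence of measurements.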
So as long as the code that performs the measurements, the code that did these hashes, is correct and not itself vulnerable, then you have a record of the hashes of everything that was involved in the boot process, from the firmware to option ROMs, to the bootloader, to the kernel, optionally to the initrd, that kind of thing. And we can use this in a couple of ways. The first is one that's actually already implemented in OpenStack as part of Intel's trusted compute pools work. This is remote attestation, and it allows one computer to prove to another computer that it booted in a specific state. The way you manage this is to have a key shared between the two machines, in this case what we call an attestation identity key. The AIK is a key that was generated on the TPM of one machine, and you hand the public half off to a server, which is then going to perform the attestation. The server, when it wants to request an attestation, sends a nonce over to the client. The client then takes the nonce, takes the current state of the PCRs, and signs them with the attestation identity key. And because that identity key is tied to the TPM and cannot be extracted from the TPM, when you get back this signed object, you know that the PCR values it's giving you came from the TPM itself. There's no straightforward way to fake this. And that means you can now say: okay, this computer proved to the server that it booted the code I expected it to boot. Wonderful, now I know this hasn't been compromised at the boot layer; I can trust it to run my VMs. The standard implementation used for this is called OpenAttestation. It's some code that Intel obtained from the NSA and have since performed some modifications to. It's really, really awful. It's like two megabytes of Java. It doesn't implement TPM 1.2 particularly well. It doesn't actually verify that the AIK came from an actual TPM. 
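The attestation flow above can be modelled in a handful of lines. Note the big simplification: a real TPM signs the quote with the AIK's RSA private half inside the chip; here an HMAC merely stands in for that signature so the challenge-response shape is visible. All names are illustrative:

```python
import hashlib
import hmac
import os

# Stand-in for the AIK private half, which in reality never leaves the TPM.
AIK_SECRET = os.urandom(32)

def quote(pcr_state: bytes, nonce: bytes) -> bytes:
    # Client side: "sign" (PCR state || server nonce) with the AIK.
    return hmac.new(AIK_SECRET, pcr_state + nonce, hashlib.sha1).digest()

def verify(pcr_state: bytes, nonce: bytes, sig: bytes) -> bool:
    # Server side: check the signature over the reported PCRs and our nonce.
    return hmac.compare_digest(sig, quote(pcr_state, nonce))

nonce = os.urandom(20)                          # server sends a fresh nonce
pcrs = hashlib.sha1(b"known-good boot").digest()  # client's current PCR state
sig = quote(pcrs, nonce)                        # client responds with the quote
assert verify(pcrs, nonce, sig)
```

The nonce is what prevents replay: a quote recorded from an earlier, honest boot can't be reused to answer a later challenge.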
You can replace it with a couple of hundred lines of Python. I really recommend doing that instead. The URL here is the Python code I wrote for gluing Python into the TSS stack. Now, I spent a while trying to find an existing implementation of this, because I was sure there must be one, and so I ended up spending some time googling, and it turns out that you should not Google for "Python trousers". Still, this implements most of an attestation implementation; you just need to figure out how to do the RPC side. If anybody wants to run with that, I would wholeheartedly encourage them to do so. Moving on from that, TPMs can also generate keys. The top-level key of any TPM is called the storage root key, or SRK. Unfortunately, I managed to overload this term in Barbican, but anyway. The SRK is generated when a TPM is initialized for the first time. If a TPM is fully reset, it will generate a new SRK. Any keys you ask the TPM to generate will be encrypted with the SRK, unless you ask it to encrypt them with a different key. So you can insert multiple hierarchical levels into a TPM. You can give a TPM a key, have it encrypt that key with the SRK, and then have another key that's encrypted with that key. So in the middle layer here, you could generate a key and then distribute it to multiple systems, and they would each bind that key to their own TPM: the copy of that key on each system would be encrypted with that TPM's SRK. But the keys below that in the hierarchy can all be encrypted with that shared key, which means you can take those key blobs and distribute them amongst several machines and they'll just work, because they're encrypted with a key the TPM already has. So this means it's possible to construct a TPM key hierarchy in such a way that you can share the majority of your keys between machines without having to worry about the specific TPM they're rooted to. 
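That hierarchy is easier to see in code. The sketch below is a toy model: key wrapping is done with a throwaway SHA-256-based XOR keystream purely so the example is self-contained, whereas a real TPM wraps keys with RSA or AES inside the chip. Everything here, names included, is illustrative:

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream for illustration only; not a real cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def wrap(parent: bytes, child: bytes) -> bytes:
    # "Encrypt" a child key under a parent key.
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(child, keystream(parent, nonce, len(child))))
    return nonce + ct

def unwrap(parent: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(parent, nonce, len(ct))))

srk = os.urandom(32)      # per-TPM storage root key: never leaves the TPM
shared = os.urandom(32)   # middle-layer key, distributed to many machines
leaf = os.urandom(32)     # a working key further down the hierarchy

bound_shared = wrap(srk, shared)  # this blob differs on every machine
leaf_blob = wrap(shared, leaf)    # this blob is identical everywhere
```

The point of the layering: `leaf_blob` can be copied to every machine that holds the shared key, while only a provisioned TPM can recover `shared` in the first place.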
But you know that there's no way for anybody to obtain these keys without having access to one of the TPMs. If someone managed to get hold of one of these key blobs, they still wouldn't be able to use it; it would still be tied to the TPMs you've provisioned. And that's not actually limited to keys. You can in fact ask the TPM to encrypt arbitrary data with the keys within it. So if you choose to encrypt with the SRK, that data is now encrypted with a key that's only present in that specific TPM. There's no way to decrypt the data without access to the specific TPM you encrypted it with. This is referred to as binding. You can go further than that, which is referred to as sealing: when you ask the TPM to encrypt the data, you also tell it, only decrypt this if these PCRs are set to these values. That means you can configure a system in such a way that a secret can only be obtained if the system booted the firmware you expected it to boot, the bootloader you expected it to boot, and the kernel and RAM disk you expected it to boot. If anybody tampers with any part of the boot process, this data will not decrypt; the TPM will just refuse. And this is useful for one notable case, which is the encryption of system drives. Normally, if you're doing disk encryption, you need some way of providing the key to the boot process so that it can actually decrypt the drive. And if that key is in the initrd or something, that doesn't win you much, because anybody can just pull the disk out, walk away, and they've got all your data. However, you probably don't want all your servers to have encrypted filesystems that require someone to walk up to them and type in a passphrase in order to boot. That doesn't scale very well if you've got several thousand machines. You would need several full-time staff to do that. That's not ideal. 
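The sealing semantics can be modelled very simply. In a real TPM the secret is encrypted under the SRK and the PCR comparison is enforced by the hardware; this sketch just records the expected PCR values alongside the secret so the refusal behaviour is visible. All values are made up:

```python
import hashlib
import os

def seal(secret: bytes, expected_pcrs: dict) -> dict:
    # Software model: a real TPM encrypts under the SRK and enforces
    # the PCR check internally; here we just record both together.
    return {"expected": dict(expected_pcrs), "secret": secret}

def unseal(blob: dict, current_pcrs: dict) -> bytes:
    if current_pcrs != blob["expected"]:
        raise PermissionError("PCR mismatch: boot state changed, refusing")
    return blob["secret"]

good = {
    4: hashlib.sha1(b"bootloader").digest(),
    8: hashlib.sha1(b"kernel").digest(),
}
disk_key = os.urandom(32)
blob = seal(disk_key, good)
assert unseal(blob, good) == disk_key  # correct boot: key is released
```

If an attacker swaps in a different kernel, PCR 8 holds a different value at unseal time and the key is simply never released.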
So there's a kind of compromise, which is to use a secret that is bound to the TPM and sealed against a specific set of PCR values. This means the system will only be able to decrypt the root filesystem if the correct boot process occurred. If someone has tampered with the firmware, the bootloader, or the kernel, the key will not be decrypted; the TPM will refuse to, and the system won't boot. And this is, in a sense, a straightforward, lower-overhead remote attestation implementation. Remote attestation means you can tell whether a system booted correctly because you ask it and it says, yes, I booted correctly. This is more straightforward: if the system was tampered with, it doesn't boot. And there's no way you're going to run VMs on a system that didn't boot, so that's very secure. A system that's not running software is the most secure system. You do need to be reasonably careful with this, in that, obviously, if the secret is tied to the kernel state, then if you update the kernel or the bootloader, those values are going to change. If you change your firmware configuration, those values will change, and suddenly your disk won't decrypt anymore. So as part of your upgrade process, you also need to reseal the secret against the new values. That's part of your deployment story. Now, as I mentioned, if you have a key that's attached to a TPM, then merely obtaining a copy of that key doesn't get you anything. So this is useful for key management in general. The whole point of wanting to encrypt things is to make it more difficult for people to decrypt them unless they're the intended recipient, and having material bound to the TPM means you can tie secrets to specific computers. 
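Resealing on upgrade, as described above, amounts to replaying the measurement chain offline: you hash the new kernel and initrd yourself, predict what the PCR will hold after the next reboot, and reseal against that predicted value before rebooting. A sketch, with invented file contents standing in for the real boot components:

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # Same extend rule the boot chain uses: SHA1(old || SHA1(component))
    return hashlib.sha1(pcr + hashlib.sha1(component).digest()).digest()

def predict_pcr(components: list) -> bytes:
    # Replay the measurement sequence from the power-on default.
    pcr = b"\x00" * 20
    for blob in components:
        pcr = extend(pcr, blob)
    return pcr

# Current boot chain vs. the chain after a kernel update (illustrative blobs):
old_pcr = predict_pcr([b"grub", b"vmlinuz-4.4", b"initrd-4.4"])
new_pcr = predict_pcr([b"grub", b"vmlinuz-4.5", b"initrd-4.5"])
# Upgrade flow: unseal the disk key while PCRs still match old_pcr,
# then reseal it against new_pcr before rebooting into the new kernel.
```

Get this step wrong, or skip it, and the machine comes back up unable to unseal its own disk key, which is exactly the failure mode the talk warns about.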
And so if you have data that is going to be stored on a single computer, and you only want that computer to be able to decrypt it, then making use of the fact that you can encrypt the keys with the TPM is a win. The fewer places a key exists, the better. Even if it's difficult for someone to compromise a system, having the key on a large number of systems is still going to make it easier for someone to compromise one of them and obtain your secrets. Obviously, you still have to worry about someone attacking a system, and you still need a mechanism for rolling your keys if you think you've lost control of them, but having your secret management tied to the TPM means you can reduce the probability that you will lose control of your keys quite significantly. Now, earlier I mentioned that TPMs contain random number generators. So, how many of you either produce software that is used to deploy clouds, or run cloud deployments? A decent number of you. Okay, great. How many of you provide a good source of random numbers to the guests? Okay, that was like five of you, compared to 25 before. That's a little bit concerning. Random numbers are really important, and VMs are really bad at creating random numbers, because when you boot a VM, that's now a piece of software running on software rather than running on hardware, and we get most of the entropy in systems from hardware. Once everything is nicely quantized to the host's scheduler tick, you get much, much less entropy. Timing between interrupts is now largely determined by the scheduler of the host hypervisor rather than by the hardware. This is a problem for things like SSH key generation, which is the basis of a significant part of network security these days. 
Guests that can't generate good amounts of entropy will either spend a long time sitting there trying to generate an SSH host key, or they'll end up with predictable SSH host keys. And then anybody who can snoop the network traffic can also decrypt your SSH sessions. You probably don't want to be in that situation; it makes your customers unhappy. Intel decided to do something about this and added an instruction to recent CPUs called RDRAND, which gives you the output of a hardware pseudorandom number generator that's reseeded with actual randomness every so often. This seems like an absolutely perfect solution, because RDRAND is an unprivileged CPU instruction: you don't need any plumbing from the hypervisor through to the guest in order for it to be available inside the guest. The guest can just call RDRAND. Unfortunately, a lot of people are a bit concerned about having a large US corporation, with a vested interest in not antagonizing various three-letter US governmental agencies, in charge of their random number generation. That's a shame. I understand those concerns; I think they're probably overblown. I think it's unlikely that the NSA would choose to effectively make public the fact that they had forced Intel to backdoor their CPUs just in order to get access to VMs' SSH keys, but whatever. But TPMs have random number generators. And one of the great things about random numbers is that if you mix two pools of random numbers together, the result should be at least as random as the most random of those pools. So if you take the output from a TPM and mix it with the output of RDRAND, then even if someone can make the output of RDRAND deterministic, mixing it with the TPM's random numbers means the final output is no longer deterministic. So mixing these two streams together means you have random numbers that Intel alone are not able to compromise. 
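The mixing argument is easy to demonstrate: XOR the two streams, and the result is at least as unpredictable as the stronger input. Below, a fully backdoored RDRAND is modelled as all zeros and the TPM RNG as `os.urandom`; in practice the mixing is done for you by rngd and the kernel's entropy pool, not by code like this:

```python
import os

def mix(stream_a: bytes, stream_b: bytes) -> bytes:
    # XOR-mix two entropy sources of equal length. If either input is
    # truly random, the output is too, regardless of the other input.
    assert len(stream_a) == len(stream_b)
    return bytes(a ^ b for a, b in zip(stream_a, stream_b))

rdrand_out = bytes(32)     # worst case: RDRAND fully predictable (all zeros)
tpm_out = os.urandom(32)   # TPM RNG, modelled here with os.urandom
mixed = mix(rdrand_out, tpm_out)
```

Even in this worst case the mixed output equals the TPM stream, so the attacker who controls only one source learns nothing; to predict the result they'd have to compromise both.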
In terms of actual implementation, there are two ways you can do this. rng-tools is a small userspace package; it's able to speak directly to the TSS layer and can just pull random numbers out that way. Alternatively, you can load the TPM RNG driver, which results in an additional /dev/hwrng device in your kernel, and then you can pull random numbers out of there. rng-tools is also able to call the RDRAND instruction, so you can take the output of /dev/hwrng (the TPM), take the output of RDRAND, mix them together, and feed them into /dev/random, and then when something reads from /dev/random, the kernel will have a good pool of genuine random numbers. And libvirt supports configuring the system so that this is then passed through to the guests. There's a virtio RNG driver: when you boot the guest, that driver gets loaded, there's a /dev/hwrng in the guest that you can mix in with /dev/random, and your system actually has a good supply of random numbers. When we implemented this, we went from taking about an hour to boot. If we tried to boot 100 guests at once, that would take about an hour, because they were all blocked waiting for enough entropy to generate their SSH host keys. When we added this, the same number of guests came up in about five seconds. Seemed like a win. In the OpenStack world, we haven't so far really cared too much about key management as such, but we are beginning to care about secret management, and the Barbican project is the component we're using to do this. Barbican has some ACLs; you go to Barbican and say, I want the secret corresponding to this value, and that secret could be basically anything. The secret may be a disk encryption key, or it could be a passphrase or something. Barbican is responsible for storing that. The basic Barbican implementation just has a root key in a configuration file. 
It reads that at boot time and then uses it to encrypt all the other keys. That's obviously not particularly secure, because now you have the key on the same device as all your encrypted secrets, and given that information, people can probably figure out how to get the secrets back. That kind of defeats the purpose. Obviously, that's not how Barbican is meant to be deployed, but you need to figure out exactly how you are going to deploy Barbican, and doing that means thinking about what you're actually trying to protect against. As I said, one of the key use cases is using Barbican to store disk encryption secrets, and there's integration with Cinder, which means Cinder can call out to Barbican and obtain a key that's then used to decrypt the block store. This already works, if you're willing to spend some time playing with configuration. But the point of encrypted storage is to make sure that if someone obtains the disk, they don't have your data. And if obtaining your disk means they also have the secrets, that defeats the point, because if they have the disk and the secret, they can get the data. So you have to have a mechanism by which it's possible to store keys in such a way that stealing a disk doesn't give you the decryption key for your secrets. And the reason I mention RAID logs here is that this is the easiest attack: you have your cloud controller, and it's probably got a RAID set up, and those disks are probably in hot-swap bays, because that's the easiest way to do this. An attacker could walk past your cloud controller, which obviously means there's already been some sort of disastrous failure of security, pull out one of those disk caddies, and put in another disk. 
Your operating system will notice that a disk has gone, will notice that there's a new disk there, and will then re-sync your RAID array onto the new disk, and everything will just carry on working, except the attacker now has a copy of a bunch of your data. And if that included your root secrets, they have access to all the secrets, they have your decryption keys, they can get your encrypted data. So you should probably be paying attention to your RAID logs, just in case they unexpectedly say that a disk was removed and the array was rebuilt, because otherwise you're never going to notice that someone's done this. TPMs, unsurprisingly, help us here, because you can root your key in the TPM. If someone steals a disk, all they've got is secrets that can only be decrypted by the TPM, and TPMs aren't hot-swappable; they can't just take the TPM while they're at it. If someone takes your entire cloud controller, you probably will notice. At the point where your cloud stops working, I would hope that you'd notice. Now, there are two straightforward, well, I say straightforward, there are two ways to implement this. The first is to use the existing Barbican code, except that where it would read a key from a configuration file, you tell it to make some other call and pull a secret out of the TPM that way. As long as you have the infrastructure to provide TPM-backed secrets, this is about a three-line change to Barbican. Obviously, you need the infrastructure to do the key provisioning and so on. The alternative is to use a TPM-backed PKCS#11 implementation. PKCS#11 is a specification that defines a way for software to talk to hardware cryptography devices. It has an interface that allows you to ask for a key to be generated, a way for you to ask for a secret to be encrypted with a key, and it abstracts away the specific hardware implementation. There's a PKCS#11 implementation that's part of TrouSerS. 
It has little issues, like not checking the return values of any functions it calls. You probably shouldn't use it, because that's a really bad sign in security software. Google make use of TPMs as part of the security story for Chrome OS: all Chromebooks have a TPM, and there are various user secrets rooted in the TPM. They wrote their own TPM-backed secret broker, and it includes a PKCS#11 interface. Unfortunately, it's called CHAPS, as a kind of dreadful play on TrouSerS. It's also the best freely available PKCS#11 implementation for TPMs, so if you're looking at using Barbican with a PKCS#11 interface and you want to use TPMs, you should use CHAPS. Now, there are some trade-offs. A real hardware security module is basically better than a TPM in every single way. TPMs are not fast, which means you want to perform most of the operations on the CPU, which means the TPM has to hand over a decrypted copy of the keys, which means your decrypted keys are in RAM. With a real HSM, the decrypted keys are only ever present inside the HSM. Various attacks become easier. The most obvious one is that if you have swap enabled on your server and you have the secret in RAM, there's potential for the secret to be pushed out to swap at some point, and it may then stay in swap. If you're doing this kind of thing, you probably want to use encrypted swap with an ephemeral key that's generated at boot; on the next boot you just generate a new key, and that way there's no way for anybody to scrape swap. But it still means that it's easier to pull information out of RAM than it is to pull information out of an HSM, or even a TPM. And as I mentioned earlier, a real HSM should self-destruct if it detects any evidence of physical tampering. TPMs mostly won't; they're far too cheap for that. However, there's the old truism that the best camera is whichever camera you have on you. 
If you don't have an HSM, if you're not willing to spend the amount of money it costs to deploy HSMs, then the probability is that you have a TPM on your server motherboard, even if it's currently disabled. Using a TPM will improve your security. Using an HSM may improve it more, but using a TPM is better than not using a TPM. So if you already have a TPM and you're not going to buy an HSM, look into making use of the TPM. There are a couple of URLs. As I mentioned before, the first one is the Python TSS integration code I wrote. It's pretty much entirely an FFI wrapper around the TSS library, with some reasonably nice object-oriented Python on top, so keys are just Python objects and so on. It works surprisingly well. On top of it, you can implement most things you'd want to do with a TPM in under 200 lines of code. So that's nice. CHAPS, the direct upstream, does not build outside Chrome OS, but someone at Google is taking responsibility for making sure that CHAPS is buildable on standard Linux systems. That's on GitHub at google/chaps-linux, rather than google/chaps, and it's code that will build on a reasonably modern Linux system, and then you have your PKCS#11 implementation. Things will work nicely. So at this point, we have a few minutes left for questions. There's a microphone there, if anybody would like to make use of it. Yeah, how do you expose the virtio RNG in Nova? Is there some configuration to do it? How do you expose the virtio RNG in Nova? I honestly can't remember. I believe that code now exists. I hacked something nasty into our libvirt static configuration to make it work; I probably shouldn't have done that. But it's possible to configure Nova to do it, I believe. If it's not, it's a very small amount of code, and I should see if I still have the patch and send it upstream. 
All right, and my other question is, what's the procedure to update a kernel with the TPM thing? Okay, so if you're using measured boot and you want to update the kernel, you have two choices. Now, if the secret is local, so for instance if you're using disk encryption, then when you perform the upgrade, you need to obtain the secret while the PCRs are at their current values. Then you need to calculate what the new PCR values are going to look like. You have the new kernel, you have your new initrd, so you can hash those yourself as part of the update process, figure out what the new PCR values are going to be, and then reseal the secret against the new values. So it would be possible to do that in, let's say, a post-install script? Yeah, you can do that in a post-install script. Okay, and my last question would be, it looks like it's so much better than Secure Boot, right? So, Secure Boot and measured boot are kind of orthogonal to each other. Measured boot is a way for you to prove what material you booted, whereas Secure Boot is a way to say: don't boot unless the material is trusted. You can actually make use of both of them simultaneously. Measured boot never stops you from booting, unless you're using the disk encryption hack; otherwise the system will still boot, and then you'll try to attest, and the attestation decision will be negative, and so bad things will happen, but the system still booted, potentially. Whereas with Secure Boot you won't boot, but Secure Boot gives you no way of figuring out what you did boot. All you can say is, I booted something that was signed appropriately, but I don't know what. So you can make use of both of them simultaneously. I was wondering why people would have confidence in the TPM random number generator if they didn't have confidence in the closed Intel hardware. Why should you have confidence in the TPM but not in RDRAND? 
That's a good question, and the fundamental answer is that there's no reason why you should have confidence in one and not the other. But most TPMs are manufactured outside the US by companies that are themselves based outside the US. There are a lot of TPMs from Germany, there are TPMs from Taiwan, and there are, I think, a couple of Japanese vendors. If you're mixing the TPM RNG in with the Intel RNG, then in order to compromise you, an attacker would need to compromise both simultaneously. If it's the NSA you're worried about, then the NSA would also have to have backdoored all the different TPM vendors. There are several different TPM vendors; TPMs all expose the same interface, but they're all designed and manufactured by different companies. If you want, you can have a heterogeneous mix of TPMs in your network, produced by different vendors. The NSA probably aren't going to simultaneously burn an Intel backdoor and an Infineon backdoor on one person. It's basically hedging your bets. You're making it more difficult. It's clearly not impossible, but it's now much more politically awkward for someone to compromise you, and you would need to have really, really annoyed somebody. I have no idea what kind of things the NSA might consider important, but there are limits here. Even if both of them are backdoored, it's less likely that someone is going to simultaneously use two different backdoors. Yeah, you mentioned that OpenAttestation is pretty bad. So can you describe a little bit more about its disadvantages? Is it the feature set, or the language issue, or backdoors? Okay, so I don't think that there are any, I've seen no indication whatsoever that there are any deliberate backdoors in OpenAttestation or anything like that. 
My personal feeling, having looked at it, is that it's a large body of code that has not been subject to a great deal of external review. And if we can reduce the amount of code that's used in security-critical components, then that's better. If we can do this on top of existing code that has had a greater amount of external review, then I think that's a win. There are little things: to begin with, OpenAttestation at one point shipped with a web UI written in PHP, which is not really the first language I think of when I think of security management. And that's a kind of completely unfair attack by me. But it's a lot of code, and it's written in Java. The number of people working on OpenStack who can also review Java is much lower, so keeping consistency in language choice here is a win; it makes it much easier to integrate. There's other stuff, like the fact that it doesn't verify that the endorsement certificates are actually rooted in a real TPM vendor. So anything could be talking to you, and OpenAttestation would think it was a TPM, even though it can't prove that it's a TPM. It also mostly uses TPM 1.1 features rather than any of the 1.2 features. I haven't gone into a great deal of detail to find specific further issues, but I feel really uncomfortable with the code, for the most part. Okay, so I think we're out of time, but I'll be around until Wednesday morning. So if anybody does want to talk to me about any more of this, feel free to grab me. Thank you.