All right, welcome everyone. My name is Michael Peters. I'm an engineer at Red Hat, working primarily in the security space and emerging technologies, and I'm a member of the Keylime project. If you're not familiar with Keylime, we'll touch on it briefly in a bit. Today's talk, zero trust workload identity in Kubernetes, is really broad, and pretty much every slide in here could be a whole talk in and of itself. The ideas and concepts behind zero trust, what identity is, and how that integrates with everything in the cloud native space are tricky and myriad, so we're going to do a general overview of all of it. If you want more information, I'll try to answer questions. I'm going to make some assumptions as we go; if those assumptions prove incorrect, just let me know: wave your hand, say something, have me back up, whatever works.

So, we're talking about zero trust, which is kind of a misnomer. Zero trust really means zero implicit trust; it's just that ZT is a better acronym than ZIT. You can't actually have zero trust. You have to trust something. What we're saying is that we don't trust things just because of where they are on the network. It's an architectural pattern where we apply security at the asset level, not the location level.

In the past, a lot of things were set up in a castle-and-moat scenario, where the castle is your data center and you're trying to protect it with a moat and walls and guards around it: your firewalls, network segmentation, ACLs, and VPNs. Everything was focused on perimeter security: if we lock everything out, then we can trust everything inside. That turns out not to be the case, for lots of reasons. Even when it was implemented well, it was a burden, very strict and rigid in how things could be set up, and that led to a lot of the conflict that exists between developers and operations, and between developers and security people. And when it wasn't well done, when it was lax, it let intruders in and there were lots of holes in the castle.

As things started to grow and the modern world changed, we got microservices, bring-your-own-device, API gateways, multi-cloud setups, and serverless functions running all over the place. Your definition of what could live inside those walls changed. You couldn't always get a VPN connection between one thing and another, and there was a constant battle every time you wanted to bring a new service into your system: you'd have to contact security to set up tunnels or VPNs or whatever. It became a mess, and you essentially ended up in a world where the walls of your castle need to encompass everything, which is just not possible. We have a larger number of smaller pieces of software and larger attack surfaces, and the old security paradigm of mapping and restricting everything by port and IP address just doesn't work anymore.

Another important part of zero trust is identity; it's central to zero trust, and that identity is no longer implicit but has to be very explicit. Identity itself is a little complicated. In the real world, when you're talking about your personal identity or how you prove it, you usually have to rely on some third party.
My government-issued ID? Well, do I trust your government? It depends. If it's the state of North Carolina trusting a Seattle driver's license, okay; but if I'm traveling to Paris, they're not going to care that I have a North Carolina driver's license. So how we trust those third parties becomes part of the identity question.

In the old castle-and-moat scenario, we had a lot of cases where identity didn't even exist. A lot of services could be non-credentialed: we're both inside the same VPN, so one service can talk to another and we're good. As we move to zero trust, zero implicit trust, we can't have that anymore. And even in the scenarios where identity did exist, it was usually tied to some sort of shared credential, some secret. I prove my identity by presenting this password; you recognize that password, and that proves who I am. That's weak to insider threats, to credential compromise, to credential leaks: if I can get that password, I can now impersonate you all over the place. Secret rotation is hard to do; if that password gets leaked, I have to change it in all the right places, and if I don't do it correctly, I cause outages. Then there's the question of how we get that secret into the workload to begin with. Are we embedding it in code, which is obviously a bad idea? Are we passing it around through the environment, where it could leak in other ways? And how do we apply identities to ephemeral things: serverless functions, CI/CD build pipelines, or even just a normal system that expands and contracts under elastic load?

So solving this identity crisis is crucial. Identities have to be explicit. ACLs are based on identities, not just credentials or locations. And everything has an identity in a zero trust system: people, machines, workloads, everything.

This is where SPIFFE comes in. How many people here are familiar with SPIFFE? How many people use SPIFFE or SPIRE? A little less, okay. SPIFFE started as a project in 2016 by Joe Beda, trying to get organizations to come together, take all this knowledge we have about identity and security, and bring it together into a single project. SPIFFE stands for the Secure Production Identity Framework For Everyone. They wanted to leverage a lot of existing technology, primarily X.509 certificates and JWTs, the JSON Web Tokens. X.509 is preferred; it's considered more secure, and the certificates can be rotated and expired. But for both of these there's a lot of tooling available, and a lot of systems will already accept them as identity. We also want to divorce the concept of identity from the credential and from the network location.

SPIFFE also tries to solve what we call the bottom turtle problem. Is anyone familiar with the old story about turtles all the way down? There's this apocryphal story of a man giving a lecture about the world floating through the universe, and an old lady says, no, the world rests on the back of a turtle. And he says, well, what does the turtle rest on? And she says, aha, it's turtles all the way down. Once you have this concept of "I need this secret," well, how do I protect that secret? I can use, say, PKI, a public/private key pair, and encrypt it with a key.
Well, then how do I protect that key? Or I'll protect it with such and such. So you get into this cycle of protecting this credential with another credential, with another credential. The bottom turtle is what we call our root of trust. There's always a root of trust in the system, even if it's not explicit. If you don't know what your root of trust is, you're probably in a bad state, because you're putting your weight on something without knowing how strong it is. For a good zero trust system, we need a solid root of trust. With SPIFFE as our root of trust, instead of some ultimate password or last password that we try to protect, we put the trust in something solid: identity based not on a shared secret but on the actual identity of the workload and the node it's running on. We'll talk in a second about how SPIFFE guarantees the identity of that ultimate piece.

SPIFFE consists of a couple of things. First off, SPIFFE is a spec, so there are lots of things that can implement SPIFFE, and in fact, in the cloud native ecosystem, a lot of things implement different parts of it, because they're either consumers or producers of different parts of SPIFFE. It consists of several parts: the SPIFFE ID, which is a text representation of the identity; the SVID, the SPIFFE Verifiable Identity Document, which is a cryptographically verifiable document that contains this ID, usually an X.509 certificate or a JWT; the Workload API, a node-local API that workloads talk to to get their identity, to get these SVIDs; the trust bundle, which is the set of public keys for the SPIFFE issuing authority and which defines what we call the trust domain; and federation, which allows multiple SPIFFE setups to explicitly share trust by exchanging trust bundles.

What SPIFFE is not: it's not designed for non-software. This is all about software workload identity, so it's not for humans, animals, artwork, NFTs, anything like that. It's also not an authorization framework. Identity is necessary for authorization: you need something that says what my definitive identity is and how I prove it, but that doesn't say whether I'm allowed to run this workload. You have to implement the authorization part yourself, and there are lots of things that know how to work with SPIFFE identities, but that's not what SPIFFE does; it's tangential. And once you have identity solved, authorization actually becomes a much easier problem.

The SPIFFE ID is just a URI with a spiffe:// prefix. We have a domain, which is our trust domain, so everything under this domain is issued by this SPIFFE setup and we trust it, and then everything in the path is the identifier. It can be hierarchical, it can be location-based, like EU versus US, it can be name-value pairs. SPIFFE doesn't say what this needs to be; you can do whatever you want. But that doesn't mean you should do whatever you want, because a lot of systems have their own conventions about what the SPIFFE ID should look like.
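To make the ID format concrete, here is a minimal Go sketch of parsing one with the go-spiffe v2 library; the trust domain and path below are made-up examples, not anything from a real deployment:

```go
package main

import (
	"fmt"
	"log"

	"github.com/spiffe/go-spiffe/v2/spiffeid"
)

func main() {
	// A hypothetical SPIFFE ID: a trust domain plus a hierarchical path.
	id, err := spiffeid.FromString("spiffe://example.org/ns/payments/sa/api-server")
	if err != nil {
		log.Fatalf("not a valid SPIFFE ID: %v", err)
	}

	fmt.Println("trust domain:", id.TrustDomain().String()) // example.org
	fmt.Println("path:", id.Path())                          // /ns/payments/sa/api-server
}
```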
Since we're talking about Kubernetes, the SPIFFE ID will look something like spiffe://<cluster>/ns/<namespace>/sa/<service-account>: your cluster name as the trust domain, then ns followed by the namespace, then sa followed by your service account. So the identity of a workload is tied to which cluster it's running in, which namespace it's in, and which service account it's using. In most Kubernetes setups that use SPIFFE, that's the workload's ID. This means you need to be conscious of how you use service accounts and not reuse one across things that are not the same workload, or they'll end up with the same ID and you can't distinguish them when you're talking about authorization. It makes sense that a lot of pods will have the same ID: if they're part of the same deployment or the same service, they should logically have the same SPIFFE ID. But if you're using SPIFFE and SPIRE, you shouldn't be reusing service accounts where things are not logically the same identity.

So that brings us to SPIRE. SPIRE is the SPIFFE Runtime Environment, the production reference implementation of SPIFFE. As I said before, SPIFFE is the spec; it confuses me even when I talk about it, and a lot of times people say SPIFFE or SPIRE when they mean the other, or say SPIFFE/SPIRE and lump the whole system together. I'll try to be explicit about which I mean. SPIRE is the production implementation of SPIFFE, and there are other things that can implement different parts of the SPIFFE spec, like your service mesh.

This is the architecture of SPIRE. We have a SPIRE server and agents. The agents live in Kubernetes on each node as a DaemonSet. And what we call attestation, which is basically a set of facts we can make provable observations about, happens between the agent and the node. In SPIRE we want two identities: we want the node to have an identity, and then we want the workload on that node to have its own identity. The agents and the server work together to do both. First, when a node comes up and a SPIRE agent starts on it, the agent wants to prove the node's identity to the server. This can be done a couple of different ways depending on your environment, but it basically comes up with some provable facts about the system that it sends to the server, and then the server, usually via a third party, attests to the validity of those facts. Once the node has its identity, workloads can communicate with the agent over the Workload API and say: now give me my identity, give me the certificate that asserts my identity. The agent will query the kernel, usually, and other sources depending on the plugins you're using, to find out the identity of this workload, then check with the server to make sure this workload has been registered, and then it can issue the SVID. It seems like a complicated process, but there's a lot of good caching involved, so it's relatively quick. The identity is given back as a SPIFFE ID and an SVID, the certificate or JWT that cryptographically validates the identity.
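As a rough sketch of what that Workload API call looks like from inside a workload, again using go-spiffe; the agent socket path is deployment-specific, and the one below is just a commonly used default, not necessarily yours:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Ask the node-local SPIRE agent for this workload's X.509 SVID.
	// The socket path is an assumption; use whatever your deployment exposes.
	svid, err := workloadapi.FetchX509SVID(ctx,
		workloadapi.WithAddr("unix:///run/spire/sockets/agent.sock"))
	if err != nil {
		log.Fatalf("unable to fetch SVID: %v", err)
	}

	fmt.Println("SPIFFE ID:", svid.ID)
	fmt.Println("certificate expires:", svid.Certificates[0].NotAfter)
}
```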
The SVID is a short-lived certificate that SPIRE takes care of rotating. The agent handles the rotation and notifies the workload when it happens. If you've ever dealt with SSL/TLS certificates and rotating them, you know how much of a pain it is. There are ways to automate that with Let's Encrypt and things like that, but SPIRE can take care of all of it for you, and you can make these credentials very short-lived, on the order of a few minutes if you want. Obviously there are scalability trade-offs, so you find the right value, but it means that if a credential is ever compromised, it's usually dead fairly quickly. This idea of credentials living outside the workload but being attached to the workload I've heard referred to as ambient credentials, which I really like: it's not some shared secret the workload has embedded, it accompanies the workload and is part of the workload's identity.

As I mentioned before, SPIRE uses a plugin architecture. The first set of plugins is for communicating with the upstream authority: you can have SPIRE be your CA, your root of trust for your certificate authority, or you can tie it into an existing CA infrastructure if you have one. The other plugins are the node attestors and the workload attestors, and those exist on both the server and the agent side. For instance, let's take a real-world scenario like an AWS deployment. If you have a Kubernetes cluster in AWS with SPIRE running on it, the SPIRE agent will query the local AWS API available to that node to find out: who am I? What is this node? It gathers that information and sends it off to the SPIRE server, and the SPIRE server will also talk to AWS out of band to confirm the information it just got from the agent. Once they agree that they both get the same answers from AWS, they can say: all right, this node is this node in AWS. Now that we have that identity, we can issue certificates to that node, and that node can issue identities based on it. The workload attestors come into play when the workload comes up: they'll query the kernel, getting, say, the process ID, and if you have a Kubernetes setup they can be configured to query Kubernetes — what is my pod name, what images am I running in this pod — and all of that information gets combined together.

In this model, the workloads are completely untrusted. The SPIRE server is completely trusted; it's part of your CA infrastructure and should be secured just like any issuing authority where you keep your root certificates in the organization. And the SPIRE agent is somewhere in the middle: it's mostly trusted, because it's the one that can issue workload identity certificates, but a lot of its measurements are confirmed by the SPIRE server as it does its work. The other thing to realize here is that workloads have to be pre-registered with the SPIRE server, so that certain nodes can only mint identities for certain workloads. You set that up ahead of time, out of band, essentially.
You can do that manually through the command line, but better ways are automated processes: either your CI/CD process doing it during a deployment, or something running in your Kubernetes cluster that registers new workloads with the SPIRE server as pods come up, so the workload is already registered when it tries to get its identity.

So now that we've broken some ground and talked about the fundamentals — what is zero trust, what is identity — how do we put it into practice? In most situations you're not going to be greenfield. You're going to have a lot of legacy systems with usernames and passwords and bearer tokens or other secret credentials stored somewhere, and you need access to them. Vault is a very common one, some sort of password or secrets management system. So how do you integrate this with your existing workflow? Well, Vault can use X.509 certificates as identity, and you can configure Vault to trust the SPIFFE ID that's part of the certificate. You can put an ACL in Vault that says: trust this specific SPIFFE ID coming from these certificates; Vault validates the cert, proves it's valid in your trust domain, and then the workload gets access to the secrets. So now you have workloads with no embedded secrets that can talk to Vault and retrieve the secrets they need for talking to third-party systems. There's another project called spiffe-vault, which lets a process read secrets from Vault based on that process's SPIFFE SVID. The really common scenario this solves is: I have a CI/CD step, it's a bash script, but it needs to pull some secrets from Vault, and I don't want to embed the Vault credential in that workload. When the script starts up in my CI/CD process, say in the Kubernetes cluster, it can get its identity from SPIFFE, use that to authenticate to Vault, and then run the Vault command-line utilities as if it had already logged in.

Databases work in a very similar way. A lot of the most popular databases will allow X.509 certificates to be used as your identity. The details vary by database engine, but essentially you configure the user to be identified by certain criteria on the certificate. You take the SPIRE trust bundle from your SPIRE issuer and install it in your database engine, however that happens for that engine, so it can validate that certificates were signed by a SPIRE-trusted authority. Then you can require that the issuer matches your SPIRE root and that the subject of the certificate matches the SPIFFE ID, that URI we talked about. So you tie SPIFFE identifiers to database users, and now any workload with those identifiers can just connect to the database and have it all work.
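A minimal sketch of that workload side in Go, assuming something like spiffe-helper has written the SVID, key, and trust bundle to disk; the paths, host, and database user here are all made up, and the exact certificate-to-user mapping varies by engine:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver
)

func main() {
	// Hypothetical paths: something like spiffe-helper is assumed to have
	// written this workload's SVID, key, and the SPIRE trust bundle to disk.
	dsn := "host=db.internal user=payments-api dbname=payments " +
		"sslmode=verify-full " +
		"sslcert=/run/spiffe/svid.pem " +
		"sslkey=/run/spiffe/svid_key.pem " +
		"sslrootcert=/run/spiffe/bundle.pem"

	db, err := sql.Open("postgres", dsn)
	if err != nil {
		log.Fatalf("open: %v", err)
	}
	defer db.Close()

	// The database is configured, out of band, to map fields on this
	// certificate to the "payments-api" user, so no password ever
	// appears anywhere in the workload.
	if err := db.Ping(); err != nil {
		log.Fatalf("ping: %v", err)
	}
	log.Println("connected without any shared secret")
}
```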
Another very popular integration with SPIFFE identities is your service mesh. I'm assuming most people here know what a service mesh is, but raise your hand if you do. All right, so just very briefly: it's a dedicated infrastructure layer that handles service-to-service communication, and it gives you a lot of nice features — service discovery, load balancing, failover and recovery, encryption, and security policy enforcement — usually with a data plane, a control plane, and some API to control it all. The most popular ones are Istio, Linkerd, and Consul. Of the features I mentioned, encryption and security policy enforcement are the ones that really matter for something like identity. Most or all of the service meshes out there have their own concept of identity, or they piggyback on Kubernetes identity attributes, but they don't go as far as SPIFFE does. Remember the attestation features we talked about: SPIFFE doesn't just trust that the Kubernetes service account is right, it actually interrogates Kubernetes, it interrogates the kernel processes, your node deployment on AWS or bare metal, even down to a hardware TPM if you want. SPIRE can do these deeper attestations of what your identity actually is, and we want to leverage that in the service mesh.

When we're talking about Kubernetes, the most popular service mesh is Istio, a project started by Google, IBM, and Lyft, using the Envoy proxy. It's designed to be Kubernetes-native but also to work in non-Kubernetes scenarios and be platform-independent. As part of the communication between services, most service meshes will do mTLS, mutual TLS connections, and the nice thing is that SPIFFE is already issuing these X.509 certificates, which can be used as the keys and certificates for that encryption. You can configure Istio to use SPIRE: there's the Secret Discovery Service (SDS) API in Envoy and Istio that allows the Istio sidecar to talk to the SPIRE agent and get the secrets for that particular workload, and SPIRE takes care of rotating those secrets as well. And because we're using SPIRE, we can go further than just the service account: we can make attestations based on the pod name, the container image, the Kubernetes labels and annotations, and use those deeper, attested infrastructure attributes for policy enforcement at the service mesh level.
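The mesh handles all of this transparently in the sidecars, but as a hedged illustration of what that SPIFFE mTLS amounts to, here's a sketch of doing the same thing directly in application code with go-spiffe; the socket path, the server's SPIFFE ID, and the URL are all assumptions for the example:

```go
package main

import (
	"context"
	"io"
	"log"
	"net/http"

	"github.com/spiffe/go-spiffe/v2/spiffeid"
	"github.com/spiffe/go-spiffe/v2/spiffetls/tlsconfig"
	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx := context.Background()

	// X509Source keeps the SVID and trust bundle fresh in the background,
	// so rotation is handled for us.
	source, err := workloadapi.NewX509Source(ctx,
		workloadapi.WithClientOptions(
			workloadapi.WithAddr("unix:///run/spire/sockets/agent.sock")))
	if err != nil {
		log.Fatalf("unable to create X509Source: %v", err)
	}
	defer source.Close()

	// Only talk to a peer that presents this exact SPIFFE ID (made up here).
	serverID := spiffeid.RequireFromString("spiffe://example.org/ns/payments/sa/api-server")
	tlsConf := tlsconfig.MTLSClientConfig(source, source, tlsconfig.AuthorizeID(serverID))

	client := &http.Client{Transport: &http.Transport{TLSClientConfig: tlsConf}}
	resp, err := client.Get("https://payments.internal:8443/healthz")
	if err != nil {
		log.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	log.Printf("response: %s", body)
}
```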
So, switching gears a little bit and talking about supply chain security. Sonatype puts out a State of the Software Supply Chain report every year, and since 2019 they've reported an average 742% year-over-year increase in supply chain attacks. That's crazy high, it's getting worse, and it's going to keep getting worse. Part of this is that as we mature as an industry, our runtime environments are getting more and more secure, with fewer and fewer holes, so attackers have gone looking for other places. And I don't know about you, but I've never been in an organization that put as much love, attention, and money into its build system as into its production system. So that's where the attacks are going. Attackers also see the benefit of supply chain breaches because if you get in early enough, the consequences are far-reaching: if you can compromise a low-level library that's used all over the place, like Log4j, you can reap those rewards in lots of different ways.

Who here is familiar with Tekton as a project? All right, not as many, but if you follow supply chain security, Tekton is a big part of that in the cloud native world. Tekton is a Kubernetes-native CI/CD system — or a framework for building CI/CD systems might be a better way to say it. As in Kubernetes, everything is YAML objects. Tekton also has umbrella projects like Tekton Chains that, when put together, give you first-class security features like signed provenance and hermetic builds. Signed provenance basically means that every step of the build is signed and can be cryptographically verified later by someone else.

Going further with that, where does SPIFFE come into this? There's something called SLSA, which stands for Supply-chain Levels for Software Artifacts. It's basically a set of recommendations for software build systems, with different levels, and as you go up the levels there are stricter requirements about how your builds are done and the security controls around the artifacts you produce. Level 3, which is the second highest, has this one requirement: non-falsifiable provenance. It's not enough to say the artifact was signed; how do I know some step of the process wasn't compromised along the way? Who cares if I get a signed binary if something was injected in the middle, or the build process was changed in the middle? Tekton Chains, just because of the way it works with Kubernetes pods, can't guarantee this on its own. It can guarantee the provenance of the build artifacts and the steps between the processes, but not that something didn't modify a task while it was running. For that you need something outside of it, and that's where SPIFFE comes in. There's TEP-0089, a Tekton Enhancement Proposal, which puts SPIFFE/SPIRE identities on the TaskRun pods in Kubernetes and uses those X.509 SVIDs to sign each TaskRun, so you can tell before and after whether the TaskRun was modified. It's not just the outputs, but the run itself that you can show wasn't tampered with while it was running. This work is ongoing — parts have been merged, parts are still in progress — but it's a feature I'm really looking forward to in Tekton. And as Tekton becomes more popular and people use it to replace Jenkins and the like, people will just get these features by default. If you can say, out of the box, "because I have a SPIRE server and I've connected my Tekton builds to it, I now have a SLSA level 3 build system," that's quite impressive to get out of the box.
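This isn't the actual Tekton Chains implementation, but as a rough illustration of the idea behind TEP-0089, signing a step's output with the pod's SVID key and verifying it against the certificate looks something like this in Go; the socket path and payload are made up, and it assumes the default ECDSA SVID keys:

```go
package main

import (
	"context"
	"crypto"
	"crypto/ecdsa"
	"crypto/rand"
	"crypto/sha256"
	"log"
	"time"

	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Fetch this step's SVID; its private key never leaves the pod.
	svid, err := workloadapi.FetchX509SVID(ctx,
		workloadapi.WithAddr("unix:///run/spire/sockets/agent.sock"))
	if err != nil {
		log.Fatalf("fetch SVID: %v", err)
	}

	// Pretend this is a task run's result; sign its digest with the SVID key.
	result := []byte(`{"artifact":"registry.example.org/app@sha256:...","status":"ok"}`)
	digest := sha256.Sum256(result)

	sig, err := svid.PrivateKey.Sign(rand.Reader, digest[:], crypto.SHA256)
	if err != nil {
		log.Fatalf("sign: %v", err)
	}

	// A verifier holding the trust bundle can later check this signature
	// against the certificate, and therefore against the signer's SPIFFE ID.
	pub, ok := svid.Certificates[0].PublicKey.(*ecdsa.PublicKey)
	if !ok {
		log.Fatalf("unexpected key type")
	}
	if !ecdsa.VerifyASN1(pub, digest[:], sig) {
		log.Fatalf("signature does not verify")
	}
	log.Println("result signed and verified with the workload's SVID")
}
```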
Another project that can integrate with SPIFFE in interesting ways is Sigstore. If you haven't heard about Sigstore, I don't know where you've been for the past couple of years, because it's everywhere; pretty much every conference I go to, it's mentioned in a keynote. Sigstore, if you're not familiar, is an open source project that handles signing, verification, and provenance checks. As someone mentioned in one of the keynotes, this is probably something we should have solved back in 2005. Lots of different projects have tried to solve it in lots of different ways, but never in a way that's robust, easy to use, and cryptographically verifiable by everyone. Lots of big companies are working on Sigstore — Google, Cisco, GitHub, Red Hat among them — and it has integrations with various build and packaging systems, including Tekton Chains, which I mentioned earlier; putting signatures into Sigstore is one of the ways Tekton Chains proves its provenance.

There are a couple of ways we can integrate SPIFFE identities with Sigstore. If you've ever used Sigstore, one of its cool features is keyless signing. Sigstore can integrate with an OIDC provider — OpenID Connect — and once you've proven your identity to some OIDC provider, say Google or GitHub or whatever internal identity provider you choose, it can use that identity to sign the artifact: it produces a temporary key that lives only long enough to sign that artifact, ties the signature back to your identity, and then throws the key away. So no one can ever reuse or compromise that key, and you can guarantee the artifact was signed by the person who owned that identity. But that means a person has to be there, and an automated build system isn't necessarily tied to a person; there isn't someone sitting at the keyboard every time a build runs to log into an OIDC provider and say, okay, I signed this artifact. We want to do this in an automated way. So Sigstore can use SPIFFE as its OIDC provider: the SPIFFE identity is what's used to sign, and the temporary certificate is tied to the SPIFFE ID and recorded in Sigstore.
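As a hedged sketch of that automated, keyless path, a build workload could fetch a short-lived JWT-SVID and hand it to the signer as its OIDC identity token; the audience value and socket path below are assumptions that depend on how the signing side is configured to trust your SPIRE OIDC endpoint:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/spiffe/go-spiffe/v2/svid/jwtsvid"
	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Ask the SPIRE agent for a JWT-SVID scoped to a Sigstore-style audience.
	// The audience is illustrative; it depends on what the signer expects.
	token, err := workloadapi.FetchJWTSVID(ctx,
		jwtsvid.Params{Audience: "sigstore"},
		workloadapi.WithAddr("unix:///run/spire/sockets/agent.sock"))
	if err != nil {
		log.Fatalf("fetch JWT-SVID: %v", err)
	}

	// This short-lived token can be handed to a keyless signer (for example,
	// cosign accepts an identity token), so the signature is tied to the
	// SPIFFE ID rather than to a person.
	fmt.Println("subject:", token.ID)
	fmt.Println("expires:", token.Expiry)
	fmt.Println("token:", token.Marshal())
}
```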
So that's one direction, where Sigstore bases its trust on SPIFFE IDs. But we can go the other way around too. There's a new experimental feature that's been merged into SPIRE that allows Kubernetes workloads' container images to be verified with Sigstore. As I talked about before, when the workload attestors are running and verifying a workload, they can look at various attributes of that workload in Kubernetes: the pod name, what image it's running, and so on. With this feature, they can also ask: does the image that's running have a signature in Sigstore, and do I trust the identities that produced that signature? So, for instance, I can say that in my cluster I only allow images that have been signed by me, mpeters at Red Hat, to get SPIFFE IDs and SPIFFE credentials. You sort of complete the circle: these images have been signed in Sigstore, and now my identity provider can link the container running that specific image back to signatures from identities I approve. Does that make sense? It's a little circular, because each side can use the other as trust, but it gives us a nicely closed loop around the build system. This is the plugin architecture I talked about before: the workload attestor can now reach out to Sigstore as part of its attestation.

I mentioned at the beginning that I work on a project called Keylime, and it's really hard for me to give a talk without talking about Keylime, because I think it should be used everywhere. Keylime is a CNCF sandbox project that provides remote boot attestation and runtime file integrity attestation, and it ties that back to a hardware root of trust. Basically, it means we can create policy based on your measured boot: as your machine boots up, the boot loader and the kernel know how to record different measurements into the TPM — a hardware TPM, a software TPM, a cloud TPM, whatever — and these cryptographic devices let you build a hash of a hash of a hash of different properties, so you can make guarantees about those hashes and use them to verify that nothing has been tampered with along the way. There are great talks about Keylime out there, but essentially what we want to be able to ask is: has this node been tampered with? Remember, we're trusting the SPIRE agent, to a certain degree, not to have been tampered with — how do we make guarantees about that? Well, Keylime can make guarantees that your SPIRE agent, or anything else on your system, hasn't been tampered with.

There are a couple of ways we could integrate the two. One, like we talked about with the service mesh, the mTLS connections can be secured through SPIRE; that's fine, and a very common approach. But the other idea — and this is an avenue I've been thinking about, so if you find it interesting, let me know and I'll work on it — is using Keylime as an attestor plugin in SPIRE. When SPIRE is doing attestation on a node, the SPIRE agent would gather information about the TPM, hardware or software, and about the Keylime agent, and the server side of the attestor would then query the Keylime verifier: does this match what I know about this node? Is the TPM from a valid manufacturer? And has the node passed attestation — has anything been tampered with on this node? If anything has been tampered with, then I'm not going to let it issue any identities. That would again give us a nice closed loop: now we're not just trusting SPIRE, we're trusting the hardware measurements inside that TPM as the root of trust for the whole identity system. It would look very similar to what I showed for Sigstore: the node attestor on the server side talks to the Keylime verifier to confirm everything is correct, and the node attestor on the agent side talks to the Keylime agent to get information about the TPM, the hardware, and the attributes of the Keylime agent on that node.
So if something modifies your boot sequence, if someone injects a kernel parameter you don't approve of, if somebody modifies the Keylime agent or some file on your system that you're not okay with, Keylime can fail the attestation, and then when SPIRE comes to issue an identity for that node, it'll say: nope, sorry, this node does not pass attestation. None of the identities would be issued, and none of the credentials could be compromised. All right, thank you. I know I covered a lot — a lot of different systems talking to each other — but do you have any questions? Okay, thanks.