How many of you are familiar with SPIFFE? Cool. For the ones that aren't, just for comparison, a show of hands in the crowd: how many of you are security professionals? Okay. How many developers? Platform engineers? Cool. We should have something for everyone.

SPIFFE has been around for a little bit; the first commit was in 2017. This is not going to be your typical SPIFFE session. It's not going to be a deep dive on implementation details. We are going to talk about many different dimensions, but it's primarily going to revolve around the value prop of the project. Why is it useful? Why should it matter to you? How can you apply it? Hopefully we can also learn from you, what you extrapolate, if you think of novel ideas. We would love to answer as many questions as you have, but you're also here to have conversations. Fred? Cool.

With that, one of the things that we want to try to cover is real-world SPIFFE scenarios and outcomes. We've had the opportunity over the past few years to be involved with several colleagues of ours in trying to work out a path towards what works, what doesn't work, and what those use cases are. Part of our hope is to give you some of this information from our experiences that may help you along the way.

With that, we're going to enumerate a number of observations that trace back for as long as we think people have tried to solve production identity issues: why is there so much technical debt, and what's the job to catch up? If you go check the early RFCs from the Internet's growth, you're going to find many considerations, but one that's barely present is security. I screenshotted it; all they covered is this one line. From there on, we're at a point in history where software not only runs on individual servers, it's very likely managed by a cloud provider. It's likely, given European mandates, that you're not only doing multi-cloud, you're doing many multi-clouds, and you're trying to address cross-authentication issues. It's also a very interesting time in that there's a high-speed network that interconnects our modern life, our modern economy, but at the same time we are hyper-connected to adversaries and hackers.

So in terms of trust, part of it is: how do we actually develop to a point where we can start trusting the systems, the people, and the processes? It's a really difficult thing to take into consideration, and part of the reason why is that many of the assumptions that we have today are very different from the assumptions that we had five years ago, ten years ago, twenty years ago. One of the realities about most computer systems run by companies is that whatever was around when they first developed their systems is what they're native at. If you developed your systems when mainframes were the thing, then the only thing you're native at is probably going to be mainframes. You're playing catch-up with everything else from then on. So when we look at trust, we have those changes of assumptions over time, like going from on-premise to cloud. To develop and maintain trust, it's important to look at those assumptions and to realize what is changing and what is not changing.

With that, another observation was: well, we know there are going to be vulnerabilities in software, but we'll just tackle them as they come.
We'll swat them like flies, but it's more like trying to be a beekeeper on a massive honey production property. On average, the National Vulnerability Database reports more than 15,000 new software vulnerabilities a year. So it's a given that you're going to be struggling just to keep your software from getting compromised, which begs for novel approaches.

And not only can we not trust the software, we can't trust the people either. While we have the capacity to reason, our decisions are often emotional too. People make mistakes, there's human error, people also get upset, and those individuals may have access to every single high-value asset within your organization. There are tens of thousands of successful attacks and breaches that have a common denominator of either phishing or an exfiltrated credential. So that's another thing to keep an eye out for.

And while we're looking at the landscape: sure, there are multiple platforms, multiple cloud boundaries, and we can no longer just put up walls, because people with good intentions, in order to conduct their jobs, may be accessing resources across boundaries, and those accesses are legitimate. We should also be able to determine with certainty what is from our people versus what could be a potential compromise.

So part of it is that as we move towards these new environments, there are a couple of trends that we're starting to see. We're expecting more development to occur in shorter timeframes. There are a lot of newer technologies coming out, so we want to be able to ship faster, scale faster. And simultaneously, we're also looking at an increased level of communication between different environments, different systems. It's no longer just the company and a few applications within it communicating; we're looking at massive quantities of information and work being done with other groups. And so now the blast radius of a given exploit is massive, because once an attacker gains entry into one area, they can often jump from environment to environment until they get to the thing that they actually want. What this means is that the entire approach of leaning on perimeter defense is becoming untenable because of these changes.

So we have to look at how we drive this towards something much more fine-grained, so that if a breach occurs, the reward from that breach is reduced, because the architecture of the systems and the applications themselves has been designed specifically to limit, and not fully trust, the systems they connect to, and instead to ask: what am I talking to, and what am I allowed to communicate to it? And we have to do this not only fine-grained but also in a very dynamic environment, because it's no longer a world where you sit down, write your access controls, and they last for several years. We're getting to the point where we have to make on-demand decisions for every request in order to handle the changes we're seeing within the industry.

And for the ones in the audience that are engineers and application developers: you know there's a trend where you're not only being tasked with the business logic, you're also being tasked with availability and performance, and there's a current trend around DevSecOps where we're also expecting you to shift security left.
So all of these little people on the slide are trying to wrangle things: you could be all of these folks, or these could be different people in your organization, but we are not keeping up; we're barely treading water.

To get a little more into the substance: we talked about how we can't have perimeters anymore; the perimeter evaporated with the cloud. Access control, secrets management, and identity are tightly intertwined, and this perimeter is pretty soft. Managing secrets at scale requires proper authentication and proper authorization. You need to prove, by possession or by recognition technology, who you are. So how, in massively large infrastructure, can we tell with certainty whether something we've never seen before is rogue or legit? How do we transition from "this is something I've never seen before, therefore it should have no access" to "this is something I know very well, and therefore here are its keys"?

But when we issue the keys, or we use secrets management and map policies to say this workload will have access to this secret, we start encountering a problem of infinite regression. Vault is great, other secret stores are awesome, until you have to prove to them who you are, because then you need another secret. It's the problem of secure introduction: how do we do that? And for that secret, well, let's protect it, let's encrypt it, but now we need a decryption key. And this repeats over and over until ultimately we have a paper key that we're very likely putting into a physical bank vault, and those are the root keys of our organization. Also don't forget that the moment you have to rotate some of these keys because one of them is compromised, you may not even know where all the keys are that you have to go and rotate. So you're left with this choice: do you rotate the key and risk downtime or bringing a system down because your tracking isn't done well, or do you leave a compromised key in place? It's not an easy problem.

So enter SPIFFE. And SPIFFE starts with a man many of you have come across this week, or know by his accomplishments and what he has done to modernize infrastructure and development platforms: Joe Beda, one of the co-creators of Kubernetes at Google. Having left Google and figuring out what he wanted to work on next, he evaluated what other systems existed at Google that could follow the same pattern as Kubernetes: let's externalize it, make it an abstraction for everyone else, and open source it. And he landed on a system at Google called the Low Overhead Authentication System. A deep dive on that is outside our scope, but Joe presented a great talk on "who's calling": if a service comes up, how do I know what it is, if all we're looking at is IP addresses, or certificates that have been reshared? This kicked off a movement and convened people from large organizations that had solved the problem for themselves but understood the value prop of doing it openly, as a community. Gathering at the Netflix office, individuals from Netflix, Facebook, and Google proposed the key attributes and virtues: if we're going to build a universal PKI that is fully automated and high velocity, what should that look like?
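Going back to the secure-introduction problem for a second: the way SPIFFE sidesteps the bottom turtle is that a workload never starts with a secret at all; it just asks the local Workload API who it is, and the agent attests it in place. As a rough sketch of what that looks like from the workload's side, using the SPIFFE project's go-spiffe v2 library (the socket path below is a made-up example; deployments commonly set the SPIFFE_ENDPOINT_SOCKET environment variable instead):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx := context.Background()

	// No API key, no password, no "secret zero": the workload dials the
	// local SPIFFE Workload API socket, and the agent decides who it is
	// based on attestation (kernel metadata, kubelet, cloud metadata, ...).
	// The socket path here is a hypothetical example.
	source, err := workloadapi.NewX509Source(ctx,
		workloadapi.WithClientOptions(
			workloadapi.WithAddr("unix:///run/spire/sockets/agent.sock"),
		),
	)
	if err != nil {
		log.Fatalf("unable to create X509Source: %v", err)
	}
	defer source.Close()

	// The returned SVID is the workload's identity document: an X.509
	// certificate whose URI SAN carries its SPIFFE ID.
	svid, err := source.GetX509SVID()
	if err != nil {
		log.Fatalf("unable to fetch SVID: %v", err)
	}
	fmt.Println("I am", svid.ID) // e.g. spiffe://example.org/my-workload
}
```

The point of the sketch is the shape of the bootstrap: there is nothing to leak in the container image or environment, because identity is derived from attestation rather than handed out as a credential.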
From there the community gathered. The first commit was in December 2017, and the project was accepted at the sandbox level in the CNCF in April 2018. Since then it's had great traction and adoption; it is one of the most rapidly growing authentication standards for cloud native projects. We see it in Istio, we see it in Envoy, and if you go to the SPIFFE repo and the SPIRE repo and check the adopters markdown file, that list grows regularly. We've also seen it deployed by very large organizations that we've come to rely on. We rely on GitHub for our jobs; some individuals may like TikTok from ByteDance; if you use Square payments, or if you hail a car with Uber: all of their services, all the interactions between the microservices that make up their applications, are SPIFFE protected. And these environments scale from a few dozen hosts to thousands, even millions of hosts at a global scale.

So part of it is, when we look at this from a company perspective, we have some good examples of other problems that mirror ours. Historically, if you look at application development, applications and users were very much centric to the application; every application maintained its own database of users. We're sort of in this position today with Kubernetes, where every Kubernetes cluster keeps track of its own workloads and is very isolated to those particular environments. Over time we worked out that we can extract that out into its own identity provider for those users, and at that point we can get single sign-on across systems. If you think of it that way, SPIFFE provides a set of standards that gives you the equivalent of single sign-on, mutual TLS authentication, across the board. And then we can take it a step further, because these are very well defined standards, and start looking at how we federate: we have two CAs, and we can get those CAs to federate with each other. That gets us to a place where, if you trust my organization to attest my workloads properly and I trust your organization to attest your workloads properly, then we can federate those top-level CAs, and that gets us a globally scalable environment where we can reason about identities in a secure way.

I'm sure most of you understand this very well, but leaving KubeCon, what I've seen from the majority of people who understand the technology well is: how do they convince the rest of the business? We're not going to play slide karaoke on this, and the slides will be made available, but: why would it matter to an executive, how do you do that top-down sell? Why would it matter to the cloud provider you consume from, and their different services? If they put this on their service mesh, they get mutual transport layer security for free in one fell swoop; Amazon Web Services App Mesh implements SPIRE to accomplish that, and there are many others that have followed suit. We've included a few more: why would it matter to the infosec team, or to security engineering? Why would it matter to DevOps practitioners? There are a number of these that we're going to rehash further down.

And while we're here, given the objectives around regulations and compliance that exist in Europe, there are a couple of compelling benefits that will appeal to you. The reference implementation of the SPIFFE APIs, which is SPIRE, ensures that identities are non-reportable... I don't think I'm saying the word right, how would you say it, Tiffany? Repudiated? Non-repudiable, thank you.
So there's native encryption: all traffic in motion will be encrypted, which addresses a problem at the application layer. Going back to the Google story of why Google did this: when the NSA leaks happened and the organization claimed that it was able to tap the dark fiber, the fiber optics between Google's data centers, and sniff their traffic, Google executives issued a company-wide mandate that all traffic had to be encrypted at the application layer, because the network was hostile; they couldn't trust it.

So there's that, plus compliance and auditability. Do you want to speak to the GDPR angle, Fred? Yeah, so when you look at GDPR and location, part of it is that you have to be able to tell where things are, and you have to be able to tell who you're communicating with. This gives you the ability to provide those identities in a way that helps you reason about some of this, based on other factors too; it's not just about the certificates. You have to have constant re-verification that an identity is where it claims to be. For example, one of the key features we see in many SPIFFE implementations is that the lifetimes of the certificates are kept very short. The default out of the box is usually one hour, so the keys get rotated every half hour or so. What this means is that you have to constantly re-validate the environment, and those re-validations could include things like: am I in the correct geographic location? Do I have any CVEs at a certain severity that I need to report on? Having that constant re-validation helps a lot with compliance, because it's no longer a point-in-time "am I compliant" where the system then stays up for months or years on end. It provides constant re-validation of the environment and the context the workload lives in, and it drives you towards the ability to make decisions on the fly as to whether you want to continue accepting the risk.

Just to add to that: there's a lot of flexibility and customization in how you design trust boundaries and trust domains, having the ability to have multiple roots of trust, and ensuring, as we see in the picture, that data in America may travel to clusters and deployments in Europe, but data in Europe stays local, stays sovereign, stays untampered.

There are savings; you have witnessed this yourself. So generally, when you look at what executives look at from a financial perspective, there are usually two aspects: CAPEX and OPEX. Usually what ends up happening is we see an increase in CAPEX, because you have to go build infrastructure or make that shift in infrastructure, but over time, as you get to a more automated environment, it creates an opportunity on OPEX, and part of this is through the enablement of automation. One pattern that I've seen: when we look at what the current identity of most systems is, it's not a cryptographic identity, it's actually the IP address and port combination. That means every decision has to be made upon which IP addresses and ports are allowed to communicate with each other. So when it comes time to make a change, like "hey, we're going to sunset an application," you have to go through all of those access control lists on your firewalls, and in the worst case the developers asked for permission for the first application and then didn't for the subsequent things that went through.
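To make the contrast concrete: instead of an IP-and-port ACL, the allow rule becomes a statement about a cryptographic identity. A minimal sketch with go-spiffe v2, assuming a hypothetical allowed client of spiffe://example.org/billing (the ID and port are illustrative, not from the talk):

```go
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/spiffe/go-spiffe/v2/spiffeid"
	"github.com/spiffe/go-spiffe/v2/spiffetls/tlsconfig"
	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx := context.Background()

	// This workload's own SVID and trust bundle, kept fresh by the agent.
	source, err := workloadapi.NewX509Source(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer source.Close()

	// The "firewall rule" is now an identity, not an address: only the
	// workload holding this SPIFFE ID may connect, wherever it runs.
	// The ID below is a hypothetical example.
	allowed := spiffeid.RequireFromString("spiffe://example.org/billing")
	cfg := tlsconfig.MTLSServerConfig(source, source, tlsconfig.AuthorizeID(allowed))

	server := &http.Server{Addr: ":8443", TLSConfig: cfg}
	// Certificates come from the TLSConfig, so the file arguments stay empty.
	log.Fatal(server.ListenAndServeTLS("", ""))
}
```

Sunsetting the hypothetical billing service then means deleting one registration, not hunting IP rules across every firewall it ever touched.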
So what this allows us to get to is a place where we can analyze the cryptographic identities, using things like Envoy, OPA, Kyverno, and similar tools that have integrated SPIFFE support, and they're able to make those decisions. So when you decide that, hey, this application is being sunset, or it has been shut down, or it needs to be mitigated or isolated because of a compromise, that automation ends up paying off on the OPEX side. So there are significant advancements, from the automation perspective, that you can get out of this.

Along with automation, the big promise has been developer productivity and efficiency. Going back to learning curves: if you have to be an expert in the native controls of Google Cloud, and then you also need to learn AWS IAM, it's really hard to cross-deploy, and you need to hire individuals that are certified in these different areas. But if you're able to reason holistically about a single identity control plane that abstracts all the underlying implementation details, you gain those efficiencies. We included a chart here where we see other gains; I'm not going to enumerate all of them, but if you want to take a picture, I see phones out, and you can also check the slides online.

Also: if identity, transport layer security, and cross-cloud authentication are provided as a function of the underlying infrastructure, it's not something you need to think about. You don't need to think about revocation and certificate revocation lists, and you don't need to think about, as Fred was alluding to earlier, how you go force-rotate, because it's happening automatically and identities can be aggressively short-lived. You get that by having the underlying infrastructure do it, the same way that Kubernetes does orchestration for you: you don't need to keep an eye out for the moment a node dies so you can reschedule the container yourself. The same way that automated APIs liberated us from that infrastructure management, that's what SPIFFE does for identity.
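That automatic rotation is visible if you watch the Workload API stream: the agent keeps pushing freshly minted SVIDs, and the application never touches a CRL or needs a restart. A rough sketch, again with go-spiffe v2 (the watcher here just logs; a real service would typically use an X509Source, which applies these updates for you transparently):

```go
package main

import (
	"context"
	"log"

	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

// watcher logs every identity update pushed by the local agent.
type watcher struct{}

func (watcher) OnX509ContextUpdate(c *workloadapi.X509Context) {
	svid := c.DefaultSVID()
	log.Printf("new SVID for %s, expires %s",
		svid.ID, svid.Certificates[0].NotAfter)
}

func (watcher) OnX509ContextWatchError(err error) {
	log.Printf("watch error: %v", err)
}

func main() {
	// Blocks, receiving a new short-lived certificate on every rotation;
	// no CRLs and no manual force-rotation are involved.
	err := workloadapi.WatchX509Context(context.Background(), watcher{})
	log.Fatal(err)
}
```

With a one-hour default TTL, you'd expect this to log roughly every half hour, which is exactly the constant re-validation property Fred described for compliance.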
There's also a very specific reason we've been able to keep some of the costs down on this path: this is not a new "let's go build a new encryption scheme, let's go build a new format." What it is, is leveraging standards. It uses X.509, the same thing you see browsers use, and it integrates well with mutual TLS, which is designed specifically for X.509 certificates. So it's not like we're bringing in some esoteric set of protocols and saying use this in replacement of things like TLS. Instead we're saying: go use mutual TLS, go use the standards. What SPIFFE provides you is two primary things. The first is a very well defined identity document, so that you can reason about other identities that may not be part of your immediate infrastructure. The second is a set of APIs that define how to automate the rotation of those certificates and how to automate the delivery of that information. By focusing primarily on those two things, and on maintaining standards, it ends up allowing you to use your current, existing mutual TLS implementations.

That includes evolving standards too. Recently we have been approached by the crypto-agility community, folks working on post-quantum cryptography, because they see SPIFFE as the platform that can help them get there over the near horizon.

There are more savings. Again, this came out of organizations that have done this at extremely large scale globally and decided to contribute it to the rest of the world. They spent years, and a lot of engineering resources, to build this and to write the different integrations and plugins, but it's readily available for you to consume, so you're saving on that development time and the cost of going on that journey on your own. There are all these concerns you would have to think about if you did it on your own, very important questions to answer: who is making the certificates? How are they distributed securely? Where are they stored? What happens if something expires and gets jammed up? As I said before, all these concerns need to be answered. There are many people who have dedicated identity management teams, and being able to liberate them to focus on other concerns is what we hear from the community rewards them the most.

One of the things that came recently, at least in the reference implementation, SPIRE, was the ability to support cloud-based KMS systems. Part of the reason you would want to bring in a KMS is that if the application or infrastructure is compromised, the KMS allows you to shut off access to the signing keys, and it allows you to control and log when those keys are used. If you talk with any major organization that has something sensitive to protect, almost all of them, without exception, are using some form of KMS or an equivalent. So SPIRE now has the capability to tie into those KMS systems, so you're not signing directly in SPIRE but instead integrating with a richer ecosystem.
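Going back to the standards point for a second: because this is ordinary mutual TLS over X.509 underneath, the client side drops straight into Go's stock HTTP stack. A minimal sketch, assuming a hypothetical server identity of spiffe://example.org/inventory (the ID and URL are illustrative, not from the talk):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"

	"github.com/spiffe/go-spiffe/v2/spiffeid"
	"github.com/spiffe/go-spiffe/v2/spiffetls/tlsconfig"
	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx := context.Background()

	source, err := workloadapi.NewX509Source(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer source.Close()

	// Expect the server to present this SPIFFE ID (hypothetical example);
	// everything else is plain, standards-based mutual TLS.
	serverID := spiffeid.RequireFromString("spiffe://example.org/inventory")
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: tlsconfig.MTLSClientConfig(
				source, source, tlsconfig.AuthorizeID(serverID)),
		},
	}

	resp, err := client.Get("https://inventory.internal:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

Nothing here is SPIFFE-proprietary on the wire; a middlebox or a legacy peer sees a normal mutual TLS handshake with X.509 certificates.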
Absolutely. So, the last set of outcomes before we move into the scenarios: sure, this is awesome, but how do I explain this to my security team, why would they care? A big mandate organizations are moving towards is implementing a zero trust architecture, but there's no path to zero trust unless, when you're breaking ground for the construction, you lay a strong foundation of identity. This has to be the steel thread: you need granular, identifiable subjects, to know who the subjects are in your distributed system. You cannot do zero trust with a black box. Once you know the subjects, the objects they access, what the different relationships are, what the communication pathways are, you can make zero trust policy decisions. If they're a very defense-driven security team, you can converse with them about the OWASP Top 10, the ten most prevalent cybersecurity attack categories for web applications; the majority, seven out of ten, revolve around broken access control and poor forms of identity and intermediation. So there's lots of appeal here. And the APIs give them a programmatic means to enforce policies and to interrogate the system to determine whether things are implemented according to their security policies. So, very compelling to security individuals.

And I see some faces like: cool, so what next? How do SPIFFE and SPIRE actually work? We talked about it: when something comes up that we've never seen before, how do we get to issuing an identity? If a container comes up and gets scheduled by Kubernetes, the first thing it's going to ask the SPIRE agent, which implements the SPIFFE Workload API, is: who am I? And the agent will initiate a set of parallel introspections. It's going to ask the Linux kernel for the process metadata: what UID and GID, what PID, etc. It's going to go to the kubelet and say: hey kubelet, who scheduled this, was it in fact the controller, what metadata do you have, what namespace, what service account token? And if it's running in, let's pick Amazon Web Services, it's going to interrogate the AWS instance metadata API: what availability zone is the underlying machine in, what security group? And if everything matches the way it's intended to look, the way you've defined it up front when registering workloads (it will do these checks every time, regardless of the environment), it will go through the process of minting an identity: there's a certificate signing request, the signed identity is distributed back, and the workload is given the other key material to cross-authenticate to other workloads.

So, okay, but it's a little confusing: where's the split between SPIFFE and SPIRE? I just want to remind you that SPIFFE is the specification; it's the API documents for how these APIs should behave in a compliant SPIFFE implementation. SPIRE is actually running this in software. There are a few SPIRE components: there's a server, there's an agent, there's exposing the Workload API, and there's a Federation API for talking across multiple deployments and to cloud providers. You can see a bit of the life cycle of what that looks like from the spec, which is also what SPIRE implements. Here's a recap of the different components: there are identities, identity documents, the Workload API, and the trust bundle I just mentioned.

So how does it look in practice, actually implemented, running across a number of nodes? One thing to be aware of is that SPIFFE and SPIRE themselves are not a service mesh. All they do is provide authentication of workloads, along with a path to maintain that, to validate it, and to learn something about what a workload is. So a large part of what's necessary to make it useful is integration with other components. It integrates with things like Envoy, where Envoy is able to consume that SPIFFE identity, make use of it, and validate, through mutual TLS, the system that it's connecting to. There's integration with Istio, where Istio uses SPIFFE identities to determine what the workloads are.
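As a small aside on the identity document itself: the spec pins the ID down as a URI, spiffe://<trust-domain>/<path>, which is what lets you reason about identities from infrastructure you don't run. A sketch of pulling one apart with go-spiffe (the ID is a made-up example):

```go
package main

import (
	"fmt"

	"github.com/spiffe/go-spiffe/v2/spiffeid"
)

func main() {
	// A SPIFFE ID is just a URI: a trust domain plus a workload path.
	// This one is a hypothetical example.
	id, err := spiffeid.FromString("spiffe://example.org/ns/prod/sa/payments")
	if err != nil {
		panic(err)
	}

	fmt.Println("trust domain:", id.TrustDomain()) // example.org
	fmt.Println("path:", id.Path())                // /ns/prod/sa/payments

	// The same ID appears as the URI SAN in the X.509 SVID, so any
	// standard TLS stack can carry it and any verifier can parse it.
	fmt.Println("member of example.org?",
		id.MemberOf(spiffeid.RequireTrustDomainFromString("example.org")))
}
```

That trust-domain component is also the hook for the federation story: two organizations exchange trust bundles for their domains, and the IDs remain globally unambiguous.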
There's also recent work, which should be in the latest Istio version, that allows Istio to connect to those SPIRE servers we described before. That helps get us to the point we were making about extracting the identity provider out of the application: we get to a place where we can have an external, attestable identity provider, and we can then hook clusters into those environments. The key here is that it provides a standard way to reason about workload identities, a standard way to communicate with those systems to retrieve an identity or prove something about it, and it then makes it easy for other applications to make use of those primitives to solve problems in a larger context, based on whatever use case is being solved.

And this is not exclusive to production systems at runtime. There's active work on applying the virtues of SPIFFE and SPIRE to the supply chain, ranging from protecting your signing and verification tooling to protecting the in-toto machinery. If you've seen the projects Sigstore and Cosign, they support SPIFFE identity documents. There are several areas here; a great person to talk to is Marina Moore in the back, a maintainer of in-toto, who recently published a secure supply chain reference architecture, and it's predicated on strong cryptographic identity to ensure integrity from source all the way to building and shipping an artifact. And going back to attestation: being able to determine, has this binary been signed, does it have the binary signature that I expect to see or not? If it's not something that I have certainty my builders produced, it shouldn't be deployed. And this gets you toward a SolarWinds-proof implementation, with SPIRE integrating with the projects over on the supply chain side.

Yeah, a really good example of that as well: when you look at SPIFFE, SPIFFE is designed specifically for ephemeral identity. Are you a member of my system right now, or are you a member of a system that I trust right now? It is not designed for "let's go validate this certificate a year from now," because those certificates get rotated over time. So when you start looking at supply chain provenance, the question becomes: how do we take something that is used to bootstrap the signing process, so that something can sign at that particular time for the long term, and still have enough information that you're able to reason about it later? So there's a really nice collaboration between these short-term SPIFFE identities and the long-term needs of "I signed something, and I need to be able to validate a year or two down the line that this thing was signed by that entity."

I have slides that I can quickly flip over; we have five minutes left. I was told that we could hold the space, but I want to see what questions we have before we proceed further. Yes, and hang on, wait for the mic, because we are recording this session. Perfect, thank you, Matt.

Hello. So let's say I'm running Kubernetes and Istio. You mentioned before that Istio uses SPIFFE, and Kubernetes is essentially the workload provider. Am I essentially using SPIFFE if I am using Istio, or I guess Linkerd, assuming Linkerd does the same? Yeah, do you want to take it? Yeah. The thing with it is that when Istio first made use of SPIFFE, they used it to produce the workload identity, so you are using SPIFFE when you're using Istio.
One of the challenges we've had, and that we've been working towards, is how we handle the federation story, because if you don't have the ability to federate across environments, you're limiting your identity to that specific cluster. There are a couple of things you can do. You can say, well, we're going to put a CA at the top that will sign an intermediate, and Istio will then issue off of it, so that can help with federation. But something we've had discussions on is how we get the federation APIs themselves to become part of the standard, so that when you want to communicate across boundaries, the first thing you do is turn to a standardized way to approach it. That would mean you're not relying purely on something Istio-specific, establishing connections through transit gateways or similar and transferring all that information to and from the systems. It would provide a way to reason even across boundaries where the thing you're communicating with might not even be in the same company as you. So there's some work that needs to be done towards that.

There's one more aspect, I don't want to talk over, but something to underline: attestation. Without attestation, you may have something that looks like a SPIFFE ID, but you don't have the same security guarantees. There is recent upstream work, led by Max Lambert from HPE, where he implemented the interfaces to very elegantly swap Citadel, which is Istio's native identity system, for SPIRE; in the past that was quite cumbersome and would break across releases, but we've gotten past that. I had also opened an issue, and the team over at Tetrate, they have a booth on the expo floor, implemented federation in both Envoy and Istio to achieve cross-cluster mTLS and to authenticate on-mesh to off-mesh. So if you do just the SPIFFE within Istio, the legacy implementation, all you can do is within-cluster, and cross-Istio-cluster only with a lot of heavy lifting. But if you use SPIRE, you can have non-sidecar, non-service-mesh, traditional workloads running on bare metal that are SPIFFE-identified and cross-authenticate, and that really blurs the service mesh boundary. Does that answer your question? Yes, it did, and also there are some workloads I'm thinking about back at the office that are going to benefit, because they're not in the service mesh at the moment, so I super appreciate it.

The attestation that was mentioned can also tie down to a TPM, so if you want to say these particular things, the hardware, literally came from us, and we can prove it: those attestations are very powerful in meeting that as well. What other questions do you have? We have another question here in the back, and then you were next. Oh, did someone raise their hand back there too?
We'll go here first. So let's talk turtles: how did you solve the problem of securing the agent that tells the containers what their identity is? If there is an attacker-controlled agent, how can I be sure?

Great question. So attestation occurs at two levels. We didn't cover node attestation, but it is the process of verifying the authenticity and integrity of the underlying machine, whether that is bare metal, using an X.509 proof of possession from a root of trust, or a cloud virtual machine; there are a number of attestors. I will happily share the threat model with you. We have done both the Cure53 security audit, where they scrutinized the project, and the team over at NYU assessed the security boundaries: what if we have a rogue agent, can that agent compromise the server, can that agent compromise the workload? Happy to have that discussion if you want a sidebar, but it is a little bit elaborate, and I'd rather illustrate it on slides that I don't have in this deck. Brandon Lum? Hang on. It's okay if you run a little bit longer. Okay.

I just want to add to that: I think there are some ongoing efforts as well to tie this to the hardware root of trust, so you have measurements that get extended to the TPM, and you have things like the kernel's Integrity Measurement Architecture that actually track, with a very, very small trusted computing base, what is being executed, making sure that the right identities go to what a container is running. Thank you, Brandon. There are efforts that are quite far along: there's an existing TPM attestor that was contributed upstream by Bloomberg, and there's also a team of researchers at a university in Brazil doing work around SGX and secure enclaves; it's pretty far along, and they've galvanized the community. There are also combined efforts with the confidential computing group at the Linux Foundation. So yeah, happy to share pointers on that too.

The key to it, though, is that we don't want to trust only the agent to say "oh yeah, this thing met the requirements" and allow it to craft whatever message it wants. The best case scenario is that you have cryptographically verifiable material that can be presented, which could be from the TPM, could be an AWS instance identity document, could be a GCP equivalent, or maybe other signed information that you can feed into that to help with some of that provenance. So you definitely don't want just "hey, trust me, I'm the agent," and we're good to go.

Hey, as far as I understand, with the Workload API, at least with SPIRE, when a workload registers, it gets given a set of identities. What was the reason for doing that? Because it seems strange to me: when you have an identity, you have an identity, right? Why do you get given a choice? I'm going to try to repeat your question as I understand it: you're saying, well, you've already registered an identity, why are you being handed an incarnation of that identity plus some other identity, some other key material? I'm not quite understanding. So, like, it will give me different options to choose from as my SPIFFE ID. Right, so you define a SPIFFE ID, which may be something like spiffe://kubecon/.../spiffe-session. This is something that has yet to be born, to be incarnated, but we are going to deploy it at some point, and we may have multiple instances; if we scale up a Kubernetes service, there could be a hundred pods for this. So you're defining, conceptually, the kid that's about to be born.
This is the name we're going to give it, and if the DNA test and the paternity test check out, it's the mom and the dad, and it has the height, the weight, all these aspects, then you issue the birth certificate, if you will, and ensure that it is protected. And for that kid to go through life, that birth certificate converts into a passport or a driver's license, and that needs to federate and go through checks in different countries. I'm going a little far with the analogy, but hopefully that illustrates it. Yeah, I think so; maybe we can talk about it later. Yeah, we didn't dive into it here, but there's a very, what would be the right word, I want to be very precise with my language, there's a very deterministic process, an order of operations, that is followed to issue that. I'm happy, if you want, to spend five minutes walking through that. That was some analogy, thank you.

I want to make sure, for the recording: where can they find you, how can they contribute? Yes, so our website is up, and we actually have a book on there as well. If you liked the material that was presented here, it's not super specific to SPIRE itself; it tries to set the stage as to why SPIFFE exists and to provide some of this information. The gentleman in the back, I believe he's going to like the title. The title of the book, Fred? Something about turtles: Solving the Bottom Turtle, that was it. We also have multiple places where you can come collaborate with us: we have mailing lists, we have a Slack that you can join, and of course you're always welcome to grab the source code, collaborate on GitHub, and come participate. We are a growing community, and contributing can start as simply as showing up. In my own personal journey in open source, I had a lot of imposter syndrome. I didn't feel I was at the level of the experts, or if I had an idea, I assumed someone must have already thought of it and be working on it. Please come hang out, be vocal, tell us what's on your mind, tell us what problems matter to you. Together, Fred and I have been involved in different capacities, but we have Marcus, who's been a long-time maintainer and contributor to the project, and we have different folks in the room, and we've all come from different angles, different perspectives.

There's also an ecosystem starting to develop around some of this, so it's not just about SPIFFE and SPIRE but also the tooling developing around them. There are a number of opportunities for collaboration. One example: we very specifically do not put claims inside the SPIFFE document, other than some very basic predefined ones. There were some very good reasons for that, around trying to avoid pre-authorization getting stuck inside the claims, and X.509 itself is also very rigid; it's all or nothing, so if you put claims in, you can't select which claims to release. But there are opportunities around: okay, how do we deal with claims, how do we deal with federation or transitive identity? These are real, super interesting, unresolved problems at an industry level. We also need help with our documentation, we need help with tutorials, and we need help with novel ways to tell a compelling narrative. So, any and all contributions. Thank you for your time and for letting us hold this space.

And can we talk you into maybe going downstairs for the next two hours? As interesting as this conversation was, imagine how much more interesting over a beer. So you can find these two lads downstairs.
Hopefully, I mean, don't just pin them to this room and not let them get down there, because I know there are more questions that haven't been answered yet. But I really want to thank them; these are two of my favorite members of the community. Frederick and I go so far back. I chose this talk because it's an interesting topic, and the book, Solving the Bottom Turtle, I stalked him at KubeCon LA until he signed a copy for me; really good stuff, so look up the book as well. And can we give them a big round of applause? Thank you. Good job.