Hello. All right. Thank you for coming. Welcome to our talk about Apache Mesos security. We'll go into the security features introduced in Apache Mesos itself, as well as some recommendations for the surrounding ecosystem. I am Adam Bordelon, a Distributed Systems Architect at Mesosphere, an Apache Mesos committer, and a DC/OS committer. I've been specializing in security and storage in this area for almost four years now. And we've also got Alexander here, another Apache Mesos committer from Mesosphere. And, yeah, we'll get right into it. So today I'll start off with some of the Mesos security basics. This is my third or fourth talk on the topic at various MesosCons, so some of you may have seen the previous talks. We gave one at MesosCon North America last year that this is kind of an extension of. Alexander will go into some of the new and exciting security features since last year's Mesos 1.0. And then I'll wrap it up with some discussion of multi-tenancy developments that are upcoming in Mesos. So, brief motivation. You probably care about security, which is why you're here, but who else cares about security? Anybody with any sensitive data? Anybody that has untrusted users? As an operator, you can't always trust that your users are going to do what you want them to do with your system, and you can't always trust that your users are not going to mess with your other users. So if you've got personally identifiable information, you've got legal requirements; you've got operations that should only be for administrators; you may even have multiple clients that shouldn't even know that each other exists on the same cluster. But from an operator's perspective, you want to co-locate them to get the efficiency of running all those workloads on the same physical hardware.
So, getting into some of the security basics. We summarized last year's talk with the basic idea that you need to firewall off the perimeter; you need to encrypt everything you can; you need to add authentication to all the Mesos APIs as well as any others; you need to authorize any action that can modify the system or retrieve sensitive information; and you need to secure your own applications running on top of Mesos, which we described as very much do-it-yourself last year, but we've got a little more help for you this year. And then, of course, you want to isolate all the containers so that even if those workloads are co-located next to each other, they don't impact each other's performance, in addition to not being able to actually see each other. And I'll also mention some of the custom modules and hooks that you can use to extend Apache Mesos with your own custom implementations of these security interfaces. So, if you're going to firewall off the perimeter, you have to poke some holes through for the services that you care about: the Mesos master, to be able to view the whole state and the UI; the Mesos agents, to get at sandbox logs; ZooKeeper, to figure out which Mesos master is the leader; and then other system services that your operators need access to, when you don't necessarily want them SSH-ing into the cluster every time. Your users may also need to access the framework schedulers' UIs and APIs themselves, as well as any APIs and UIs exposed by the executors or tasks running on the agents. And those could run on a variety of different ports, so you may end up poking dozens or hundreds of holes in your firewall, and then that's not actually as secure as you would hope.
Especially if the services underneath are not properly authenticated and encrypted themselves. We've run into open-source clusters that have Marathon exposed publicly on port 8080, and there are Metasploit scripts that will jump in and start Bitcoin-mining tasks on those nodes instantly. So you need to make sure you firewall off access and that you're actually authenticating that access, or else scanners are just going to find it and start mining on you. So one thing that we've done in the Mesosphere DC/OS product is build an API gateway, which we call Admin Router for historical reasons, but it's basically just an nginx proxy with some configuration on top. What this allows you to do is expose only a minimal number of ports outside the firewall. You have 22 so that you can still SSH in if you really need to, and then ports 80 and 443 are the main ports through which you get into Admin Router. It has routes that can provide access to ZooKeeper, the Mesos masters, other system services, as well as frameworks running on top of Mesos. And because this API gateway is somewhat smarter than a vanilla firewall, you can actually do SSL termination at the gateway. You can do authentication, requiring that nobody can get through to any of these routes unless they've authenticated with your system, as well as some coarse-grained authorization, so you can say that only people with operator privileges can actually access ZooKeeper, or your certificate authority, or your secret store, or something like that. We've also found it valuable to create this notion of what we call public agents in DC/OS: a certain number of agents that have their own exposed IPs, on which you can run load balancers like Marathon-LB or the new Edge-LB. Those are exposed to the public internet, and the requests to them are then load balanced across different service instances.
So if you've got a web server running on 100 different agents privately inside the cluster, but you want to serve that out to the outside world, you can use Marathon-LB or Edge-LB to load balance between those. You can do SSL termination at the LB, but you're still going to need authentication, or else anybody can access that website. Which is maybe what you want if you're hosting a public website; so you don't necessarily want to require that all users must authenticate in the DC/OS or Mesos way just to get access to your web servers or other applications. I mentioned you want to encrypt everything in Mesos. You have to enable libevent and SSL at configure time, at build time, in order to actually get SSL/TLS built into Mesos. The packages distributed by Mesosphere have that built in since 1.0, but if you're building it yourself, take note of these flags. You're going to need to set environment variables to enable SSL. By default, we do not support downgrade, but if you're upgrading from an unencrypted cluster to an encrypted cluster, you'll probably stage it where you allow SSL but don't require it, and that's what the support-downgrade option does. Then once you've got all the masters and agents and schedulers and executors switched over to allow SSL, you can start requiring it across the board. You'll also need to specify a key file and a certificate file; this is how you actually do the encryption. You can verify peer certificates when they're present, or require that they're always present. And wait, what was underneath that? Oh, depth: you can also set the certificate verification depth. And then you need the CA certificate, in a directory or a file, in order to actually validate that these are properly signed certs. You can specify whatever ciphers you care about if you happen to be some sort of government entity or other organization that happens to know that some of these ciphers are not secure anymore. I don't know that personally. Maybe you do.
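To make the SSL setup concrete, here is a minimal Python sketch of the environment a launcher script might assemble for a master or agent process, using the LIBPROCESS_SSL_* variable naming convention. The file paths are placeholders, not real cluster paths:

```python
# Sketch of the SSL-related environment for a Mesos master/agent process.
# Variable names follow the LIBPROCESS_SSL_* convention; the paths passed
# in are placeholders for illustration only.
def ssl_env(key_file, cert_file, ca_file, require_cert=False):
    return {
        "LIBPROCESS_SSL_ENABLED": "true",
        # During a rolling upgrade you would set this to "true" so that
        # unencrypted peers can still connect, then flip it off once every
        # component allows SSL.
        "LIBPROCESS_SSL_SUPPORT_DOWNGRADE": "false",
        "LIBPROCESS_SSL_KEY_FILE": key_file,
        "LIBPROCESS_SSL_CERT_FILE": cert_file,
        # CA cert used to validate that peer certificates are properly signed.
        "LIBPROCESS_SSL_CA_FILE": ca_file,
        # Verify peer certificates when present; require them only if asked.
        "LIBPROCESS_SSL_VERIFY_CERT": "true",
        "LIBPROCESS_SSL_REQUIRE_CERT": "true" if require_cert else "false",
    }

env = ssl_env("/path/to/master.key", "/path/to/master.crt", "/path/to/ca.crt")
```

A process launcher would merge this dict into the process environment before exec-ing the master or agent binary.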
We disable SSLv3 and TLS v1.0 and v1.1 by default, so it's TLS v1.2, the more recent one. If you need to support any of these for backwards-compatibility reasons, you can enable them as described here. And we've recently added support for ECDHE curves, which Alexander will go into in a little while. Authenticating agents and v0 schedulers: for those of you who don't know, Mesos started out with what we call the v0 API, and with the 1.0 release we added a stable v1 HTTP API, or at least one more HTTP-like than the previous one. Going with the v0 API, you need to configure with SSL built in, and you configure the masters with an authenticator. The default is CRAM-MD5, but as I mentioned, these are modules, so you can extend this with your own custom authenticator and authenticatee modules. With the CRAM-MD5 authenticator, it's just a raw JSON file of credentials, and you have flags to require framework and agent authentication. On the agent, you have to set the authenticatee and its own credential, and v0 schedulers have to specify a credential when initializing the scheduler driver, as well as setting that same principal on the FrameworkInfo, which is used for authorization. If those don't match, you'll get an error when trying to register. A credential in this case is just a string that is the principal, the ID that you're authenticating as, plus an optional secret. Some authentication mechanisms do the secret management out of band, so you just need the principal; Kerberos, for example, might use keytabs and tokens that are on disk instead of passing those around in these messages. HTTP authentication: as of 1.0, we had a lot of these endpoints already authenticated. We recently added authentication to the v1 executor API, which Alexander will go into in a bit, but all the rest of these endpoints are authenticated. By default we do HTTP basic auth, but again, you can extend that with modules.
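The raw JSON credentials file used by the CRAM-MD5 authenticator is simple enough to sketch. The principals and secrets below are invented for illustration; the shape is a list of principal/secret pairs:

```python
import json

# Sketch of the raw JSON credentials file consumed by the default
# CRAM-MD5 authenticator. The principals and secrets here are made up.
credentials = {
    "credentials": [
        {"principal": "my-framework", "secret": "framework-secret"},
        {"principal": "my-agent", "secret": "agent-secret"},
    ]
}

def lookup_secret(creds, principal):
    """Return the secret registered for a principal, or None if unknown."""
    for entry in creds["credentials"]:
        if entry["principal"] == principal:
            return entry["secret"]
    return None

# The master would load such a file from disk; round-tripping through JSON
# here just shows the on-disk format.
serialized = json.dumps(credentials)
```

The master reads this file at startup and rejects registrations whose principal/secret pair is absent from the list.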
There are a couple of endpoints that are not authenticated, things like redirect or health, which don't expose any sensitive information and don't allow you to actually modify the state of the system, so we felt that was okay, and it's sometimes necessary to know that a node is up before you actually bother to authenticate to it. Authorizing these endpoints comes through a variety of actions, because it's not just these endpoints: in the v1 API, it's a single v1 operator endpoint, and then a lot of different actions you can specify in the message that you send to that endpoint. We have deprecated a lot of the with-role and with-principal action names in favor of more generic ones like create volume, destroy volume, or get quota, where there's an authorization object that carries various metadata that you might authorize on. Just because the initial authorization module authorized creating volumes based on the role of the volume, or destroying a volume based on the principal that created it, doesn't mean that everybody wants to authorize based on that metadata. So we try to provide as much metadata as possible, like task infos and full resource fields, and that way you can choose to authorize based on role, principal, the user the task is running as, or maybe arbitrary labels that you've tagged onto your tasks and volumes. We added a lot more authorization actions since last year. In addition to taking the v0 API's actions and extending those throughout the rest of the v1 operator API, we also added new actions for features like attaching to container input and output for debugging. We had maintenance primitives before, but they weren't authorized yet; now they are. We also have nested containers for pod-like support, and some agent-gone semantics.
We now also authorize registering agents. So instead of just authenticating the agent, we actually allow you to specify which principals are allowed to register as agents, so you don't end up with somebody who just happens to have a scheduler credential being able to use it to spin up a new Mesos agent and claim tasks and resources. You can get the whole list of authorization actions in authorizer.proto; they're all listed there, and if you find anything that you think needs authorization, file a JIRA and we'll look into it and make sure that we can clean that up and prevent any unauthorized actions. Last year, we talked about how application security was pretty much do-it-yourself. Now that we've got secrets first-classed in Mesos, which Alexander will talk about in a minute, you can actually distribute a lot of these credentials and certificates and keys as secrets, so that when your task runs, it already has the credential it can use to authenticate with the Mesos master for framework authentication, and it already has certificates it can use to communicate over an encrypted channel with the Mesos master or other components in the system. And if you're storing state in ZooKeeper and you want to protect that state from unauthorized access by other tasks and bad actors, you can use znode ACLs, which with digest auth is basically just a shared secret, and so you want to be able to pass that secret around so that even if your task dies and spins up somewhere else, it still has access to its znode, for example for leader election for your framework scheduler, or any other state you're trying to store. Similarly, you can use secrets for some of the other encryption features that you need to enable. You're going to have to do network segmentation yourself; we're not building all of that in automagically for you by default. On-disk encryption: do it yourself.
Mesos does allow you to specify which Linux user you're running your tasks as, and you can use that, as well as providing file-system images in your containers, to prevent other tasks from hopping around and accessing your sandbox. But you need to make sure you have a unique Linux user per application, because if everything is running as the nobody user and everything is readable by the nobody user, then every task can read every other task's sandbox. So be aware of that. And then if your tasks themselves expose UIs and APIs, you're going to have to encrypt and authenticate and authorize any access there yourself. If you have ideas for how Mesos can make this easier for you, we welcome them, but for now we've focused on securing the Mesos platform itself and providing any primitives we can come up with that make this easier for you. Container isolation is incredibly important for keeping tasks inside their containers and keeping them from accessing other containers, as well as restricting the resources they're using. We had several isolators before, and we've added quite a few more in the past year: an isolator for appc images, blkio, cpusets, Linux capabilities and rlimits, and the new volume/secret isolator for file-based secrets. And I mentioned extending Mesos with custom modules and hooks. I've got here the list of all the module interfaces and hooks that Mesos currently provides, and I've marked in bold the ones you might want to use if you're building your own custom security interface for your Mesos platform, much like we've done with Mesosphere's Enterprise DC/OS. So you can have a custom authenticator module pair for the v0 API, and a custom authorizer, so that you don't have to hand-specify ACLs on every node. If you've got a custom authorizer and authenticator, you can have a central identity and access management store from which all the nodes retrieve ACLs and validate credentials.
We've got a lot of these different hooks. A lot of them are more relevant for the Docker containerizer, because with the Mesos containerizer we have this isolator module, which is kind of a misnomer: it's actually watching the entire container lifecycle, so you get access to perform modifications before you launch the container and right after the container is launched, when the container exits, and you can monitor it periodically. We've found that the isolator module is by far the most used, most extensible, most flexible module, not just for security, but for pretty much anything you want to inject before, during, or after a container's lifecycle. Then we've got the HTTP authenticators, which can help you override the basic auth scheme. And coming up in Mesos 1.5, we've actually built in an HTTP authenticator as well, so your v1 schedulers, as well as anything else you're building that might want to access the Mesos HTTP API, can include that as part of libmesos and use it to automate the authentication with the master and agent APIs. And then we've got the secret generator and secret resolver modules, which are new with the first-class secret support we've added to Mesos recently. So that was kind of a breeze through all the things we've done in Mesos security in the past, with some notes on recent additions. And I'll hand it off to Alexander to talk about some of the more advanced features we've added recently. Hey, guys. I will be focusing on the most important things we have done over the last year. One of the most important is executor authentication. The issue we were tackling here is a serious security problem that happens when an executor is launched inside Mesos. To understand it, I will explain how it used to be done, and then I will tell you what we did to fix this problem.
So originally, an agent launches an executor, usually in a container, and it injects the framework ID and the executor ID of this newly-launched executor as environment variables. The executor launches, the container is initialized, and then it uses an API to register with the agent. This registration message will include, again, the framework ID that was given to it, the executor ID, and a PID, which is just an identifier for the executor: usually a combination of an IP and a port where the executor is listening, part of Mesos' secret internal API. Well, not that secret, but it's a private way of communicating. The problem here is that the framework ID is a given: if you have a long-running framework, you can very easily get that framework ID. And at the same time, the executor ID is chosen by the framework, and some frameworks, for example, just use an ever-increasing integer for the executor ID, so you can easily guess what the next executor ID is going to be. And if you do that, anyone can claim to be that executor, provided they register before the executor launched by the agent does. So we have a fake executor who knows the framework ID, guessed the executor ID, and just provides a PID, whatever it is, because the agent really doesn't verify that. Once the fake executor is there, the real executor will attempt to register. It won't be allowed, because the agent will say, hey, I already know that executor. And then the fake executor can just get all the information that was intended for the original executor: task definitions, secrets, et cetera. So what we did was assign a unique signed token when we launch the executor. It's created with a secret that you pass to the agent, which the agent uses to sign every token. So when it launches the executor, it will again give the framework ID, the executor ID, and this signed token as environment variables to the executor.
So now it's much harder for anyone to claim to be the executor that was launched. When the executor registers, it passes the same values plus the PID, and then the agent can say, hey, I did sign this token with this information. So we not only verify that the token we gave it is the correct one; the token also carries signed information inside, which the agent can verify as well. So how did we do that? We use JWT-based tokens, because we can put anything we want in the payload. What we add is the framework ID, the executor ID, and the container ID; these three elements, as of the moment, are put in the JWT. We sign it with HMAC-SHA256, and we use HTTP authentication with the bearer scheme, which means this new feature is only available to executors that use the HTTP v1 API. So if you're still using v0, I highly recommend you move to the new API. You can enable it when you launch the agent by turning on authentication for HTTP executors, and you will need the executor secret key, which is a blob of data used to compute the token signature. It can be a path to a file with this blob of data, or a base64-encoded 256-bit number. So that's executor authentication. One thing: this is designed to be used as a module, so you can override the mechanism we use. But some parts of the code still expect to be able to read a JSON payload that has the framework ID, executor ID, and container ID, so I guess we still have to fine-tune this authentication so it's completely overridable by you guys, module writers. The other thing we really, really worked on this last year was having secrets as first-class citizens. The first thing that may pop into your head is: what is a secret? If you didn't attend the talk an hour or two ago, I will try to summarize, though I won't give you the fancy demo that was given before. A secret is any sensitive information.
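To make the token scheme concrete, here is a rough Python sketch of an HMAC-SHA256-signed JWT carrying those three IDs. This is not Mesos' actual implementation, and the claim names (fid, eid, cid) are invented for illustration; the point is the header.payload.signature structure and the fact that only someone holding the agent's secret can produce a valid token:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_executor_token(secret: bytes, framework_id, executor_id, container_id):
    """Sketch of the token the agent hands to an executor at launch:
    a JWT header, a claims payload, and an HMAC-SHA256 signature
    computed over both. Claim names here are hypothetical."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({
        "fid": framework_id, "eid": executor_id, "cid": container_id,
    }).encode())
    signature = b64url(hmac.new(secret, f"{header}.{payload}".encode(),
                                hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

def verify_executor_token(secret: bytes, token: str) -> bool:
    """The agent recomputes the signature and compares in constant time."""
    header, payload, signature = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{payload}".encode(),
                               hashlib.sha256).digest())
    return hmac.compare_digest(signature, expected)
```

A fake executor that merely guessed the framework and executor IDs cannot forge the signature, so its registration is rejected.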
So you can have your passwords, your secure shell keys, certificates, API keys, and the important thing about secrets is that they should only be accessed by authorized users. You, for example, don't want your task definition to carry your passwords in clear text, because then anybody can read them and probably connect to your database and play with it. For Mesos, the central structure that manages secrets is the Secret message; it's a protobuf, and a secret can be one of two things: a reference or a value. Reference secrets are just a way to describe how to get the secret: they must have a name, and optionally, because on secret stores each secret is like a hash map of key-value elements, you can specify the key stored under that secret. It can also be a value, which is just the unencrypted contents of the equivalent reference secret. That's what a secret is. Now, how we fetch secrets is based on an interface called the SecretResolver. It has just one method, resolve, and as you can guess, the parameter is a secret reference. As I mentioned, the name is the only required field, the key is optional, and it returns a secret value. We usually provide a default implementation that is more or less usable for each of these interfaces in Mesos, but I would never recommend using the default secret resolver, because it just assumes that each secret that comes in already has its value set, unencrypted, and it just gives it back to you. Please, if you're going to play with this, create your own module and connect it to your safe secret store. Right now there are many, and we didn't want to force a dependency in Mesos; that's why we didn't provide a safe way of dealing with secrets ourselves. Also, we are not secret-store experts, so it's up to you guys. So, how does this work? Imagine you have these two secrets. One is a certificate.
So it's under the name certificates-api, and the key is webserver-cert. And this won't help you, because the certificate is one I just created, so don't expect to break into Mesos. Plus, it's a 128-bit key, pretty useless. And then you have the other secret, the database credentials, which just has a username and a password so you can connect to your database. So we have our fancy web API that we want to be able to launch in a container with some secrets. Sometimes you want your secrets to be available as environment variables, as in the case of the password and the database username. The way to do it is, when you're writing the task definition, in the environment section... I have an error here on the slide. The environment needs a variable described similar to this, and I use YAML so I can remove all the brackets and we can focus on what is important. We have a name; a type, which is SECRET; and the secret message will say it's a reference with the given name and the given key. So what happens when you launch a task which has this variable? The agent receives the task info and passes it to the environment secret isolator, which is enabled by default, so you never have to set it up in the isolators flag. This calls your implementation of the SecretResolver interface, which connects to the secret store and resolves your secret, and then the isolator is in charge of putting the value of the secret in your environment, and then your task will be able to connect to your database without problems. The thing is, you don't always want your secrets to be environment variables. Sometimes you want them to be files that you can read, like the certificate. In that case, the procedure is very similar.
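Rendered as plain data, the secret-backed environment variable just described might look like this: a JSON-ish sketch of the Environment.Variable and Secret protobuf messages, with made-up secret name and key:

```python
# Sketch of the environment section of a task definition that pulls a
# database password from the secret store. This mirrors the protobuf
# structure (Variable -> type SECRET -> Secret -> type REFERENCE);
# the secret name and key are invented for illustration.
environment = {
    "variables": [
        {
            "name": "DB_PASSWORD",
            "type": "SECRET",
            "secret": {
                "type": "REFERENCE",
                "reference": {
                    "name": "database-credentials",  # name in the secret store
                    "key": "password",               # key within that secret
                },
            },
        }
    ]
}
```

At launch time the environment secret isolator resolves the reference and the task sees only a normal `DB_PASSWORD` variable; the clear-text value never appears in the task definition.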
Now instead of creating an environment variable, you will create a volume, and you define this volume by giving the container path, and the source will now be a secret. And this secret, in our case, will be the certificate I just showed you before. For this one, sorry, it's important that you enable the volume/secret isolator. That one is not enabled by default, so when you launch your agent, you have to enable this isolator. And it works pretty similarly to before: the volume/secret isolator uses the secret resolver, contacts your secret store, receives the secret, and then mounts a temporary file system volume at the path you gave, with the certificate you wanted written as a file there. Important thing: the file is loaded in a temporary file system. You can modify it if you want, but these changes won't be transmitted back to the secret store (fetching the secret is a read-only operation); you can do whatever you want with this file, it's just a file in your file system. We have a third kind of secret we support, and with this we wanted to solve the problem of how to download images from private Docker registries in a secure way. You could pass a Docker config to the agent when you launch it, or you could put your registry credentials in your task definition, but you really don't want to do either of those, because they are readable. So we decided: okay, let's put the Docker configuration in our secret store and let our secrets API fetch it for us. This, of course, puts some constraints on how the secret has to be formatted. It needs to be a Docker config file (I think that's pretty obvious), it needs to be formatted as JSON, it needs to be UTF-8 encoded, and, of course, it needs to contain the credentials for the registry. So this basically works.
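The file-based variant is the same idea expressed as a volume. A sketch, again mirroring the protobuf structure with invented names and paths:

```python
# Sketch of a container volume that materializes a certificate secret as
# a file inside the container (requires the volume/secret isolator to be
# enabled on the agent). Names and paths are invented for illustration.
volume = {
    # The secret is written into a tmpfs, so the task may modify the file,
    # but changes are never propagated back to the secret store.
    "mode": "RW",
    "container_path": "etc/certs/webserver.crt",  # hypothetical path
    "source": {
        "type": "SECRET",
        "secret": {
            "type": "REFERENCE",
            "reference": {
                "name": "certificates-api",
                "key": "webserver-cert",
            },
        },
    },
}
```

The isolator resolves the reference via the secret resolver and mounts the resulting bytes at `container_path` before the task starts.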
Again, when you're launching your task, you will pass the Image message, then the Docker message, and in the Docker message the important field is the optional config secret. If you give this optional config secret, Mesos will use the reference in the secret to contact your secret store and retrieve what it needs to fetch your image. How does it work? Pretty similarly to everything before, except now it doesn't go through an isolator but through the provisioner. The provisioner gets the config file and the keys, and then contacts your Docker registry based on the configuration it just downloaded from the secret store. Once it contacts the Docker registry, it gets the container image and then launches your container, which I think is pretty cool. Constraints we have: the secrets API is only available for the Mesos containerizer (sorry about that if you're a Docker containerizer user), and the image-pull secret is only available for Docker images that run on the Mesos containerizer. So that's pretty much it for secrets. Now I will talk about elliptic-curve cryptography support. The problem here was an oversight: you have to make an extra API call when you instantiate SSL. So we corrected this oversight. Now we enable ECDHE, elliptic-curve Diffie-Hellman ephemeral, which is one way key exchanges are done in TLS. The important thing is that you get equivalent protection with smaller keys. You know the strongest security per bit is given by symmetric encryption: you get the best security with the smallest keys, and the most important problem there is that you have to share your key. Then you have public/private-key support, which handles the initial part of a TLS connection. Traditionally we use RSA with Diffie-Hellman, and not so long ago, like 10 years ago, we started working with elliptic curves. The cool thing is that a 1024-bit RSA/DH key is about as secure as a 160-bit elliptic-curve key.
And if you go bigger, say a 15360-bit RSA/Diffie-Hellman key, it's as secure as a 521-bit elliptic-curve key. This not only reduces the size of the messages you pass when you negotiate a connection, it also reduces the amount of cryptographic operations your CPU has to do. So in that sense, elliptic-curve cryptography is a really interesting topic. Elliptic curves were also used in proving Fermat's Last Theorem, so they're very interesting if you're into math. I recommend you think about using them. In order to use this, you need to set the libprocess SSL key file, and for this you need a special key: the traditional RSA keys won't work. You use the same key-generation tooling, but with a different set of parameters, and then you get an EC key. And then you need to change the libprocess SSL ciphers, because the default ones don't have ECDHE enabled. We support all the ciphers you see right behind me. The important thing, very important, is that the key and the cipher must match. If you want EC keys, you need to include at least one of these ciphers. Otherwise, you will get a bunch of errors saying the connection couldn't be initiated because the handshake couldn't be completed. So, very important. And I think that's pretty much what we have for you in new features. We are definitely working hard on making Mesos as secure as possible. So I will ask you guys: test it, and if you find a bug, notify us. We take this really seriously, and we're trying to make Mesos more secure every day. So there you go. All right. And on the topic of making it more and more secure, I'll talk briefly about some multi-tenancy concerns. If you're a single user using a Mesos cluster, you have no problem seeing everything.
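Those size comparisons line up with the commonly cited NIST comparable-strength table, sketched here as data (bits of symmetric security mapped to the RSA/DH modulus size and elliptic-curve key size of roughly equivalent strength):

```python
# Commonly cited NIST comparable key strengths: bits of symmetric
# security -> roughly equivalent RSA/DH modulus size and EC key size.
COMPARABLE_STRENGTH = {
    80:  {"rsa_dh": 1024,  "ec": 160},
    112: {"rsa_dh": 2048,  "ec": 224},
    128: {"rsa_dh": 3072,  "ec": 256},
    192: {"rsa_dh": 7680,  "ec": 384},
    256: {"rsa_dh": 15360, "ec": 521},
}

def ec_equivalent(rsa_bits):
    """Return the EC key size of comparable strength, or None."""
    for row in COMPARABLE_STRENGTH.values():
        if row["rsa_dh"] == rsa_bits:
            return row["ec"]
    return None
```

So the 1024-bit vs 160-bit and 15360-bit vs 521-bit pairs from the talk are the first and last rows of this table.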
But if you're a 100,000-person organization spread out across different departments and teams and projects, you may have legal requirements that your home-mortgage department can't access your stock-investment department. You may have all sorts of requirements that sales shouldn't see what engineering is working on and the upcoming releases. And so you often end up with a hierarchical organization, maybe split between different environments, different departments, different teams, different projects. And similarly, you're going to want to partition your resources and your tasks in a corresponding manner. So in the past year, we've introduced hierarchical roles to Mesos. You may remember roles as a mechanism for partitioning resources in a rather flat manner, where each framework registers with a single role and you can reserve resources or set quota for a particular role. That gave you a flat namespace where each framework had one role. You could theoretically have multiple frameworks share a role, but they'd have to work very closely with each other to not step on each other's toes. But now you can have hierarchical roles that match that same organizational hierarchy, so you could have a framework in the dev/sales/app project, or a framework in the test/sales/app project, or any of these. And you can set quota at every level of this hierarchy. So top-level administrators can say, okay, I want sales to have this many resources and engineering to have this many resources, and within that, they don't have to care how it's distributed. Then the head of engineering says, okay, I've got the front-end team, the back-end team, the interns, and R&D, and splits things up there. And then within the front-end team, maybe you've got different projects and you want to split up the resources there as well. So you can set quota at every level here.
And we do have validation that quota lower in the hierarchy does not exceed the quota of its parent, so you're actually taking 100% and distributing it across all of the nodes in the hierarchy. We've also added reservation refinement. A top-level administrator could say: these resources on these particular nodes are reserved for the engineering department, and I don't care what they do with them, but they're theirs now. Sales is never going to be offered them; they're dedicated to engineering. And then an operator, or even a framework within engineering, could take that and further refine it: okay, now that I know I've got maybe this disk resource that's available for engineering, I'm going to take it and use it for this particular application. And once I've refined the reservation down to my node in the hierarchy, if my task dies, I know those resources are going to be offered back to me and not just to anybody in engineering. So this gives you flexibility to partition your resources between the different frameworks, teams, and projects, however you've organized your role hierarchy. It's also important to note that a single framework can actually register with multiple roles, anywhere in the hierarchy. So you could imagine a framework that organizes its own applications within different folders, and you could have those folders map to hierarchical roles. In that way, you can manage the hierarchy within your application and all the projects and tasks running within it, and map that onto the role partitioning of your resources. A second way you can use this namespacing is authorization namespacing, to control which users can access which tasks.
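To make the refinement idea concrete, here is a sketch of what a refined reservation on a resource might look like: each entry in the reservations list refines the one before it, from the parent role down to the child. The roles, principals, and sizes are invented for this example, and the exact field layout should be checked against the reservation documentation for your Mesos version.

```shell
# Hypothetical example: a disk resource first reserved for "eng" by an
# operator, then refined down to "eng/frontend" by a framework. The
# refinement chain means that if a task dies, the resource is offered
# back to "eng/frontend" rather than to all of "eng".
cat > refined-disk.json <<'EOF'
{
  "name": "disk",
  "type": "SCALAR",
  "scalar": {"value": 1024},
  "reservations": [
    {"type": "DYNAMIC", "role": "eng",          "principal": "ops-admin"},
    {"type": "DYNAMIC", "role": "eng/frontend", "principal": "frontend-framework"}
  ]
}
EOF
```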
If you have Alice, who works in sales, and Bob in engineering, it's a pretty common setup that you'd want to make sure Alice can't see Bob's tasks and frameworks, let alone modify them or start tasks on those resources. So you can do authorization namespacing based on these roles as well. We've been reworking the role- and principal-specific parts of a lot of the authorization actions so that we can move closer and closer to doing authorization namespacing. You can similarly do secret namespacing based on the same kinds of hierarchical roles: you have a task that's running in a particular hierarchical role, and you have a secret that's namespaced a certain way, so you can say that only these tasks, or only these users, can access these secrets. You can tie that into permission management: when you're setting up ACLs, the namespace is going to be a big part of what permission you're managing. So not only can this user or this framework launch tasks, but this framework can launch tasks only within this particular namespace. You can do chargeback accounting to know which resources were used by which projects, departments, and teams, and even tie in naming and discoverability. If you have two different teams that both want to run something called Spark, how do you make sure they're uniquely named? Namespaces. And that's it for what we've got to talk about today. We're going to open it up to Q&A. There were a couple of security talks already today and a couple more relevant ones coming later this afternoon, as well as the town halls, which I encourage you all to visit. But yeah, we'll open it up for any questions you may have. I know this is a complex topic, and I'm sure you have some deep questions.
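To sketch how role-based ACLs of the kind described above might look, here is a minimal example in the JSON style Mesos accepts for its ACLs configuration. The principals and roles are invented for illustration, and a real deployment would carry many more rules.

```shell
# Hypothetical ACLs: alice may only register frameworks under the "sales"
# role, bob only under "eng"; with permissive=false, anything not
# explicitly allowed is denied.
cat > acls.json <<'EOF'
{
  "permissive": false,
  "register_frameworks": [
    {"principals": {"values": ["alice"]}, "roles": {"values": ["sales"]}},
    {"principals": {"values": ["bob"]},   "roles": {"values": ["eng"]}}
  ]
}
EOF

# Handed to the master, e.g.:
#   mesos-master --acls=file:///path/to/acls.json ...
```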
Yeah, I'm a bit skeptical about the way the secrets are handled, because in the secret resolver there's no context information regarding a secret; there's just the secret itself. And in your previous slide, you talked about namespacing secrets. My question is, if I understand correctly, the secret resolver runs as a normal process and has no context information. So how can the namespacing be enforced? It's not possible, in my opinion. I don't know.

The thing is, as it is right now, the secret resolver is just an interface, and it just takes a name. So pretty much all this namespacing and related logic is left to the person who implements the interface, which we don't do. We just design the interface, and it's up to Mesos users to implement it and configure it as they see fit.

My issue is, if I understand correctly, the secret resolver is stateless, so it has no context information. So it cannot distinguish callers, which means any task using the secret resolver can access anybody's secrets. My point is, if we really want to split secrets, the secret resolver has to have at least the user the task is running as, and so on.

I believe the secret resolver also has access to the task metadata, so you know the task ID and any labels associated with it. So you can tag that onto the task, and you also have minimal metadata about the secret itself: the name could be a hierarchical path. So you can have namespacing on the secret itself as well as metadata about the task's namespace, and you can match them, so that only tasks in namespace foo/bar can access secrets in namespace foo/bar. You can tie them together that way, but there's certainly room for improvement.

Thank you. We'll be outside at the Mesosphere booth if you have further questions.
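The matching described in that last answer could be sketched like this. This is a toy stand-in for what a custom secret-resolver implementation might do; the function name and namespaces are invented, and a real resolver would be a module implementing the Mesos interface rather than a shell script.

```shell
# Toy sketch: allow a secret only when the task's role namespace is a
# prefix of the secret's hierarchical name.
allow_secret() {
  task_ns="$1"      # e.g. the task's role, "foo/bar"
  secret_name="$2"  # e.g. a hierarchical secret path, "foo/bar/db-password"
  case "$secret_name" in
    "$task_ns"/* | "$task_ns") echo allow ;;
    *)                         echo deny  ;;
  esac
}

allow_secret "foo/bar" "foo/bar/db-password"   # prints "allow"
allow_secret "foo/baz" "foo/bar/db-password"   # prints "deny"
```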