That better? There we go. So, I work on AppArmor for Canonical. I'm just going to go through what happened this year. Last year I said nothing new until we upstreamed the patch sets, because there's been a long development cycle on this updated AppArmor 3 stuff. And unfortunately, it didn't happen. There actually hasn't been a lot done besides working on those patch sets and revising them, but we'll cover what was done besides those for a minute.

We hit a lot of documentation updates; for some reason, users like that. Christian has been doing a lot of work on userspace utility cleanups and making them better; he's the SUSE maintainer for AppArmor. We have sped up our compile again a little bit by finally getting some parallel compilation going. We're getting not-bad compile times out of it now, certainly better than it was in the past.

We've started moving the userspace to git. Unfortunately, most of the developers at this time don't know git, so we're doing it slowly. We moved the policy over to git first, to get a feel for the bzr conversion, because we used to be in bzr: what we lose, what we don't lose, what we need to do. That also gives the developers some time to get used to git before we make the full transition.

We did some prototyping of a group change_profile that sets the profile for all threads in a process. Right now, when you do setcon / change_profile, whatever you want to call it, that's limited to the current task, the TID. So you do it on a single-threaded process. Unfortunately, we've run into situations where people want to do this on a thread group. The specific case that led to this was a launcher that the phone people wanted to use. Most of the stuff on the Ubuntu phone uses QML, and QML maps in tons of libraries and does all kinds of resource setup. It's really slow. So they wanted to do a sort of zygote-process type thing, like Android has done in the past, and we put a whole bunch of restrictions on them so they wouldn't run into the memory-mapping problems and everything else. What they ran into was that they couldn't do it, because they have multiple threads, and when you try to set the context then, it's a mess, right? So we prototyped some stuff for this; we're not using it right now. What it does, basically, is this: you make one call to set up the task, before it launches any of the threads that it wants in this group tracking, saying that it's going to change. That sets up a special cred, and all its threads will inherit it. Then when you do the actual call, we use our underlying ability to update the profile on tasks that are live.

There was some work on updating the user-based permission checking and caching, because there's a whole bunch of user-based daemons and such doing that kind of thing now. We released 2.10 in userspace and we're into the 2.11 betas. Most of what's happened over the last year, though, is bug fixing and revisions of the development code base.
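To make that single-thread restriction concrete, here is a minimal sketch, assuming libapparmor's aa_change_profile() and a hypothetical profile name. This is the pattern that works today: the profile change has to happen before any threads exist, and the thread-group prototype described above is what would relax that ordering.

```c
/* Sketch: the change-profile-before-threads pattern that works today.
 * aa_change_profile() only applies to the calling task, so it has to
 * happen while the process is still single-threaded; threads created
 * afterwards inherit the new confinement.  "launcher_target" is a
 * hypothetical profile name.  Build with -lapparmor -lpthread. */
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <sys/apparmor.h>

static void *worker(void *arg)
{
	/* Runs under the profile set before pthread_create(). */
	puts("worker confined by the inherited profile");
	return NULL;
}

int main(void)
{
	/* Must be done before any threads exist; on a multithreaded
	 * process this per-task change is exactly the mess described. */
	if (aa_change_profile("launcher_target") == -1) {
		perror("aa_change_profile");
		return EXIT_FAILURE;
	}

	pthread_t tid;
	if (pthread_create(&tid, NULL, worker, NULL) != 0)
		return EXIT_FAILURE;
	pthread_join(tid, NULL);
	return EXIT_SUCCESS;
}
```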
So, of what we were talking about last year: I briefly mentioned GSettings. There's actually not a ton of work in this. It isn't anything fabulous or very different from what has been done in the past for the X server or D-Bus; it's just a userspace daemon that's trusted in its domain and has been integrated into policy. Allison Lortie and some other people in GNOME have been working on a daemon. I don't know if you know how GSettings' dconf backend works: applications map the settings files they want to access into the process's address space, and then they have access to all the settings. It's nice and fast, and it's very insecure. And unfortunately, we're trying to confine some desktop stuff with Snappy and with the phone and whatever.

So how we did this, just as a proof of concept: you set up a daemon, and you update the library, or use LD_PRELOAD to get a new library, or new symbols anyway. Instead of the regular GSettings path, the library checks and finds out, hey, I'm confined now, so I'm going to go out to the daemon and let it manage things and do the permission checking, and it'll send stuff back to me. Of course, they want it to be really fast, so they need to cache a whole bunch of stuff. Instead of mmapping the whole file, they're caching everything in the application at startup, basically, so it does a whole bunch of communication over D-Bus. I think they're on D-Bus right now, or maybe they've moved to a private socket, I can't remember; it was on D-Bus at one point. None of this is AppArmor-specific except for the policy integration itself; it could be leveraged by any other LSM or any other project when this actually lands. But that was some of the work that was done.

So the big AppArmor development cycle has been all around the labeling updates, the policy namespaces, and the stacking. It's yet another namespace, right? We've heard about tons of different namespaces; this one is specifically for AppArmor policy. We have no real desire, necessarily, to see a generic LSM namespace. I don't know that it makes sense for other LSMs, and we're fine just keeping it where it is within our system, at least for now.

AppArmor namespaces are hierarchical. Each namespace can contain a separate set of policy; it's a way of grouping policy. They control what can be loaded: if a task is in a given namespace, it might be able to load policy into that namespace, but it can't necessarily load into other namespaces, especially up the tree. So anything that would be in, say, namespace 3 is not going to have any access to the system namespace. They also control visibility, so you can virtualize and hide things. If you're in namespace 3, you can see your children, namespaces 4 and 5, but you can't see the system namespace, namespace 2, or namespace 1. Pretty simple.

Most interfaces have been virtualized to the namespaces. We do have a few leaks around audit, obviously. And the policy directory right now hasn't been virtualized; there are some updates we need to make to the LSM file system, securityfs, to be able to do that, so we will land those after the fact.

We have uses besides containers for these: user-defined policy. One of the goals is that this will eventually be opened up to users (it won't be at first), so that users can actually load their own policy for their own applications, if the system decides to allow them to do that. And Snappy wants to use it for logical grouping, so it can manage its policy off in its own little space and we don't have to worry about it.

Pardon me? Yes? It is fully independent of other namespaces. You can create an AppArmor namespace and stick one set of tasks in it, and those tasks share the other namespaces with the rest of the system. So while it's fully independent right now, there are discussions about whether we should require user namespaces to always get a new AppArmor namespace.
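As a rough illustration of how that hierarchy is exposed, here is a sketch against the securityfs layout that Ubuntu kernels of this era ship. The path and the "demo_ns" name are assumptions, and this requires CAP_MAC_ADMIN plus a mounted securityfs.

```c
/* Sketch: create and list AppArmor policy namespaces through
 * securityfs.  The directory path and "demo_ns" are assumptions
 * about the interface described in the talk. */
#include <stdio.h>
#include <errno.h>
#include <dirent.h>
#include <sys/stat.h>
#include <sys/types.h>

#define NS_DIR "/sys/kernel/security/apparmor/policy/namespaces"

int main(void)
{
	/* Making a directory here creates a new policy namespace. */
	if (mkdir(NS_DIR "/demo_ns", 0755) == -1 && errno != EEXIST) {
		perror("mkdir");
		return 1;
	}

	/* Only namespaces visible from here are listed: a namespace
	 * sees itself and its children, never its ancestors. */
	DIR *d = opendir(NS_DIR);
	if (!d) {
		perror("opendir");
		return 1;
	}
	for (struct dirent *e; (e = readdir(d)); )
		if (e->d_name[0] != '.')
			printf("namespace: %s\n", e->d_name);
	closedir(d);
	return 0;
}
```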
We don't do that right now, partly because the hooks aren't there for it. So our initial implementation, and we'll talk about that in a little bit with the LXD stuff, is all manual right now. But do you have something, Mimi? We could have a generic one if it makes sense; it does not have to be specific to AppArmor. Obviously AppArmor will leverage it, and it certainly has its own set of features toward it, but we're more than willing to work toward a generic one as well. So that's interesting; I hadn't heard of that. I know Casey with Smack, and SELinux, aren't too thrilled about namespacing their policy. I'm putting words in your mouth. You're quiet on the subject, yes. So yeah, anything else in there?

So stacking is the other really big part of this labeling stuff. We've iterated on this a lot and fixed a lot of bugs, and it's pretty stable. Ubuntu has been running it for a couple of years now through the iterations, making our users suffer through some of this. Basically, tasks can be confined by more than one profile. If you think about it, the actual enforcement or confinement is defined by the dynamic intersection of the different profiles.

There's a concept that goes with this, because the stacking can cross namespaces. You can have a task with just a single profile on it. But you could actually have a task where (and this is the LXD case) there's a profile at the system level around the whole container, and then all the tasks in the container have their own profiles being applied, so we're crossing a namespace boundary there. So there's the concept, for each task, of what its current namespace is, and this is part of how we determine where tasks can load policy. The child can't see the parent namespace; it just thinks its current namespace is the root. That's where it can manipulate policy, if it has sufficient permissions. We use CAP_MAC_ADMIN in the container, or in the namespace; if you're in a user namespace, you have to have the proper permissions. We spend a lot of time fuzzing our interfaces and hardening them against possible attacks. I'm not saying everything is fixed; we hope it is.

When you do the permission checks (inter-task permission checks, anyway), they're between the labels within a namespace. So when you have a stack with multiple namespaces on it, you're only checking between the tasks' labels within that namespace set. You treat them almost completely independently at the policy level; you don't have to worry about it, the system takes care of it.

So for LXD and LXC, we've been working to integrate this, and it's been very good and helpful for flushing out the bugs. We also spent some time poking at the patches that were floated last fall for Smack namespaces. They're good, and we've played with them, but they weren't completely sufficient for what we need, or would like to have, for our policy. So when we get around to it, we will have to post some more patches. Things like unshare: we don't have any control over unshare right now. And there is a pivot_root hook right now for permissions, but what we would like is a secondary hook that allows us to do some updating of creds seamlessly. You can't really do that in the permission hooks, because obviously other LSMs or whatever might be denying things in the permission hook. But we're not going to do any of those until we get the base code up.
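Before the LXD example that follows, here is a minimal sketch of what stacking looks like from a task's point of view, assuming the aa_stack_profile() call from the 2.11-era libapparmor and a hypothetical profile name. A stacked label reads back with the profiles joined by "//&".

```c
/* Sketch: stack an extra profile onto the current confinement and
 * read back the composed label.  "extra_restrictions" is a
 * hypothetical profile; a stacked label reads back as "a//&b".
 * Build with -lapparmor. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/apparmor.h>

int main(void)
{
	/* Enforcement becomes the intersection of old and new profiles. */
	if (aa_stack_profile("extra_restrictions") == -1) {
		perror("aa_stack_profile");
		return EXIT_FAILURE;
	}

	char *label, *mode;
	if (aa_getcon(&label, &mode) == -1) {
		perror("aa_getcon");
		return EXIT_FAILURE;
	}
	/* label and mode share one allocation; free only label. */
	printf("label: %s (%s)\n", label, mode ? mode : "unconfined");
	free(label);
	return EXIT_SUCCESS;
}
```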
The big push is just to get this base code up. So, an example of what we're doing with LXD. LXD, like I said, has to do a manual setup. You have your system, and there are tasks on the system that are outside of the container. It sets up a container, and the container has an LXD profile around it. Then within the container, the container is loading its own policy, and the tasks within that container get that policy. So they're a stack, across the AppArmor namespace boundary, of the two profiles: a composition of them.

We have some really tight restrictions on how this can be used right now: there has to be a one-to-one mapping between the user namespace and the AppArmor namespace, and they have to be at the exact same level, which removes alternate use cases. You can't use user-policy-type stuff with them right now or anything like that, or even the LXD-under-Snappy grouping, which hasn't landed yet either. And they obviously need CAP_MAC_ADMIN within the user namespace as well, to manage their own policy within it. It's interesting to kick around. I broke my demo, so I'm not going to bother with that.

So what's the backlog? Really, it's upstreaming. I mean, this is it. Hopefully within the next month we'll put an RFC up so people can look at this, and we're shooting for the 4.10, 4.11 time frame, depending on feedback. It's not going to hit 4.9. We do have other backlog; this is a list that's way too long to even fit, and we could go on for pages and pages. There's always so much to do: lots of virtualization stuff for the namespace stacking and integration, tools cleanup, and whatever. Any other questions? All right, thank you. Thank you.