Thanks, everybody, and welcome to the folks out in internet land. Glad to have you with us in addition to the folks here face-to-face. So my name is not Ron, I'm actually Ann, just letting you know that we switched here. And I'm going to kick us off and provide a little bit of background on what we're planning to talk about today. In essence, what we're looking at is a set of profiled practices for how you should manage your authentication from end to end. We'll do a quick introduction to these profiles, these practices. Then we'll hear a case study from Chicago about what they learned implementing these practices and how they're using them. And then we'll give you a little bit of background about how you can learn more. There's a whole community out there working on this, so if you read through the profiles, you have questions, you're looking at strategies for bettering your passwords, this is the place to be. So, brief intro. The basic premise behind these practices comes out of the identity federation context. If you're familiar with the InCommon identity federation, there's a split between who does the authentication and identity management portion and who does the actual service providing and authorization. The campus traditionally is the identity provider: it issues the credentials and manages the authentication. But it's the service provider, which may be a government organization like NIH, or it might be PeopleAdmin, for instance, that is actually incurring the risk of the authentication transaction you're managing on your campus. The whole profile idea came out of the federal government and HSPD-12. I won't go into all of that, but you may be familiar with some of the work NIST has done. In essence, what they've come up with is a set of practices in SP 800-63. I'll tell you a little bit about that later.
So the campus issues these credentials, and the individual then accesses the service provider. The service provider needs to trust that the campuses have done due diligence; they need to know that the risks of the service they're offering to the campus have been addressed. But as you can imagine, if I have a relatively low-risk service, say you're just accessing a minimal data set that has no PII in it and is freely shareable to everyone, you may not, as a service provider, want to require the campus to have a really strict level of security on the authentication. The actual implementation and the cost the campus would have to incur to support your security needs really don't match, right? And so you would have very few identity providers actually accessing your service, because it's not cost-effective for them to do so. So there's a balance between the risk a service provider carries and the cost and effort the identity provider has to spend to improve its practices to address that risk. So I'm going to tell you a little bit about what's in the InCommon assurance profiles first, and then we'll go into some of the background and why I think this might be a good thing for you to take a look at. These eight sections really make up the Identity Assurance Profiles. We also have another document, a framework, that describes the trust and how this all fits together, but the profiles are really where the meat is. So as you can see, the basis of trust in an identity assurance federated transaction has a business and policy aspect to it. We have to make sure the organization is who they say they are, right, before we even get to the individual. We have to make sure that the registration and identity proofing of the individual is sound.
We have to make sure that the technology you use to do your authentication, the actual token, is a good one, that the approach you're using is sound, and then of course that how you distribute it to the individual is also solid. The actual authentication process, the transaction, has to be well thought out. A key piece is how you make sure that individuals are kept separate in your identity management system and how you manage that relationship. The information you convey about that person to the service provider has to be solid as well, and then of course your technical environment, how you protect your network and your systems physically, for instance, is also a component. So as you can see, this goes from inception, the creation of the account, to the actual decommissioning of that credential. It goes from end to end; I guess you could call it the authentication lifecycle. So you might say, well, this is great, but these profiles are password-based. Why should I really care about them? Don't we want to use multi-factor? And I would say, yeah, I think multi-factor is absolutely the way most folks are going. It makes a ton of sense. But you're going to have passwords around for a while. You probably won't have multi-factor rolled out to every service and for every individual for a while, and you need to look at the services and the data you're still protecting with passwords. There may be some really good reasons to apply best practices to how you handle authentication. Also, not every risk you're dealing with with respect to passwords is phishing, which is what multi-factor tends to address. There may be issues like fraud, like issuing the credential to the wrong person because you didn't identity-proof. So there are other risks you incur if you're not looking at the full gamut of your authentication service. So what are your choices there?
Your choices are: use stronger credentials where you can, like multi-factor, and improve the passwords until you no longer need them. So I just want to give you a little bit of background that will hopefully make you feel all nice, warm and fuzzy about the InCommon profiles and where they come from, because you may not know what these are. They really have quite a substantial provenance that is meant both to help you implement and to ensure that you're doing the right thing. You've probably heard of the 800-63 e-authentication guideline. It was written for the federal government by NIST. It's complemented by OMB M-04-04, which is really a risk analysis methodology that OMB put together. The two came out of the HSPD-12 directive, which came out of 9/11, and they were meant, in essence, to increase security, verify identity, and raise the security level of transactions for federal services federating with each other. So these are federal specs meant for federal agencies. But the U.S. government came out and said, you know, we don't really want to be in the authentication business for things like consumer credentials. The IRS does not want to manage your authentication to its services. They really would like to federate; that is their long-term vision. And so they put together an organization under the CIO Council called the Federal Identity, Credential, and Access Management subcommittee, FICAM. The F is for federal; the rest is ICAM. You can think of what they're doing as an API for trust. They're taking 800-63 and saying, we understand this is written for the federal government, but we want to put it out there and wrap it with some actual trust framework requirements, like verifying the business, right? 800-63 has nothing in there about business requirements.
It doesn't have much in there about technical and facility controls. It has nothing about policy, about how often you renew and verify, for instance, that what you're doing to support their requirements is still true, still valid. So FICAM put a wrapper around it and went to the community and said: community, we would like you to apply as trust framework providers; develop your spec that is comparable to 800-63 and we will assess whether it's comparable or not. So InCommon, of course, acting on behalf of higher education, because that's our gig, right? We know that higher education has very strong ties with the federal government. We already federate with NIH and NSF and over 40 services, and the federal government is obviously a big grant partner as well as a compliance partner for us. So we know the federal government is a really key service provider for us, and we said, yeah, let's do that, let's write one of these specs. So we did. We wrote the Identity Assurance Assessment Framework, which describes our whole framework in terms of how you get approved, defines all the terminology, things like that. And then we wrote a document called the Identity Assurance Profiles, which is equivalent to NIST 800-63 level 1 and level 2. I should say comparable to, right? Long story short, we then went through a full audit of our profiles under the FICAM program and came out the other side with a nice stamp of approval. So these practices really are written for higher ed, because they are not 800-63. Those of you who are well versed in that will read them and understand that immediately. They are written with intent, not specific requirements, because we didn't want to go through and change them constantly, chasing technology developments. We wanted to leave it open so that campuses could come to us and say, I really like these requirements, but we're going to do it this way.
And here's the risk assessment that we did and here's why we should be able to do it that way, right? That's called an alternative means. And then we have an Assurance Advisory Committee, made up of service providers, relying parties, and auditors, that reviews those. Once alternative means are accepted, they're added to the specs. So the spec grows and morphs depending on what the community is interested in doing and what works for you. Now, it obviously has to comply with the spec, but this makes the spec less rigid and more flexible with respect to implementation, which is what we're trying to get at. So I guess I would end my section, before I turn it over (I'm the potatoes, he's the meat), by asking you a question. When you're doing your authentication, designing your processes, looking at your end to end, what processes and what practices are you using for guidance? If you have a breach and your president comes to you and says, how do I know we've been doing due diligence? You can hold up these practices, approved by the federal government, written for higher education by higher education, and say: these are the practices we're using. Good thought. So, my colleague Ron Thielen from the University of Chicago. Thank you, Ann. Apparently EDUCAUSE could only afford one lavalier mic, so I'm going to be tethered to the podium. If I wander off and you can't hear me, feel free to throw something at me. Just be aware, if I see you dozing off, I might toss it back at you. So while Ann started out at the 50,000-foot level, I'm going to dive down a little bit deeper into the actual technical challenges that we faced at the University of Chicago. Hopefully I won't nose-dive into the ground, but you can help me there if you see me going way off the rails. You could also move closer if you feel like it. There are plenty of seats up here.
We could actually turn this into a round table if you wanted to, if it weren't for the internet audience. So, if you were at the 11 o'clock general session this morning, you saw a slide that said compliance is not security, and I want to reinforce that a little bit. Sometimes you implement a control because you want to achieve compliance, and sometimes you do it because you're trying to improve security. Hopefully they're both aimed at the same target, but that's not always the case. What I have found in working on InCommon Silver compliance at the University for the past five, going on six, years is that trying to comply with the standard has led us to see several different ways in which we could improve our security, and I'm going to talk about some of the ones we're working on right now. If you go to InCommon and look at the Identity Assurance Assessment Framework, which is a lot of words describing identity assurance, the process, and what it's aimed at, it doesn't actually contain the profiles themselves; there's a separate document for that. But one of the things that's really useful in the assessment framework is Appendix C, defined terms, because you have to be familiar with, say, the difference between an identity provider and a verifier, and the role that different pieces of technology play in your own environment. So some of the things I'm going to be referring to: the identity provider, which in our case is Shibboleth; it's the thing that actually makes SAML assertions. The credential store is any place where you're storing the authentication secrets that the identity provider relies on. And the verifier is the thing which actually verifies that the authentication secrets are correct; in our case that's LDAP. In the latest version of the identity assurance framework and profiles, they introduced a couple of new terms: protected channels and approved algorithms.
And these have been the bane of my life for the last two years, because the wording used to be "industry standard," which was a lot easier; industry standard could mean a lot of things. Protected channels refers to approved algorithms, and approved algorithms has a very specific meaning. There's a little bit of leeway in that InCommon could approve an alternative means, as Ann alluded to, which satisfies the approved algorithm requirements, but they haven't done that yet. So you really have to fall back on what NIST says is an approved algorithm, and this is talking about hashing and encryption algorithms. Why do these terms cause problems? Well, one is that if you look at the NIST documentation, you'll see that, for example, SSL version 3 is no longer adequate; you have to move to TLS. So we had to change our services to use TLS. We need to use approved algorithms and protected channels for things like the web page where users manage their accounts, and the web page the identity management staff uses to manage the services. Sysadmins SSHing into the servers and the identity management systems have to be using appropriate algorithms to protect their SSH sessions. The Java libraries that we use to support some web services in our environment needed to be upgraded. Shibboleth and Grouper required some changes. Mostly it was just a lot of grunt work. There's one aspect of this, which I'll talk about towards the end, that we're not going to exactly fix, but we're not exactly required to either, and I'll explain why. The biggest area where protected channels and approved algorithms have caused us headaches is Active Directory.
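As a small illustration of the SSLv3-versus-TLS point above, this is roughly what enforcing a TLS floor looks like in application code. The Python `ssl` module is used purely as an example of the idea, and the exact version floor you need depends on current NIST guidance; this is a sketch, not a statement of what any of our services actually run.

```python
import ssl

# Build a client-side TLS context that refuses SSLv3 and early TLS.
# TLS 1.2 as a floor is an assumption of this sketch, not a profile mandate.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3, TLS 1.0/1.1
context.check_hostname = True                     # verify the server's name
context.verify_mode = ssl.CERT_REQUIRED           # and its certificate chain
```

A context like this would then be passed to whatever library opens the connection, so a peer that only speaks SSLv3 simply fails the handshake instead of silently negotiating a weak channel.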
So Internet2 and InCommon, and before that the CIC, have had a couple of different iterations of working groups looking at the issues with Active Directory in supporting the InCommon Silver standard and what the implications are, because pretty much all of us have Active Directory in our environments, and there are issues. We just finished another iteration of the Active Directory Silver Cookbook. I worked on both the first version and the second version. It took us a full year to get this new version out, because there was a lot of parsing of the spec that we had to do, and finding information can be difficult. Even though the spec says these are the approved algorithms to use for encryption, finding out from Microsoft which algorithms they actually use in the different parts of their domain controller and Active Directory architecture was very difficult. Even though we had Microsoft on the calls with us sometimes, it was still difficult prying out the exact information about, well, where is RC4 used versus SHA-1 versus AES versus whatever. So when you go to look at the AD Silver Cookbook, and I encourage you to, take your time digesting it and understanding how it fits your environment. This is the slightly modified identity management functional model. You can find the original in the assurance framework document, but we modified it just a little bit to include the box in the lower right-hand corner that says "Active Directory verifier." Let me explain what that means. Active Directory in our environment is not the verifier that Shibboleth uses. Shibboleth is the IDP, the identity provider; that's a key piece. It uses LDAP as its verifier.
The issue for us is that Active Directory is still a verifier, and Active Directory has the same authentication secrets that the IDP relies on, because everybody in our environment has one network ID and password that they use regardless of whether it's LDAP- or Active Directory-based authentication. So you have to understand the role that Active Directory plays in your environment. Is it a verifier, or is it the verifier for the IDP? Different requirements in the profile may apply differently depending on how you answer that question. Some universities actually have separate credentials, so your Shibboleth user ID and password may be different from your Active Directory identity. In that case, a lot of what I'm going to talk about may be off the table for you. A lot of us, though, have moved in the direction of a single user ID and password for everything. So if we look at Active Directory, these are the challenges we faced. Our LM compatibility level was set to 4. What that means is that even though, according to the documentation, NTLMv2 is preferred, NTLMv1 is still accepted. And I say "according to the documentation" because you would think that if a client could do both and one was preferred, that's the one it would use. It turns out that's not always the case. Sometimes clients get lazy and just use the first one that works. So even if a client supports NTLMv2, it might wind up using NTLMv1. I'll explain why that's bad in a minute. Looking at NetFlow going to the domain controllers, I was surprised to see a lot of traffic that was not going over SSL. And when I looked further, I found that it's not signed, so it's not encrypted. So there are LDAP binds: even though we have a separate LDAP service, the Active Directory domain controllers of course present an LDAP service as well.
And some clients are perfectly happy to send their credentials to Active Directory as an LDAP service completely unencrypted. So you may have user IDs and passwords flowing over the wire in clear text. If you look at the cookbook, and I'll explain the details of why later, one of the recommendations it makes is that you may want to look at something like BitLocker for your domain controllers, because there is a requirement that you encrypt the password store, the credential store. In our environment, we can't use BitLocker. It's not supported under VMware, and most of our domain controllers are virtualized. So those are our current AD challenges in dealing with InCommon Silver. Are they compliance issues or security issues? Well, what I've been telling people, mostly to convince them that we need to do the work to solve these problems, is that these are not compliance issues; these are security issues. They were brought to light by our compliance efforts, but the fact that you have user IDs and passwords flowing over the wire in clear text is certainly a compliance issue, but it's more a security issue, and we would want to solve it regardless of whether we were trying to comply with InCommon Silver. So our response so far has been a combination of technical controls and one or more alternative means statements. Again, that phrase "alternative means": if you're more familiar with the audit world, if you're an ISACA kind of person, you might think of an alternative means statement as a compensating control. I'm not going to go through all the profile requirements, because we only have an hour, but I'm going to look at the ones in section 4.2 in particular that affected us. One that the AD Silver Cookbook working groups have spent a lot of time on is 4.2.3.6, which says that you have to protect the authentication secrets.
Now there's a 4.2.3.6.2, which is specific about authentication secrets used by the IDP, and in our case, since the AD credentials are the same as your LDAP credentials, even though AD isn't the IDP's verifier, they're the same authentication secrets. If those credentials are sent between services, the services have to use protected channels, and again, that implies they have to use the NIST FIPS-approved encryption algorithms, with an exception which I'll propose later. The other one which hits us is that if non-IDP apps use them, then you have to have policies and procedures in place to minimize the risk of their exposure. Where this comes into play, and I'll talk about this, is if, for example, you've got a web page that uses form-based authentication. That's probably a non-IDP app, maybe just some random student-government-run application on your campus that decided it wanted to use LDAP to authenticate users. In that case, they don't need to use protected channels, but you have to have some sort of policy or practice in place to mitigate the risk of them doing things wrong, shooting themselves in the foot, and exposing your passwords. So how do these apply specifically to the Active Directory NTLMv1 and unsigned LDAP bind issues that face us? Well, as I said, we still accept NTLMv1, but NTLMv1 isn't actually passing the password; it's passing a response derived from the NT hash of the password. So who cares? Well, it turns out that for $34, you can run that NTLMv1 exchange through something like CloudCracker to get the NT hash, then run the result through a cracker again, and after your two runs, you get the password back. So even though NTLMv1 credentials are not your password, they're just a step away from your password. And given how weak those credentials are, again, it's a security issue, not a compliance issue.
So our first thought was, well, let's turn our shields up, go to LM compatibility level 5, and just stop accepting NTLMv1. It turns out that may not be possible, or at least not as easily as we thought, for one very particular reason, and that's RADIUS. MS-CHAPv2, the phrase used in the literature is, is "cryptographically equivalent" to NTLMv1. Let's just say it's NTLMv1. And EAP with MS-CHAPv2 is fairly ubiquitous; if you use eduroam or support eduroam, you have to deal with these issues. The TLS-tunneled EAP methods actually create a protected channel between the client and the RADIUS server, so you're good there: even though you're using NTLMv1, it's flowing over a protected channel. The problem is, if your RADIUS server is one of those that has implemented some of its authentication using Samba code, then between the RADIUS server and the domain controller your NTLMv1 or MS-CHAPv2 credentials are flowing over the wire again. This diagram from freeradius.org illustrates exactly our situation: we're okay up until we get to the RADIUS server, and since we are using FreeRADIUS on our campus, along with a couple of others for historic reasons, we do send NTLM to the domain controllers. So what are we going to do about that? Well, one solution is to move to EAP-TLS, so instead of using the MS-CHAPv2 credentials, you're actually relying on certificates issued to clients. But that means you have to do client certificate management for all the devices on campus connecting to wireless, and that's not going to happen in my lifetime, or at least certainly not before I retire. Another option would be to move off of FreeRADIUS to something like Microsoft's NPAS on Windows; then the credentials aren't flowing over the wire, and Microsoft has done some little tricks to make sure that if you go this route, you're not exposing your credentials in that way.
Another option would be to actually create protected channels between all your RADIUS servers and domain controllers, and this is the route we think we're going: you could do something like create IPsec tunnels between all the RADIUS servers and the domain controllers. That's what I thought we were going to do until the Windows administrators got cold feet, because we've got a big cluster of RADIUS servers, we've got a dozen or more domain controllers, and they thought it was getting way too complex. Another option, and this is my opinion: I think if you created something like a private backnet between them, it wouldn't technically meet the definition of a protected channel, but I suspect, based on conversations I've had, that if I submitted an alternative means statement to InCommon saying, hey, we call this a protected channel because it's a non-routable VLAN, non-addressable in any way, basically a private backnet (and if you really required me to, I could do it at layer one), that would be accepted as being as good as a protected channel based on encryption. Another option we're looking at is to use RADIUS proxies, actually putting them on the domain controllers themselves and then using something like RadSec to create TLS tunnels between the RADIUS servers serving the wireless infrastructure and the RADIUS proxies sitting on the domain controllers. We haven't picked the approach we're going to take yet; I think it'll be one of the latter two. We're still assessing it. The other thing you have to do, though, is what I call monitor and mitigate. If you're going to leave NTLMv1 on, you may have to put in some other control to deal with the fact that your domain controllers are still accepting NTLMv1 credentials. So this is a case where not everything can be fixed with a technical control, and there could be lots of reasons for that.
So we're going to implement the monitor-and-mitigate strategy, which we've documented and submitted to InCommon, and if you go to the Internet2 community pages for assurance, you'll see some statements up there describing monitor and mitigate. I'm going to go into a little bit of detail here about how that works. For NTLMv1, a logon event is generated, and that logon event, if it's an NTLM logon, has a field which tells you which version it is. I recommend that you filter these events, because in our case 98% of them are for "ANONYMOUS." It turns out that any time somebody goes to an IIS web server anonymously, even though they're not doing any authentication, it says, oops, I got an anonymous logon, and it generates an event which is completely useless and meaningless. So you can save yourself a lot of traffic and storage cost if you just filter those out. Also, I found that all these RADIUS authentications don't generate 4624 events. Apparently they're not considered logons; they're merely authentications. So they weren't showing up in any of my logging. What we did is we created some PowerShell scripts that filter the events and write them to flat files on a file share. I'm probably going to be replacing the PowerShell scripts with NXLog, if you're familiar with that; I'm finding it to be a better tool for this sort of thing. But I'm not somebody who likes to dictate to people how they should do their job, so I said to the Windows admins, here's a problem, tell me how you want to solve it. They came back with the PowerShell scripts, and I said, okay, great, go do that. Now we're a year down the road with this, and I'm going to go back to them and say, okay, let's look at this NXLog thing, because I think it's better for a lot of different reasons, and I'm going to be doing some education around that. So it writes out an event, and here's a sample.
It tells you the domain controller where it happened, the user, what domain they were authenticating to, what system they were on, and what the address of that system is. Then we have a Perl script which, once a day, goes through all those log files and creates some reports based on what it finds. If a user shows up in one of those events and that user is a valid InCommon Silver user, then we take Silver away from that person and create a help desk ticket saying, you've got to go help this person, because they did something bad on the network: they did something that generated an NTLMv1 logon. That exposed their credentials in an unacceptable way, so we had to turn off Silver. In the long run, I don't think this is scalable, but it's good enough for us for now. What happens is that if the help desk can't figure out how to help the user, the user goes and gets Silver re-enabled for themselves, and then they just wind up losing it again a week later. So we have to actually fix the underlying causes. As I say here, this might be good enough for compliance, and I think it'll be good enough for achieving Silver, but it doesn't really solve the underlying security problem, in that we still have NTLMv1 credentials flying over the network. Fortunately, we don't have a lot of them, and I think in our case we will eventually be able to get to a better technical control. But for now, the next step is to go fix the problem at the source: talk to the user, find out what they did (we know what system they were using), find out what it is about that system that causes it to use NTLMv1, and then fix it to use something else. The second problem with credentials going over the network in the clear with Active Directory is LDAP binds in the clear. By default, Active Directory is perfectly happy to provide LDAP services and take binds in the clear. There are some things you can do to fix this.
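The once-a-day report-and-revoke pass described above can be sketched roughly like this. The log format, field layout, and user names here are all invented for illustration; they are not the actual format our PowerShell scripts write, and the real pipeline is Perl, not Python.

```python
# Hypothetical flat-file format: timestamp, DC, user, client host,
# client address, event type. Modeled loosely on the fields described
# in the talk, purely for illustration.
SAMPLE_LOG = """\
2014-03-01T08:15:00 DC01 jdoe WS-0417 10.1.2.3 NTLMv1
2014-03-01T09:02:11 DC02 asmith LAB-MAC-07 10.1.9.8 UNSIGNED_LDAP_BIND
2014-03-01T09:30:45 DC01 jdoe WS-0417 10.1.2.3 NTLMv1
"""

SILVER_USERS = {"jdoe", "rlee"}  # users currently holding the Silver qualifier

def process(log_text, silver_users):
    """Return (revoked, tickets): Silver users seen in a bad event,
    plus one help-desk ticket per offending (user, host, event) triple."""
    revoked, tickets = set(), set()
    for line in log_text.splitlines():
        ts, dc, user, host, addr, event = line.split()
        if user in silver_users:       # non-Silver users generate no action
            revoked.add(user)
            tickets.add((user, host, event))
    return revoked, tickets

revoked, tickets = process(SAMPLE_LOG, SILVER_USERS)
```

Here `jdoe` would lose Silver and get one ticket (the duplicate event collapses), while `asmith`, who never had Silver, generates nothing, which matches the talk's point that the report only acts on valid Silver users.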
You can turn on LDAP signing, which it turns out also encrypts the payload. You can require LDAP to use SSL or TLS, so basically only allow LDAPS. Or you can do IPsec for everyone, which again is like putting certificates on all the mobile clients; it's not something we're going to do. Requiring LDAPS is likely a non-starter, because some schools have found that it adversely impacts your Windows clients. And hey, if you're going to configure your domain controllers so that your Windows clients are impacted, sort of what's the point? Active Directory is there for your Windows environment. However, if you turn on LDAP signing, your Windows clients won't be affected; they're happy to deal with that. But it's likely to break some non-Windows clients, so Macs and Samba may have issues, and you then have to go mitigate those. Just like with the NTLMv1 events, if you're using Windows Server 2008 or later, you can turn on an event, 2889, which says, very verbosely, that the following client performed a SASL bind without signing, or a simple bind in the clear. I'm not going to read the whole thing. What we did, again, is we had the PowerShell scripts go through, filter those, reduce that verbose event to the facts we want, and write it out to a log file. And the same Perl script that handles the NTLMv1 events handles these events as well, in the same way. Again, this may be good for compliance, but it doesn't solve the underlying security problem that you've got passwords going across the network in the clear. Somebody still has to go talk to the client and find out: what was it that you did that caused this to happen? We've found a few common scenarios, but there are some that we still haven't been able to resolve. For example, Microsoft Office on the Macintosh by default loves to do these unsigned binds to Active Directory as an LDAP service when it's doing lookups for people. You can go in and change it to do that securely, but it's not the default.
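Reducing the verbose 2889 event down to the facts we keep might look something like this sketch. The event text below is a paraphrase (the exact wording varies by Windows version), and the extraction logic is illustrative only, not our production script.

```python
import re

# Paraphrased 2889-style message; treat the exact wording as an assumption.
EVENT_2889 = (
    "The following client performed a SASL LDAP bind without requesting "
    "signing (integrity verification), or performed a simple bind over a "
    "clear text (non-SSL/TLS-encrypted) LDAP connection. "
    "Client IP address: 10.1.9.8:49152 "
    "Identity the client attempted to authenticate as: CAMPUS\\asmith"
)

def reduce_event(text):
    """Boil the verbose event down to the two facts we log: who, and from where."""
    ip = re.search(r"Client IP address:\s*([\d.]+):\d+", text).group(1)
    user = re.search(r"authenticate as:\s*(\S+)", text).group(1)
    return {"ip": ip, "user": user}
```

The reduced record is what lands in the flat file, so the downstream report script only ever sees the user and the client address, not the paragraph of boilerplate.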
In other circumstances, we've had cases where people generated an unsigned LDAP bind, we went and talked to them, we tried to dig through the logs, and we never were able to figure out what it was that caused the problem. In that case, at least in one infamous case in my office, we told the user, look, you don't actually need Silver right now. There's no use case for it in your case. So until we figure out what the problem is, just leave things as they are and we'll work on it. A week later, he went back, reasserted Silver, and then lost it promptly, and got quite upset because he did something we told him not to do. But I think we all deal with situations like that too. The other profile requirement I'm going to talk about, much more briefly, is the one that says that if you're storing authentication secrets, then they need to be secured. And there are three ways the requirement gives you to secure them. And again, in our case, remember, these authentication secrets are not the authentication secrets our IdP, or LDAP, is using directly, but they are the same authentication secrets those services use, because we have one user ID and one password for everything. So the AD password store needs to protect these authentication secrets either by storing them using a salted hash, encrypting them using an approved algorithm, or protecting them using something approved for NIST Level of Assurance 3 or 4, which none of us is going to do. We tend not to be protecting the nuclear arsenal, so Level of Assurance 4 is not where any of us are really aimed. The problem is, Active Directory does none of these by default. It doesn't salt the hash, it just stores the hash. The algorithms that it uses to do the encryption are not approved. And so your Active Directory store cannot meet this requirement if you want to do Silver. The AD Silver Cookbook recommends that you use some other third-party technology to encrypt the password store.
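To make the "salted hash" option concrete, here is a minimal sketch of what a compliant store would look like in principle. This is not how Active Directory works (AD stores an unsalted hash, which is exactly the problem); it just illustrates the requirement, using PBKDF2 over SHA-256, a FIPS-approved hash.

```python
import hashlib
import hmac
import os

# Illustration of the "salted hash" option in the requirement above.
# Active Directory does NOT do this; this shows what the profile is asking for.

def store_password(password, iterations=100_000):
    """Return (salt, digest) using PBKDF2 with SHA-256."""
    salt = os.urandom(16)  # unique random salt per credential
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=100_000):
    """Recompute the salted hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Because each credential gets its own random salt, two users with the same password produce different stored digests, which is what defeats precomputed-hash attacks against the store.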
And the example we provide in the Cookbook is BitLocker, because it's free, it's there. And BitLocker will use an approved algorithm, as it turns out; I think BitLocker will use AES as its algorithm for doing the encryption. The problem in my case is that BitLocker is not supported under VMware, and some of our domain controllers are virtualized. Now, there are other encryption solutions out there, but we haven't vetted any of them yet. So if we were to go down the route of getting a third-party encryption product that was aimed at VMware virtual disks, there would be an adoption speed bump for us. So we're going to try a different approach, and that is we're going to create an alternative means statement based on our risk analysis of what this whole requirement is trying to address. So for example, on the domain controllers themselves, we're whitelisting applications, so that if there were a breach, it would be much more difficult for somebody to exfiltrate data by introducing tools which weren't native to the platform and already whitelisted. We have physical media management controls. So if the requirement to encrypt the store is there because you're afraid that a hard drive is going to walk out, well, for one thing, these are virtual drives, so they're spread out all over the storage infrastructure. But we have physical media management controls that we've already been audited against for FISMA compliance, so I think that that circumstance would be covered in a federally accepted way. We do use multi-factor authentication for administrative access to all the Windows servers in our data center, and we use bastion hosts. We rely heavily on NetFlow analysis, so we've got alarms specifically set up looking at the traffic on the domain controllers. And basically, we're doing anything that we can think of to mitigate against data exfiltration from the domain controllers. So we'll see if that gets accepted as an alternative means statement.
But I was an English major for a while, so creative writing is in my background and I think I can manage it. Now, the last thing I'm going to talk about — this is not an Active Directory issue, but this goes back to the requirement that you have to protect the authentication secrets on the wire, so it's similar to the NTLM v1 issue. But it's the 4.2.3.6.3 requirement that says that you have to have policies and procedures to minimize the risk of exposing those credentials to non-IdP applications. So the IdP being the identity provider, the IdPO being the Identity Provider Operator. The things under our direct control all use protected channels and approved algorithms, but there are lots of things out on the campus that could be using the same credential. So we're locked down, but on our campus anybody can use LDAP as an authentication service. So that means that even though we encourage the use of Shibboleth, there are still lots of forms-based web pages on campus where people are entering their network ID and their password to get into some application that is not centrally administered. I mentioned student government a few minutes ago. As I said, anybody on campus can put up an application and use our authentication service. Now, because we've locked down the LDAP service itself, the communications between their web server and our LDAP server are secure. On the other hand, we have no way of knowing in advance that they've done the right things on their web server, like configuring it to use SSL on the page where users are putting in their user IDs and passwords. So what we've done is we've created some scripts that go through the LDAP logs and identify all the addresses where our binds are coming in from. We look at those addresses, try to figure out what kinds of services might be running and what kind of ports are open. Then we crawl over those addresses to look and see if we can find web pages where users are entering user IDs and passwords.
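The per-page check in that crawl might look like the sketch below. The real crawling, port scanning, and ticketing are omitted; this just shows one plausible test — does the page collect a password, and was it served over plain HTTP? The regex-based form detection is an assumption for illustration.

```python
import re
from urllib.parse import urlparse

# Sketch of the crawler's per-page check: flag pages that collect a
# password but are not served over SSL/TLS.

def collects_password(html):
    """True if the page contains a password form field."""
    return re.search(r'<input[^>]+type=["\']?password', html, re.I) is not None

def needs_ticket(url, html):
    """True if this page should generate a ticket: it asks for a
    password but was fetched over plain http rather than https."""
    scheme = urlparse(url).scheme.lower()
    return collects_password(html) and scheme != "https"
```

Each flagged (address, page) pair then becomes a conversation with whoever runs that machine.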
And if we find any that are not using SSL, we create a ticket, and then somebody has a conversation with whoever's responsible for that machine to try to motivate them to fix the problem. Motivation meaning, if you don't get it fixed within a reasonable time frame, we'll actually block you from the network. So that's it; we went from the 50,000-foot level down to about the two-foot level, so there's quite a wide range there. Does anybody have any questions about what we presented, or about assurance in general? Any questions from the internet? Hi. I have a question, and that is, how do you go about designating this person or that person as meeting a certain level of assurance as defined by InCommon's Bronze and Silver profiles? You said that you were able to revoke an individual's Silver status. Do you just award that by default and then remove it when somebody does something they're not supposed to, or do people specifically apply for this, or how is that all managed? So the 50,000-foot answer is, not everybody necessarily needs to have a level of assurance, or even the same level of assurance. It depends on what services they need to access and what the service provider is going to require. The way we've done that on our campus is — and I didn't go into the business process aspects of all this; I went right into the technical because I knew that there were a lot of technical people in the audience. When we first started down this road, my belief was that the business process aspects were going to be the most difficult. The questions around, how do you actually verify that this person who says they're Ron Thielen is Ron Thielen? And there are requirements spelled out for ways to do that, but actually implementing the business processes to do that can be quite tricky.
So in our case, if somebody feels that they need InCommon Silver — and I'll talk about that in a second — then they actually have to show up in person to our identity and privileges office and prove that they are who they say they are. They also have to register a non-university electronic address. So you have to have an electronic address of record so that we can contact you in case there is some security-related event that affects your ability to have your Silver assertions. Once you do that, and once you meet the password entropy requirements — so you have to have changed your password within the past year, that sort of thing — then we actually have a page where, as you progress through meeting all those requirements, you get a little green check mark, and when you have all the little green check marks at the end, you can now get Silver assertions made on your behalf. How that's all handled under the covers is through a bunch of Grouper groups, and so what we do when we take away Silver from somebody is we just put them in a Grouper group that says Silver denied because they failed the AD audit. And then once their problem is resolved, they get taken out of that group, they then have to go change their password because it got exposed, and then they automatically get their Silver back. Now, in the case I talked about where I said to that user, you don't need Silver, it was literally because they weren't using any services that required them to have Silver. They were just doing it because they wanted to be good citizens and they thought that going through the Silver process would be useful to them, and eventually it will be. But until I can figure out what the heck his Macintosh is doing to make him lose Silver on a regular basis, there really isn't any point for him. Did you have anything you wanted to add? I was just going to say that a number of campuses have addressed this problem.
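The "green check mark" eligibility logic described above can be sketched like this. The requirement names are paraphrased from the talk, not taken from the IAP text, and the Grouper integration is reduced to a simple denied-group set.

```python
from datetime import date, timedelta

# Sketch of Silver eligibility: all requirements met, password changed
# within the past year, and not in the "Silver denied" group.
REQUIREMENTS = ("identity_proofed_in_person",
                "external_address_of_record",
                "password_meets_entropy")

def silver_eligible(user, denied_group, today):
    """A user may have Silver asserted only if every requirement is met,
    the password is less than a year old, and the user is not in the
    denied group (e.g. after failing the AD audit)."""
    if user["id"] in denied_group:
        return False
    if today - user["password_changed"] > timedelta(days=365):
        return False
    return all(user.get(req, False) for req in REQUIREMENTS)
```

Revoking Silver is then just adding the user's ID to the denied group; restoring it is removing them after the password change, which matches the Grouper-group flow the speaker describes.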
I think Penn State in particular developed a central person registry — it's actually available as open source — that tracks assurance as part of that. So they do it in their central person registry, and I think they have seven or eight different fields under each entry to make sure that it addresses all the specific requirements they have for that individual. Because not only does the organization get certified, as you know, but then for each individual it's a transactional thing that's expressed using authentication context in SAML — a SAML 2 authentication context assertion saying, at that point in time, is that individual Bronze or Silver. So what Ron is doing is exactly what needs to be done, and that is, if a person does something that compromises their Silver credential, it needs to be removed and then re-elevated again when the Silver issues are addressed. Can you do it with the microphone, please, so we have the... Does that drop that person down to a lower level, like to Bronze, or do they lose any kind of assertion at that point? It probably depends on the campus and what their policies are, but yeah, a lot of times it can be Bronze. I think it probably depends on the breach and the issue, too. Right, we actually... We've been concentrating on Silver because, for us, we think that's where the value is. We'll worry about Bronze later. Once we get Silver, we think doing Bronze will be pretty easy for us. But in cases like that, we're going to have to think through whether they should be dropped down to Bronze or have all their assertion capability removed. And it will depend on the nature of the issue that caused them to lose Silver. So if you go through the profile, you'll see some things are highlighted as being Bronze requirements and some things are highlighted as being Silver requirements. So if the thing that they did violates a Silver requirement but not a Bronze requirement, then theoretically we could just drop them down to Bronze.
But that will require a lot more logic on our end, I think. So, you mentioned the thought of having two authentication stores to isolate these primarily Microsoft issues — two different authentication stores, one for Silver and the other for the NTLM v1 requirement. Did you just rule that out because of convenience? Was there just too large a community that would be juggling the two credentials? I'd say there were two things. The latter is the primary one, in that we just decided that in our case the community is too large for us to go to the extent of setting up a whole new credential system for them just to deal with this issue. And the other one is that we have a senior director who really believes strongly that this should be possible, and that if we can do this and show people best practices and how to accomplish this, it'll be good for the entire community. And so it's sort of my mission statement to go forth and show that this can actually be done. If we can solve the hard problem, then you don't have to. That's right. And I guess in keeping with that, I wanted to point you at the questions and answers slides here that go over, basically, the community that's out there to help you implement these. And I think they're valuable even if you're not interested in InCommon assurance and getting certified — using them to inform your authentication service. As you can see from Ron's in-depth discussion, it really helps you shine a light on those cobwebby little corners that you don't normally look in. You kind of assume that it's all working, and when you do a deep dive, it helps you check that. So I think even if you're not interested in InCommon assurance, it's a helpful thing. And as I mentioned, it helps with the due diligence. But there is a group out there. We have an entire wiki devoted to helping folks deploy this.
The AD Assurance Cookbook — we did a webinar, actually, today at 11 o'clock, and that will be put up there on that wiki as well, talking about what the AD Assurance Cookbook is about, what the major components of it are, and so forth. And you can take a look at that. We also have a case study from Virginia Tech. They are Bronze and Silver certified, and they put all their information out there. In terms of NTLM v1, we had a chat with Brian Arkills, and he's going to be doing a presentation on getting rid of NTLM v1 at the University of Washington and his adventures in doing that. So he's going to be sharing that also with the assurance community. So this is basic security stuff, folks. It would be great to have you join us, and thank you very much for coming. So, we've been given the hook. If you've got any other questions, here's our contact information. Feel free to shoot an email to us. Thanks for coming.