My name is Richard Evans. Tony and I met just yesterday, and we realized that we had an awful lot in common in terms of our interests for being at this conference. So when Ginger approached us and said that she had a slot to fill, we leapt at the opportunity to talk about a couple of things that are close to our hearts. We're going to do a little bit of presentation jazz here, and we don't know how the timing is going to work out, so I'll drive and Tony will help keep me honest. What is our time? Are we half an hour? Okay.

So what we'd like to talk to you about today is security, specifically enterprise Semantic MediaWiki and the Internet of Things, which is a term that I haven't heard thrown around here yet. In particular, we would like to talk about the necessity of security and how it relates to overall system modeling. This is basically the outline. Again, we put these charts together a little bit earlier today, so we're going to try to run through this fast and get to the end, where we hope to open up for good questions and conversation.

What I really wanted to do was start out by showing this chart, which I shamelessly stole from the World Wide Web Consortium; you can find it easily on the web. It was created by the folks that are currently setting all of the standards. If you can read the small green text, the PC era is dated at around 1977, and in the top right, Web 4.0 is called out at present day. There are two axes on this chart. The horizontal x-axis is the connections between people; Brian's talk just prior to ours is a good way to relate to that axis, the importance of information being relatable to authors, consumers, and developers. The vertical axis is the connections between information; in large organizations, you can easily see that information has to flow from policy creators, subject matter experts, and so forth. The graph itself is a way to locate technologies as they emerged. I was hoping to have a pointer here; hold on, let me see. Yes, I've got my laser pointer. So you can see the wikis land about here, then you have semantic search, and ultimately the semantic web. And where is all of this going?

The important point I want to draw attention to is that this is the future. For a lot of us in our organizations, we're sort of suffering; a lot of what we live with day to day is still spreadsheets and email and file servers. At an enterprise level, what we're trying to advocate is that corporations and companies can move up this line. And what does it really come down to? As you move further up this graph, there's an increasing need to operate in larger environments with security needs at multiple levels. When you're working way down low on the chart, you can handle security with simple file system conventions, but as you go further up toward Web 3.0 and Web 4.0, we really need multi-layered security mechanisms that allow data to move through the organization properly.

Just real briefly, this framing was very helpful to me: the original Web was when we were able to put documents online, embed links, and quickly go from one document to another just by clicking.
Web 2.0 was the realization that, hey, we could pay attention to who's visiting the websites and tailor the content based on who they are. Web 3.0, or the semantic web, is when the data itself is linked inside the documents. We no longer imagine the map of the Internet as documents being the nodes with links connecting everything; we're actually attributing meaning to those links, so we know why documents are linked. The links point from data to other data. In the same way that Web 1.0 allowed you to skip from web page to web page without worrying too much about the underlying server platform, in the Web 3.0 world we can go from data to data without worrying too much about what document it lives in. Ultimately, what that opens up is Web 4.0, the intelligent web: once you have everything semantically encoded, you can begin to put together very sophisticated AI and intelligent agents that can do a lot of the grunt work and heavy lifting of assembling weekly, monthly, and yearly reports and doing the analytics, and a lot of great data science emerges from that.

I threw this chart in quickly because during the lunch break I realized that not everyone here fully knows the distinction between a regular wiki and a semantic wiki. These are some words I put together from an email I sent to a colleague a while back. What is a wiki? Everybody asks that. We use this term; we encourage people to use them. I thought it was really good to go back to the beginning. I'm not super knowledgeable about the history, but I got a quote, right? Ward Cunningham: the simplest online database that could possibly work. Cutting through all of the complexities of a formal database front end, just put the database on the web and let people edit the data: the simplest online database that could possibly work. And then the semantic wiki does what I said previously: it allows the data to interrelate. Data that is encoded in a page can be harvested, mined, and aggregated into other pages. That's really what we're talking about here, and there's a short sketch of this triple model below.

So then we switch over to: what is the Internet of Things? Simplifying to the extreme, we start in the lower left, where we're basically building devices that we can talk to. We're connecting them on our desktops; we're connecting our phones to our laptops; we're connecting our televisions to our home networks. Once we graduate beyond that, we're connecting things out into the worldwide web at large, and things get unmanageable at that level. The Semantic Web of Things is where machines are actually able to make sense of all the interconnected devices out there. This is particularly important to the enterprise world. The blue line represents the human world population, and the green line represents the number of devices connected to the web. Currently we're seeing, on average, about six devices per person in the Internet of Things, and very quickly we're going to go well beyond the world population in terms of devices on the net. So if we take a step back, and this just represents 30 seconds of my brainstorming about where the user bases are: we've got governments and private industry.
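As promised, here is a minimal sketch of the triple model described above, using the third-party rdflib package. This is an assumption for illustration only: Semantic MediaWiki has its own internal store and annotation syntax, and the names Device42, hasTitle, and lengthInches are hypothetical examples, not SMW properties.

```python
# Minimal sketch of subject-predicate-object triples using rdflib
# (pip install rdflib). Names below are hypothetical examples.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/wiki/")
g = Graph()

# Each wiki page contributes triples about the thing it describes.
g.add((EX.Device42, EX.hasTitle, Literal("GPS antenna")))
g.add((EX.Device42, EX.lengthInches, Literal(12)))
g.add((EX.Device42, EX.ownedBy, EX.RichardEvans))

# "Harvesting, mining, and aggregating" data from many pages is then
# just a query over the triples, here: everything that has a title.
for subject, _, title in g.triples((None, EX.hasTitle, None)):
    print(subject, "has title", title)
```

The point of the sketch is that once data lives as triples rather than as prose inside documents, aggregation pages can be built by query instead of by hand.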
Continuing that brainstorm: there are service providers, customers, people selling things, people buying things. We have product communities that emerge, gaming communities that emerge. Any product line or industry will have a community of practitioners, people intensely interested in that topic, and you'll get those communities. And then you also have the whole web experience of personal expression; everybody wants to share with the people they care about. Similarly, if we look at the data applications out there, we've got law, regulatory, policy... I won't do the whole list.

What I want to start getting to at the end of the first half of this talk is that all of this involves increasing, multiple layers of security. Sensitive But Unclassified, SBU, is a term that I still use a lot. There's general proprietary data: if you have access to somebody's invention blueprints and you're hosting them, you have to manage that properly, and so forth. Tony, who will talk shortly, is an expert on Controlled Unclassified Information. And then even in industry we have to worry about data being exposed to countries it's not supposed to be shared with; that's ITAR.

Ultimately, think of life cycle management for data. For most data we have a term called data at rest, or DAR; you can imagine all the information being generated every day in the world ultimately becoming data at rest. But at the same time, there's also data that's not yet a historical fact: we have a calendar of events, things that we hope will happen. So we have all this data that hasn't quite reached the data-at-rest point of its life cycle. And then we also have to worry about the retention schedule: there's all this information accumulating, and at what point does it need to be reviewed and discarded?

Okay, so now we get to the heart of it: security and access control. Tony and I realized that this is the heart of the interface between the way MediaWiki is today and what we want it to be tomorrow. Access control is everything. There are services... I should have put infrastructure at the top. Every network begins with a foundation of infrastructure, then there are basic network services on top of that, and then applications on top of the services. And every one of them, at every level, needs to worry about confidentiality, integrity, and availability. I'm simplifying this to say that confidentiality and availability are the who, when, and why of access control, and integrity is how we enforce and implement the who, the when, and the why; it's the how.

Fortunately, our government has an entire branch for informing us of official things like how long a meter is and what a second is. The National Institute of Standards and Technology has been given the responsibility of coming up with security guidance and setting the standards for all government information systems. The public is certainly not required to follow these things, but any information system in the government does. There are three basic documents. FIPS 199 gives you the high-level categorizations. FIPS 200 gives you the minimum areas of concern that must be addressed. And NIST SP 800-53 goes into extensive detail on how you can address those issues, with guidance and specific controls. As a simple example, this is a chart that I put together for a security presentation a while back.
On the left, you have the front cover of FIPS 199, and on the right, the most relevant section for what we're talking about: confidentiality, integrity, and availability defined, along with the three categories, low, moderate, and high, which are how they classify those three dimensions of IT security. That's FIPS 199. Then FIPS 200 says that for each of these dimensions there are 17 areas that must be addressed. When I put these charts together, we were having a discussion about whether we needed to run network cables in covered cable trays: if we're going to run from one information system to another somewhere else, can we just commingle our cables with other cables, or do we need a protected distribution system, as they say? That falls under physical and environmental protection. And that's just one example. Again, we're focusing on access control, where there are 25 specific control sets associated with access control; you can find this document very easily online. Then for each one of those, you visit NIST SP 800-53, which goes into endless detail: you look up whether your system is low, moderate, or high and see how many of the different controls apply.

Stepping back for a second: if you're in the government and you're trying to bring a new information system online, the first thing you have to do is categorize it as low, moderate, or high, and then you have to come up with a security plan that addresses literally hundreds upon hundreds of security controls. That's one of the big barriers, I think, that all enterprises and corporations face in onboarding a major new way of doing information management or knowledge capture. There's a small sketch of the categorization step below.

The next part we want to talk about, and I might have these out of order, so maybe we can come back to it in the question period, is why we would want to push for enterprise Semantic MediaWiki in our organizations. I just jotted down a couple of points; at some level, if you're here, this is the one chart you already grasp: the ability to model corporate data, the ability to capture data. Ultimately it's the semantic triple that breaks with the traditional corporate ontology, the attempt to build a Newtonian hierarchy that perfectly explains, in one consistent way, how every bit of corporate data falls into its proper place. In reality, there are incompatible ontologies, and the semantic triple model and the semantic web allow incompatible ontologies to commingle; the relationships between them are what you work with. That's why we're so excited about this.

Let me also spend one moment here and connect this back to the Internet of Things. One of the earliest realizations we want people to have with Semantic MediaWiki is this idea of the semantic triple, and that the user is the first object known to the database; everything can then be in relation to the user. But when we get into an Internet of Things, we're also going to have a page for every device that the site has any access to or knowledge of.
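As a small illustration of the categorization step mentioned above, here is a minimal sketch of the FIPS 199/200 "high water mark" rule, under which the overall system impact level is the maximum impact across the three security objectives. The example ratings are hypothetical.

```python
# Sketch of FIPS 199/200 categorization: each objective
# (confidentiality, integrity, availability) is rated LOW, MODERATE,
# or HIGH, and the overall system level is the "high water mark",
# i.e. the maximum of the three. Example ratings are hypothetical.
LEVELS = {"LOW": 1, "MODERATE": 2, "HIGH": 3}

def system_impact(confidentiality: str, integrity: str, availability: str) -> str:
    """Overall impact level = high water mark of the three objectives."""
    return max((confidentiality, integrity, availability), key=LEVELS.__getitem__)

# Hypothetical example: a wiki holding sensitive but unclassified data.
print(system_impact("MODERATE", "MODERATE", "LOW"))  # -> MODERATE
```

The resulting level (low, moderate, or high) then determines which controls from NIST SP 800-53 apply to the system's security plan.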
Within that model, we can build the relationship between subject, object, and predicate, where even the predicates are objects: "has title", as a predicate, is something you could analyze and talk about as an object. The concept of "has title" is an object in this paradigm. There's a short sketch of this at the end of this passage.

Okay, so the challenges. This is again just quick brainstorming of what the challenges would be. Those of us who are the technologists and the ontologists, the people we want using these systems if we can get them implemented, are not the CIOs, and we don't carry the responsibility the chief information officer has. The chief information officer is the person who would be in trouble if your organization leaked data that it was not supposed to leak. So first you need a business rationale, and then you have to be able to convince the chief information officer that this application will address the confidentiality, the integrity, and the availability of the data at the levels at which it's been classified. Ultimately, enterprise CIOs want to purchase the infrastructure, services, and applications that meet their security plan; they don't want to have to create a security plan for a new piece of technology. CIOs are also responsible for ensuring that applications have training available to the enterprise users. If you select a new software platform and roll it out, you've got an entire enterprise of employees wondering what they're supposed to do with it. So there's the training piece, and the idea that people will just gravitate toward it and figure it out is not realistic. These are the main reasons why the chief information officer of an enterprise or large organization would today be hesitant to choose MediaWiki, and we're hoping to work with you and change that.

So this is an intentionally-left-blank section of the presentation. I just want everyone to ponder: do we want enterprise MediaWiki? Do we want Semantic MediaWiki to be the de facto standard for the secure Web 4.0 semantic web of things? I do, right? So we're advocating that, as things stand today, we have to address the topic of access control. What we're ultimately trying to encourage is that the MediaWiki community, at whatever level, whether it's the Wikimedia Foundation or this group of programmers and developers, changes the official stance from "if it's a CMS that you need, you might want to think of a different application." We would love to hear the phrase: "if you need access control, here's what you do." We would love for this community to endorse a short set of extensions that, when configured properly, provide access controls that meet the NIST access control (AC) control sets. We would like this community to value the security impact of extensions as much as it values, in my own personal observation, localization. I actually tried to make an extension about nine years ago and failed miserably to localize it properly, and that was all I heard from the community: localization is important. It's what enables internationalization, if I'm saying that right, and it is important that extensions support it. But it's also important, and it's a value I would hope the community supports, that the security implications of extensions are monitored and paid attention to. And maybe they are, right? I'm not the person to assess that.
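As mentioned above, here is a minimal sketch of the predicate-as-object idea, again using the third-party rdflib package as a stand-in (an assumption: SMW's internal store works differently, and Device42 and hasTitle are hypothetical names).

```python
# Predicates as first-class objects: the predicate of one triple can
# itself be the subject of other triples, so "has title" becomes a
# thing you can describe and query. Names are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/wiki/")
g = Graph()

# An ordinary data triple: a device page carries a "has title" value.
g.add((EX.Device42, EX.hasTitle, Literal("GPS antenna")))

# The predicate itself described as an object in the same graph.
g.add((EX.hasTitle, RDFS.label, Literal("has title")))
g.add((EX.hasTitle, RDFS.comment, Literal("Gives the display title of a thing.")))

for _, _, label in g.triples((EX.hasTitle, RDFS.label, None)):
    print("Predicate described as:", label)
```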
One last ask: we would love for this community to provide any enterprise chief information officer with a security model they can implement. Showcase an installation of MediaWiki that meets the moderate categorization of the NIST documents, and I think you would see enterprise adoption surge.

A couple of concluding remarks; these are just my own thoughts. At present, a MediaWiki site, running with or without third-party extensions, can only be used as a NIST moderate information system as long as it is fully contained within a portal. And that's tragic, because yes, you can deploy an enterprise MediaWiki, but you wrap it in a portal and don't let it talk to anything else. If I run back to one of the earliest charts: we have a technology that can operate up here, and because we can't do access control, it's forced to live in a box down here. Okay. Thank you.

I'll just add something; just go back to your last slides. NIST moderate is the baseline for all the CUI enforcement that's going to happen. Student records are CUI. I don't know if you realize: all medical records are CUI. It's Controlled Unclassified Information. There are 100 categories of this. Even information about historic properties in the National Park Service has a CUI label on it. There are all sorts of things that are going to kick in the baseline requirement, which is moderate. So if you're going to store any of this stuff, you're going to start with moderate, and the more exciting stuff goes up from there and requires higher. And the moment you've got some of this information in a federal agency, or in a contractor dealing with the federal government who carries it, there's another document, NIST SP 800-171, which is the guidance to government contractors on how they've got to behave with CUI. So this is going to kick in, and we're going to suddenly see: wait a minute, you have to assess every system to find out what it's got on it, and if it's got CUI, you have to implement these controls. That was the survey mechanism I was talking about earlier, the gathering process to find out what's going to be on a system and therefore what controls you've got to put on it. We feel that as this gets escalated, and enforcement is just kicking in right now, this next year, I think, is the year that compliance is required in the federal government. So you're going to see this kicking in, and you're going to see the users of all these types of systems being told: you had better have the controls in place. So that is our presentation. We have about five minutes left of our allotted time. Questions?

Thank you very much for that. Certainly from my perspective, I 100% agree that there is absolutely a use case for the requirement to split up data, especially with GDPR coming in Europe; again, exactly the same thing. Are we in a similar situation, in that the internet was always designed to be open and it's more difficult now to retrospectively secure it? Is that the same with MediaWiki? Are we fundamentally breaking how it is designed, or is this a relatively simple fix?

So I don't know the answer to whether it's a relatively simple fix or not; I am not the MediaWiki expert.

Hi, I'm a MediaWiki developer.
So for most ACL-type stuff in MediaWiki, it's not that bad. But in terms of read restrictions, there are a lot of places that make simplifying assumptions: either everyone can read, or, if there are read restrictions, they split into a binary where this group can do nothing and this group can do everything. Once you start getting into very limited read setups, where people can read some things and also have access to all the special pages and the API, there are quite a few things that would need to change in MediaWiki to make that happen safely.

Just a comment. What you're going to find is that if you've got a mixture of these markings on your wiki, you're going to make a decision based on the marking of the content; it's not an overall decision. If this content is marked at a certain level, then certain controls are required for it. So we're going to have to get to fine-grained, attribute-based access control on the content.

If it were just a matter of "are you allowed to edit this page", that could be done in MediaWiki without much trouble. But "can you view this page", and not just directly view it, but with no hacky backdoor way of getting at what is on the page, that's not really there at the moment. Our parser, for instance, assumes that the current user can view all the templates and everything. There are a lot of simplifying assumptions that would need to be changed. There's a small sketch of the transclusion problem below.

If you're talking about storing mixed classifications on a single system, I don't believe you can get there. The reason I don't believe you can get there is that you have data-at-rest requirements. Then there are all sorts of special pages; if somebody can get into the images folder, for example. And then you've got portion-marking requirements. I think you're really opening Pandora's box in terms of what I believe you're trying to say. We've gone a different route; it's a matter of implementing wikis as a whole. Rather than trying to share within one wiki and reduce access, the NASA model has gone the other way. I believe it's the only way you can actually meet the requirements for encrypting data at rest, for example, and the access control issues. You get into the need-to-know requirements and things like that, and you open a lot more cans of worms than you initially think. So we went down that same road.

Nonetheless, what's the famous Galileo quote? "And yet it moves." The previous talk spoke about the relationship age, and in lieu of MediaWiki making itself capable of operating at this level, there will be for-profit solution providers, and they will own the day. So it really comes down to a choice. I believe the technical solution to this probably lies somewhere in the OAuth 2 area: in other words, scope definitions, tokens issued by an authorization server, and some of that built into the foundation of MediaWiki.

I just wanted to say, I don't think anyone objects to making MediaWiki useful for this purpose. But for most developers this would be very low on the priority list; no one at the Wikimedia Foundation is going to work on this. I suspect, though, that nobody objects in principle. There are some big political statements, "oh, this is not how wikis work; wikis are everyone-can-edit-and-do-everything," but there's no objection to making MediaWiki able to do this. It's a little complicated, because it digs deep into how MediaWiki works, but if someone wanted to go do it, I can't imagine people would object to merging the patches.
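To make the parser point concrete, here is a toy model (emphatically not MediaWiki's actual parser) of the transclusion leak just described: a renderer that expands templates without re-checking the reader's rights exposes restricted text inside an unrestricted page. All page titles and users here are invented.

```python
# Toy model of the transclusion leak: page text can transclude other
# pages, and a naive renderer expands them with no permission check.
PAGES = {
    "Public:Overview": "Specs: {{Secret:Chemistry}}",   # transcludes a restricted page
    "Secret:Chemistry": "Proprietary coating formula",  # read-restricted content
}
READABLE = {"alice": {"Public:Overview"}}  # alice may NOT read Secret:Chemistry

def render_naive(title: str) -> str:
    """Expands {{...}} with no permission check: restricted text leaks."""
    text = PAGES[title]
    while "{{" in text:
        inner = text[text.index("{{") + 2 : text.index("}}")]
        text = text.replace("{{" + inner + "}}", PAGES[inner])
    return text

def render_checked(title: str, user: str) -> str:
    """Re-checks the reader's rights for every transcluded page."""
    text = PAGES[title]
    while "{{" in text:
        inner = text[text.index("{{") + 2 : text.index("}}")]
        body = PAGES[inner] if inner in READABLE.get(user, set()) else "[restricted]"
        text = text.replace("{{" + inner + "}}", body)
    return text

print(render_naive("Public:Overview"))             # leaks the formula
print(render_checked("Public:Overview", "alice"))  # Specs: [restricted]
```

The same re-checking would have to happen everywhere content is surfaced (search, the API, special pages), which is why the simplifying assumptions run so deep.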
True. I'm no expert in all of these regulations, but at the same time it seems like overkill to me. I'm thinking literally of my son's high school information system, which is so terribly bad that you can't get any information out of it, and it must be compliant with all these regulations. Very secure, right? But I do know a little about network technology, MPLS networks, tagging individual packets and so on, and scalability goes to hell. But if you could actually tag content with a user ID and an authorization level and filter things out like that, maybe that's some kind of technical solution; there's a small sketch of that idea after this passage. I'm probably trying to agree.

The government has this problem, period: keeping information secret in an information age. It's a problem across the board, and that's why we have control lists that will fill up a three-ring binder that's four inches thick. It's a fundamental issue, and I believe it's actually contrary to what the wiki was designed to do. I don't have a solution; I'm just trying to think...

If I may interrupt: contrary to what the wiki was designed to do in the sense that the wiki is designed to be as open as possible, and the capability to perform access control has never been designed into the requirements for MediaWiki. There are ways to do it, and so the message I have is that I'm actually not even advocating a change in MediaWiki. What I'm advocating is: is there an example of how MediaWiki, as an application, can be implemented with all of the underlying services and infrastructure, such that when an enterprise chief information officer is doing a trade study between the different packages they might select, MediaWiki is on the board, with the contingency that you're doing these other things outside of MediaWiki? Forgive me if I've given the message that I want MediaWiki to be a complete solution. What I'm saying is that this community, especially this enterprise MediaWiki community, should present some kind of finished product where the application lives in an overall design package that includes the server configuration and maybe some other things. What the corporate CIO doesn't want to do is take on an IT design project, and if this community could provide that design project, including MediaWiki in its present state, that would be a home run.

Well, as far as read control, it seems to me that the elephant in the room, so to speak, is Semantic MediaWiki itself, which doesn't have any read control; and that's not unique to Semantic MediaWiki, Cargo has the same problem. I think it would require a fundamental restructuring of the storage to enable that, which just seems like a complete non-starter to me. I don't know if you have thought about that at all.

Well, I haven't. Again, where we're at, and where I think my NASA colleagues are, is that we've got these things working securely in a portal environment. Is it fair to say that we create the portal and install MediaWiki in the portal? As a whole. Yes. You're in or you're out. Yes, you're in or you're out. Right. And sadly, when you apply the need-to-know principle, that cuts a lot of people out. Yes. I was just going to add that in our experience, what you just said about the need-to-know principle, it does end up cutting a lot of people out.
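Here is a minimal sketch of the content-tagging idea mentioned above: each content object carries a sensitivity label, each reader a clearance, and results are filtered by comparing the two. The labels, levels, and records are all hypothetical.

```python
# Toy attribute-based filter: content is tagged with a sensitivity
# label, users carry a clearance, and anything above the reader's
# clearance is filtered out. All labels and records are hypothetical.
LEVELS = {"PUBLIC": 0, "PROPRIETARY": 1, "CUI": 2}

CONTENT = [
    {"title": "Antenna length", "label": "PUBLIC"},
    {"title": "Coating chemistry", "label": "CUI"},
    {"title": "Vendor pricing", "label": "PROPRIETARY"},
]

def visible_to(clearance: str) -> list:
    """Return only the items at or below the reader's clearance."""
    return [c for c in CONTENT if LEVELS[c["label"]] <= LEVELS[clearance]]

print([c["title"] for c in visible_to("PROPRIETARY")])
# -> ['Antenna length', 'Vendor pricing']
```

The hard part, as the discussion above suggests, is not the comparison itself but applying it consistently at every layer where the content can be reached.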
I would actually say it ends up challenging management and administrators to reassess what that means, and that they actually needed to be including people who were originally being excluded by that whole system of multiple individual document-level permissions.

You're very right. "Need to know" is an organizationally interpreted statement. In the security world it's expressed as least privilege, basically. And it's fuzzy; that's the problem.

Cindy. First of all, there's a question from the YouTube audience, and I think it's been answered already, but they would love to hear Cindy's, which is why I'm reading this, or generally the Wikimedia Foundation's, comments on the security topic.

I think Brian did a very good job of covering that, and I just wanted to give my two cents, especially since I got named specifically. But yes, I agree that our solution to this problem for our government customers in the past has always been portal-based: a single wiki, with the wiki as the element of control. It wasn't designed to have a lower level of granularity for access control, if you really want to be secure. As Brian said, I don't think anybody is averse to having a more secure solution, with the understanding that base MediaWiki, of course, needs to scale to Wikipedia scale, and a lot of these security solutions would carry a lot of performance cost. But having a configuration, and there are multiple configurations with the wiki as the container, or the element of security, that have been certified as acceptable for government use; a containerized approach, preconfigured with extensions, pentested and secure, that could be a starting point for folks, is something the third-party community could come together and develop.

Since I have Brian here: I had an idea about how the content handler could be used to stop transclusion and that sort of thing, which I think is typically the largest hole people have for access control. That was just an idea I had, on the technical level, of how to address access when you're talking about the container being the page. Do you have any thoughts on that?

Well, you could do a lot of things with the content handler, but it mostly comes down to this: you're either using the built-in wikitext parser, or you're writing your own. So I don't really know if that would fix the transclusion problem, because either you have to fix it in the general parser, and then you're back where you started (which is actually something I kind of think we should do for other reasons), or you're writing your own parser, in which case that's a big effort.

But I think there are ways the content can tell the parser not to transclude, or at least I thought so. That's what I'm talking about. And it's probably not... it may get us from the PC era up to Web 2.0, but yeah.

So yes, the content handler can specify how transclusions work, but if I remember correctly, which I may not, I don't believe user state was included in that decision, so it wasn't something you could do per user. More generally, though, on this topic: keep in mind that things change. I was thinking about how, originally, I don't believe blocking users was even part of MediaWiki. Back in the day, and this was before my time, so I may have this wrong, if someone was vandalizing articles, you had to get someone to go in with shell access to ban them.
So now we have blocking, and now we have a lot of complex permissions, as they became necessary for Wikipedia as a group. There's no rule that MediaWiki can't change, right?

So I think when you get into discussions about fine-grained access control, and when you discuss the different layers of data protection that can be applied, it's important to think back to the origins of this. Think back to the 1950s, when you just had different stacks of documents: one set of papers might have been tagged at some level, another stack tagged slightly less protected, and these two pages here completely free to read. But they were stamped as a whole, right? The reason that we at NASA have gone with the whole-wiki approach is that we consider a wiki a document, a set of 10 or 50 pages or whatever, so it simplifies things for us. But if you want to go down the road of trying to split up access within a wiki, it becomes really challenging, because what is actually protected? If you have this object, this thing, this laptop or some GPS antenna: the fact that it's gray is not protected; the fact that it's 12 inches long is not protected. But the details of how the code is written to determine exactly where the space station is at that moment, or the chemistry behind how this component works, is protected. So how do you break it down to every little itty-bitty property within a wiki and somehow control all that? That's why it just blows my mind that you could ever get a solution for that.

I think the granularity, if we look at the regulations coming through here, is kind of at the document level, and it covers paper as well as electronic. So it's probable that the markings have to be done at the page level; whether it matters that a page links to something else at a different marking, I don't know, but that's probably the granularity, based on the regulations we're seeing come through. And the marking actually has to be electronically in the page.

I'd like to add also: when you start to get granularity, sometimes you can combine data in ways that are surprising. There's the usual example with the census. If you take a census: how many people of this religion are in this area? Maybe that's not sensitive. How many people live in this neighborhood? Maybe not sensitive. How many people have red hair? Not sensitive. How many people are over 30? Not sensitive. How many people over 30 of this specific religion have red hair? Maybe that is sensitive, because now you know that only your neighbor is over 30 with red hair, and now you know her secret religion, which is, say, persecuted. What I mean is that combining data can lead to things that don't seem sensitive at first but really are; there's a tiny sketch of this below. I think at the highest level this is a challenge of the age we live in. And I'm only trying to say that this problem will be solved, and there's an opportunity.

Sorry, it's funny that this comes up now, because it's something I was discussing with a Google colleague of mine. Have you heard of homomorphic encryption? I have not. So homomorphic encryption allows you to take two encrypted numbers, perform operations on them, and get an encrypted result, and that result is correct, although the machine performing the operation does not know what it is actually working with.
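Before the homomorphic encryption thread continues, here is a tiny sketch of the census-style aggregation risk just described: counts that are individually harmless combine to identify one person. All records here are invented.

```python
# Tiny sketch of the aggregation risk: each count alone looks
# harmless, but their combination singles out one person.
# All records are invented.
PEOPLE = [
    {"age": 34, "hair": "red",   "religion": "X", "name": "the neighbor"},
    {"age": 28, "hair": "red",   "religion": "Y", "name": "someone"},
    {"age": 41, "hair": "brown", "religion": "X", "name": "someone else"},
]

def count(**attrs) -> int:
    """How many people match all the given attribute values?"""
    return sum(all(p[k] == v for k, v in attrs.items()) for p in PEOPLE)

print(count(hair="red"))                # 2 -- harmless on its own
print(count(religion="X"))              # 2 -- harmless on its own
print(count(hair="red", religion="X"))  # 1 -- now it identifies the neighbor
```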
To continue the homomorphic point: 2 plus 2 equals 4, but the machine doesn't know it was handling 2 and 2; it was just adding something. So our idea, and it's pretty wild: you give private keys to each user and encrypt every single object in the information base with each potential user's private key. Then, for example, you have to log into the search engine, which holds your public key; it does homomorphic operations, gets results, checks whether you are entitled to decrypt those results, and only then shows them to you. We have no idea whether that is practical, but it would put the encryption and the access control on the individual object. I was reminded of this when, I think it was you, Greg, who mentioned tagging packages... packets, yes, packets. This Google colleague knows a lot more about systems engineering than I do, and he says it sounds very complicated, but maybe we could give it a try. There's a Wikipedia article on homomorphic encryption; you might want to have a look at that. Thank you, I will.

So, just a comment from the organizer that we are well into the red.

I just want to mention that homomorphic encryption is very, very cool, but it's also very, very inefficient. It's an active area of research, of course, but you're talking something like 20 minutes to add two numbers together. I don't follow it that closely, but it's not anywhere near efficient enough except in very special cases. Well, maybe we'll make it practical and viable. Maybe. What we need is a solution that meets the criteria and the controls, and the simpler and quicker, the better. We can't modify the controls as defined by NIST; we've just got to meet the requirement.

Well, we didn't want to end on anything that wasn't positive. Again, there are lots of MediaWiki sites being deployed in enterprise environments where the people who've done it have done all the hard work of meeting the site security plan. So I'll end by rephrasing my main hope: that this community can provide a template for how everyone could do that, without each having to make it a homework assignment for themselves. Okay, thank you very much.
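For reference, here is a minimal sketch of the additively homomorphic idea from the discussion above, using the third-party python-paillier package. This is one possible library choice for illustration, not anything the speakers used, and as noted in the discussion it is far too slow for general workloads.

```python
# Minimal sketch of additively homomorphic encryption using the
# third-party python-paillier package (pip install phe). The party
# doing the addition never sees the plaintext values.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

enc_a = public_key.encrypt(2)
enc_b = public_key.encrypt(2)

# Addition happens on ciphertexts: "2 plus 2 equals 4, but the
# machine doesn't know it was handling 2 and 2."
enc_sum = enc_a + enc_b

print(private_key.decrypt(enc_sum))  # -> 4
```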