All right, well, let's kick things off here. Welcome to one of the more enigmatically titled sessions here. I'm Seth Gregory, the practice lead at Navigation Arts; I run the Drupal team there. To my right over here is Ted Slasinski, one of our senior developers, and we're here to talk to you today about how Drupal secured the defense sector. What it actually is, for those of you who've read the description, was a really big Drupal intranet build for a defense sector company.

We do have an alternate title for the presentation, too. In the security sector, in the defense sector, there are a lot of things they don't want you to talk about, and incidentally, one of those things here is the name of the company. So we won't be talking about the name of the client, and per their request, we won't be able to walk through or show you the site, but we're going to talk about a lot of interesting stuff that we ran into while we were developing it.

We're going to follow a basic outline here: the client and what we actually can say about them, some of their background, and the problems they were dealing with. We'll take a look at some of the business aspects of the project and the details of the solution, so we'll break it down into a business analysis portion, and then we'll get into the challenges. It's a little unfair to say "the" challenges, because like every project, there were an awful lot of challenges here, but we'll get into the technical weeds with a handful of the ones that really stood out. As a quick overview: legacy browser support in a big organization, the challenges of working in a super ultra-secured environment for the networks and servers, performance concerns and issues, user authentication and single sign-on, and then the big one, a separate extranet version of the site.

So let's jump into the client. Who is this mystery client? Well — top secret. But we do know they are a defense contractor and a massive multinational corporation: 120,000 or more global employees, including contractors and people who are not full-time employees but who need access to this information. They're composed of many discrete business units across the organization — people who deal with different areas of the sector — and each of those business units had its own intranet. A lot of times, more than one: they might have an HR intranet and a developers' intranet and a place where they store all their documents. It was just massive, with no way to easily share information across all of these disparate parts of the organization.

So change was needed. Clearly, they knew it, we knew it. They came to us as one globally distributed international corporation with more than 60 existing individual intranets serving 120,000 people, and across all of those, zero unified design, branding, or functionality common to the enterprise. We heard an anecdote from them that they actually had a culture of one-upmanship among their designers: one of these groups would put up a new intranet, spend all this time and effort on a new design that had nothing to do with the corporate branding, and say, hey, check out this cool site. Then somebody else would go and do one better. They were spinning their wheels, spending a lot of time doing this, and nobody could find anything.
So they needed a solution, and the solution was one intranet to rule them all — to take the place, really, of all these individual siloed sites. And it was not a new idea there. They had actually tried three times before in the past decade, and every single time, before it gained the critical mass to take off, it failed because of a lack of consensus. A large number of stakeholders are involved here, and I love getting Lord of the Rings stuff in here, but it's kind of apropos, because whenever things are centralized — and I experienced this in the world of academia as well — there's this hesitation from the individual groups: they don't want to lose their identity or feel like they're being strong-armed by corporate into a single solution. They were having a lot of trouble overcoming that, so they needed some help, and they engaged us at NavArts to help them get through it.

The company really had five main stated objectives that they came to us with. They wanted to facilitate internal communication and employee engagement. They wanted to improve productivity, because who doesn't. They wanted to reflect and confirm their internal corporate culture, reduce information silos — like we talked about, all of those 60 separate intranets — and, along with that, improve knowledge sharing and management. Something awesome is happening over here, and it's on that intranet, but the people over there have no idea how they can take advantage of it, and it's really important to share that knowledge within the corporation.

To do this, they knew that they needed a CMS, but which CMS were they going to go with? Drupal was not a given at all in the early stages. Like many of these massive corporations, it's very much a Windows world over there. They have a heavy existing investment in SharePoint, and in fact some of those earlier failed attempts were based on SharePoint. They also have a very active relationship with Adobe, with CQ sites across the organization, so CQ5 was very much in the running. And part of it really just came from having very little prior exposure to Drupal. They were skeptical of its ability to drive a massive enterprise intranet like the one they were envisioning, and they had concerns about security. I mean, is this safe? From what I hear, Drupal is this free open-source software that people from the internet work on and contribute to as a community.

So what we did was take the list of over 500 requirements we had to pore through and figure out the right solution. We knew that for what they wanted, Drupal was the right way to go, but we had to convince them it was up to the task. One of the things we do at NavArts — because we're not just a Drupal shop; we have a Sitecore practice and a Java practice and a few other CMSs that we deal with — is CMS evaluations for clients: look at their requirements, ask what's going to be best for each one, and score it at the end. Drupal came away as the clear winner from that bake-off, and that gave it a big boost. They were leaning toward Drupal, but it was still a bit of an uphill climb, so there was still some work to do to really secure this as a Drupal project.
So, Dries the other day talked about Drupal standing on the shoulders of giants, and I think in the enterprise space, we really stand on the shoulders of some early adopters and other big mega-corporations that use Drupal — pointing out big clients like NBCUniversal and Tesla. And obviously the White House is something that's always brought up. I mean, if it's good enough for the President, it should be good enough for you. So that really helped us out.

But we still had to convince their security team that Drupal was secure. The Drupal security team themselves really do an awful lot of work and go a long way toward answering those questions for us. They're constantly watching and responding, along with the community, with alerts and patches and things that help make this less of an issue than it is with some other open-source CMS products. But despite that, all software presented to this company for installation — whether it's Apache or MySQL or an individual Drupal module — needed to be vetted and approved, down to the actual versions. They had to run static analysis against it and things like that, so it was pretty intense.

And we held a lot of rounds of demos with the stakeholders, and there were a lot of stakeholders involved from each of these different groups. A lot of times people say, oh, I hear Drupal is really good, but I don't know what it even is or looks like. Doing that for people around the world means screen shares and in-person meetings, and I think after they really got a hands-on feel for it and for what the authoring experience would be like compared to what they might be used to with SharePoint, et cetera, they were not only accepting of Drupal as a solution, they were really excited for it. So we were psyched about that.

With any CMS, the content is centrally important — it is a content management system — and they had a ton of content and a lot of opinions on how it should be structured and how it should be displayed. In that list of 500 requirements I was talking about, contributed by dozens of stakeholders, some requirements were mutually exclusive. They conflicted with each other and had to be worked through. Even with a client who's great to work with, when you have that many people and that much consensus to gather, herding cats is the term that comes to mind. So before the technical stuff was even a possibility — a big shout-out to my colleagues back at NavArts, the designers, the IAs and BAs and project managers who went through round after round of design and client discussion to make that happen.

The key for us to getting this past the stage where it had failed so many times before was really making a system that was painless to manage — making it okay for some of these people who had their own intranets to lose some breadth of functionality if the quality and depth of the functionality they were gaining as a whole made up for it. We had to standardize on a set of content types that would accurately represent all the content they had from all the different business units. We had to work out how those pieces of content would interrelate with each other, and the taxonomy. It was a long process. And personalization — I'm sure all of you out there building sites are hearing this more and more: we need personalization on our sites, we need content that finds the user.
So for instance, on the homepage, they wanted content that was relevant to the logged-in user's geographic location and business unit. Where do they work? Show me blog posts from my local leadership — but corporate also wants to be able to push communications out to anybody across the organization, and there was just no way to do that before.

And finally, they really had this need for what they called one-click functionality. Things that a lot of us take for granted and have had for years, like: I'm viewing an event, let me click this button to add it to my Outlook calendar. Well, that was hard for them, because there was no central events calendar; there was nothing like that. A really good example is the timecard system for reporting time. There were more than 15 different timecard systems across this organization, and you had to dig to find which one, and where. So even something as simple as providing a single link that said "timecard" and used the user's attributes to figure out which system to redirect them to was a huge win (we'll show a quick sketch of that redirect at the end of this section). It's really bad when you lose productivity trying to find the place where you're supposed to report the hours that you spent working. So — I just took a drink at a really awkward time, but I'm going to introduce my colleague Ted, who'll come up and talk about the presentation stuff.

Thank you, Seth. So, the presentation. They wanted a personalized experience: things like weather based on facility location, and news based on location or business area. We also gave employees transparency — the ability to view content from other business areas — to support the company's effort to create a one-company culture. We were able to give their editors an organized library for their pictures and video, allowed for consistent presentation when they're used in content like articles or blog posts, and gave them the ability to search the library by field information compiled from the EXIF data.

The power of CTools provided granular presentation of content. We used Panels, which integrates great with Views. So Panels is great; it provided a number of benefits. In order to provide dynamic, user-centric content, we used context-based panel panes that display information based on user attributes like facility or business area. We also created custom Panels layouts so we were able to control the markup and the CSS. This allowed us to create responsive layouts with semantic HTML5 markup on the client side, and it also kept things easy for administrators and content publishers in the UI while giving them the customization and flexibility they needed.

On the front end, we built responsive themes so that employees are now able to easily view the intranet on company phones and tablets. We simplified the whole theming process by preprocessing style sheets with Sass and Compass, and we designed for modern browsers with graceful degradation, using JavaScript libraries like Modernizr and HTML5 Shiv. We created view modes with custom templates for displaying Section 508-compliant content, so any time we showed a summarized display of node content, we were able to reuse it anywhere — in Views or in Panels.

So, the challenges. They needed legacy browser support. They had complex servers. They had detailed specs for performance. They had specific requirements for authentication. And they wanted an extranet: they wanted users outside their internal network to be able to view content from the site.
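Before we dig into those, here is the one-click timecard redirect from earlier, sketched in Drupal 7 terms. This is a minimal illustration rather than the production code: the module name, the user field, and the URL map are all hypothetical stand-ins.

```php
<?php

/**
 * Implements hook_menu().
 *
 * Registers a single /timecard path for the whole intranet.
 */
function example_timecard_menu() {
  $items['timecard'] = array(
    'title' => 'Timecard',
    'page callback' => 'example_timecard_redirect',
    'access callback' => 'user_is_logged_in',
    'type' => MENU_CALLBACK,
  );
  return $items;
}

/**
 * Sends the user to the timecard system for their business unit.
 */
function example_timecard_redirect() {
  global $user;
  $account = user_load($user->uid);

  // Hypothetical user field, populated from the ADFS claim attributes.
  $items = field_get_items('user', $account, 'field_business_unit');
  $unit = $items ? $items[0]['value'] : 'default';

  // Hypothetical map of business units to legacy timecard systems.
  $systems = array(
    'aeronautics' => 'https://timecard-aero.example.com/',
    'corporate' => 'https://timecard-corp.example.com/',
    'default' => 'https://timecard.example.com/',
  );
  drupal_goto(isset($systems[$unit]) ? $systems[$unit] : $systems['default']);
}
```

The point is less the code than the pattern: one stable URL everybody can bookmark, with the per-user routing decision hidden behind it.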
So, unfortunately, even though the platform was designed to gracefully degrade, at launch the employees' issued computers had Internet Explorer 8 installed by default, and they did not have the Windows permissions to upgrade. The company was aware of this and working to roll out IE updates, but we weren't able to expedite this process before launch. So we had to include full support for every browser, and that presented some interesting challenges. For example, Video.js was used for handling video, and it only supports IE9 and up, so we did have to provide a fallback for IE8 during the period of time that employees were on that browser.

The servers were also a little different from the normal sites that we work on, where we can normally SSH right into the servers. Not being able to directly SSH in, we had to go through several authentication layers — more than one login — to get in. Initial testing of code was done on our own company servers, and we didn't have the ability to replicate some of their exact authentication and hardware. We used install profiles, Features, and Migrate scripts to provide content, so all developers were able to see the same thing. And we tested the basic functionality on their machines initially as a proof of concept, and once we knew how their systems worked, we were able to build our functionality on top.

This is also different from most sites we build in Drupal: all user traffic was authenticated. So we weren't able to do full page caching, and we weren't able to use a reverse proxy like Varnish. Another thing: this site would be the start page for all of their web browser installations. This created an interesting scenario where they had these massive login waves whenever users logged into their machines at the start of the business day in a certain time zone. So I'm going to pass it back to Seth to tell you how we handled that.

Thanks, Ted. So yeah — we got some traffic statistics from them, your average day, your average week of traffic, and all of that kind of went out the window once they said, well, we're planning on making this our start page for all of our users. Then it became a fire drill, right? 8 a.m. East Coast comes online, and 20,000 users hit your site in the course of five minutes. Even if they're just going to check their email, they're opening up the web browser, they're authenticating and logging in. So we had to think about performance tuning quite a bit.

Really, with any Drupal site, the database backend — even if you're using load-balanced web nodes — becomes kind of a single point of failure and something that everything relies on. So we had a dedicated MySQL server with a very beefy chunk of RAM allocated, so that we could keep as much in memory as possible, and we worked with their server people to set up and run load tests that would simulate some of these massive login waves in short periods of time. While those were running, and after, we looked at log files and used some tools — some of you might have used the MySQL Tuner Perl script or the MySQL Tuning Primer. I think it's going to be really hard to see, but this is an example of the kind of output that you get from something like that: it will show you the percentage of reads to writes you're seeing, things that might need indexes added to them manually, and so on, so you can really hone in on where the issues are and how your MySQL config is doing, and tweak it to improve performance.
So we did a lot of work just tweaking what we had and adding more load-balanced servers. And the load-balanced web nodes — anybody who's done load-balanced Drupal before knows there can be tricky parts. You can have a shared files area; ours was on a mounted NetApp backend. All kinds of fun things to think about with the load-balancing algorithms, round robin versus sticky sessions.

Continuing with alleviating some of the load from MySQL: we used Memcache. We moved as many of the cache tables as we could into Memcache, so they were cached in memory instead of hitting the database, and distributed it among all the load-balanced web nodes so that everything was consistent across them — and if one of them dropped off, the rest would pick up the load (a sample of the settings.php wiring for this appears below).

Perhaps one of the biggest performance gains: Ted mentioned that we weren't able to put Varnish in front of this and we weren't able to turn on Drupal's page caching, but the Panels Hash Cache module is awesome. It basically lets you take each of those individual little panel panes on your page and cache them for some subset of users, based on a key that you define. So you can say, okay, everybody whose home facility is Los Angeles and who's in the Drupal department at this company is going to see this pane — for that combination of users, serve the cached copy instead of regenerating it every time. That cut down page load time significantly, particularly for things like those five-minute login waves, where the content hasn't changed and everybody is getting served the same thing.

Another thing that really helped with alleviating the load from MySQL was actually an unintended side effect of using Search API. A lot of the views on the site surfaced content that needed to be searchable or filterable. For instance, on the list of news articles, you might say, show me all the news articles that have to do with Drupal and the Los Angeles facility, and it's going to do a faceted filtering of those and cut down the result set. If that view were just a normal node view, it would go against the database — they're constantly hitting it, it's not cached, and it drags down your performance on the MySQL side. If you use Search API to build those views, they hit the Solr index instead, and that alleviates some of that pressure from your MySQL database.

Now, initially, the reason we went with Search API is that this company actually uses a search technology from a company called Coveo across their organization, and they do have a desire and a need to incorporate that down the line in a later phase; it wasn't going to make it, either in time or in budget, for phase one. But because we implemented Solr via Search API rather than directly with the Apache Solr module, ideally, if a Coveo Search API plugin is created, we can just swap that in: the Search API will be the same and the views will work the same, you've just swapped one backend for the other. So that's something that we're looking ahead to for a future phase.
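Backing up to the Memcache layer for a moment: with the Drupal 7 Memcache module, the settings.php wiring looks roughly like the following. The server addresses and pool layout below are placeholders, not the client's actual topology.

```php
<?php
// settings.php — route Drupal's cache bins through Memcached via the
// contrib Memcache module. Addresses below are placeholders.
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';

// cache_form must stay in the database: form tokens need persistence.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';

// One memcached instance per web node, all pooled together, so every
// load-balanced node sees the same cache and the pool survives any
// single node dropping out.
$conf['memcache_servers'] = array(
  '10.0.0.11:11211' => 'default',
  '10.0.0.12:11211' => 'default',
  '10.0.0.13:11211' => 'default',
);
```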
So the next really big challenge was user authentication. This is actually something they were told would not be feasible by several vendors. It goes along with that same theme of making this painless for people. Especially at a company like this, where they've got special badges and three levels of VPN access and credentials coming out of their ears, they didn't want people to have to remember another username and password or manage that separately. So: no separate Drupal credentials.

They wanted to use claims-based authentication. "Claims," for anybody who hasn't worked with them, is basically Microsoft's term for SAML 2.0. They wanted to authenticate against that identity provider, and they wanted all of these accounts to be either pre-provisioned or automatically provisioned. They also wanted to leverage this so that it was not only authentication but helped drive the personalization, too. When you get that claim token back from Active Directory, from the ADFS server, you get not only the authentication information but any user attributes that come along with it for your application. With that, we can grab the user's name, email address, where they work, what business unit they work in — and on each login, anything that differs from what's in the Drupal user object gets overwritten and updated. So you go off and get married, or I guess divorced, and your last name changes: it updates in the system automatically.

Single sign-on is kind of something where, whenever anybody says SSO, you shiver a bit and say, okay, well, what do you mean by that? Single sign-on gets confused with federated login a lot, but really, when so many of these sites are using ADFS to do their authentication, what they mean is that they want it to work with integrated Windows authentication. You've already logged into your laptop, you're sitting at your desk, you're already in the domain; when I hit this site, I shouldn't even see a login form — it just knows who I am. So it was important for us to support that, and it makes it so that it's not this other site I'm going to; it's just this thing that's already there that I don't need to think about. Again, really pushing this idea of a low barrier to entry. Not that people can't handle it — it's just extra stuff to think about. Don't make me think.

We used SimpleSAMLphp to help us do this with ADFS. We wanted to keep it simple: no active connections to Active Directory, no LDAP queries. Not only is that simpler, but live connections mean more security to think about, and they wanted to keep this secure — we don't have privileged access to the directory. We hit the identity provider when we first try to log in, we get back a token with the claim and the attributes, and we parse it and do what we need to (a rough sketch of this flow follows below). If folks aren't generally familiar with the way this kind of authentication works: the service provider is your website, the identity provider is the ADFS server, and you create a trust between those — there's a trusted connection there. If the identity provider says, yep, this person is who they say they are, then the service provider authenticates you based on the information sent back, letting you log in.

People did still have Drupal-side user accounts — they were transparently logged into them behind the scenes based on the information that came back — but people never had to think about that. Another really big win is that it removes the burden of managing the users from the Drupal side. You don't have to go in and deactivate Drupal user accounts when a person leaves, because they're just not going to be able to get through the authentication anymore. So even though they still have a Drupal-side user account, and there is a process to go and clean those up in batches, it doesn't need to happen immediately, because nobody can get into the account anymore.
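To make the login flow concrete, here is a rough sketch of the pieces just described, assuming SimpleSAMLphp is installed alongside Drupal with a configured 'default-sp' authsource. In the real build, the simplesamlphp_auth contrib module handled much of this plumbing; the claim URIs, paths, and module name below are illustrative, not the production code.

```php
<?php
/**
 * Implements hook_init().
 *
 * Gate every page: anonymous visitors are bounced through SimpleSAMLphp
 * to the ADFS identity provider and logged into their Drupal-side
 * account transparently.
 */
function example_auth_init() {
  if (user_is_logged_in() || path_is_admin(current_path())) {
    return;
  }
  // Assumed install location of the SimpleSAMLphp library.
  require_once '/var/simplesamlphp/lib/_autoload.php';
  $as = new SimpleSAML_Auth_Simple('default-sp');

  // With integrated Windows authentication the user never sees a login
  // form: ADFS validates the domain session and redirects straight back.
  $as->requireAuth();
  $attributes = $as->getAttributes();

  // Hypothetical ADFS claim URIs for this deployment.
  $name = $attributes['http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name'][0];
  $mail = $attributes['http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress'][0];

  // Core helper: creates the account on first login, logs in on every
  // login, with no Drupal-side password involved.
  user_external_login_register($name, 'example_auth');

  // Overwrite profile data from the claims each time, so name changes,
  // transfers, and so on flow through automatically.
  global $user;
  user_save(user_load($user->uid), array('mail' => $mail));
}
```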
So that brings us to the big Mount Everest of our challenges: the extranet. This was actually a really late-breaking requirement. We had gone through several rounds of looking at all their requirements, and they said, oh yeah, by the way, we need this other site. It's going to be an extranet that's going to have basically the same content as our intranet, but it's going to be separate — for contractors, for people who aren't part of our internal network, for people who aren't on VPN. So what does that need to look like?

Content from the intranet needed to be available in "real time" — put it in quotes, close enough. And the biggest difference: content that was entered into the intranet and marked as proprietary — containing stuff that should really only be accessible to full employees of the company — shouldn't be accessible on the extranet. So we're thinking, okay, we can do that in Drupal pretty easily with views and filters; we can use some kind of context to tell where you're coming from and show you this content but not that content.

Then we talked more with them. Okay, we need a separate user base — we don't want the user tables to be the same. Okay. And then: we need complete system and network separation. So not only can you not display any of that content, but the networks and the servers the extranet site is on can't have any way to ever accidentally access that data; it can't even be there at all. So how are we going to do that? They started talking about database replication — well, you don't want to get into that with Drupal. And finally, they wanted bidirectional sync. Somebody sitting at her desk in the company, on the intranet, sees a cool news article and leaves a comment. A contractor sitting at home is looking at the same news article — it's not proprietary, it's there — and sees her comment pop up. He can leave his own comment on this completely separate Drupal site, and that needs to pop up on her screen too. I mean, at this point, we're just — okay, we're going to let that simmer for a second.

So how can we make this work? As developers, there are a few things that really speak to us, one of them being free beer, but challenges are another. So: challenge accepted. We were actually told by several of our partners and experts that approaches like this are not a good idea, that we should just go back and get them to change their requirement. We said, okay, well, I don't know if we can do that, but I think we can make this work.

The intranet is the point of entry: people are entering their content on the intranet. What happens when that happens — how do we get it over to this other site? It's a node, we're saving a node, so we just send it over with a custom web service endpoint. Okay, but we need to make sure it actually gets there; we can't just send it out and — it timed out, it never got there, sorry. So we've got to put some kind of message queue in there. We put RabbitMQ into the mix as a message queue to hold those messages and make sure they get there. And then, once they're in RabbitMQ, we're going to have to have some kind of background processes that run — that put things into the queue and process them on the other side.

I wish I had a laser pointer or something for this, and I don't know how visible it is from the back, but this is the basic architecture of how we envisioned this working. The left side is the intranet zone; the right side is the extranet zone. The primary driving principle here is that the extranet zone cannot ever be allowed to reach into the intranet zone; it all has to flow in the opposite direction.
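A sketch of the intranet side under the assumptions just described: queue the work locally inside the node save, then let a background worker drain Drupal's queue into RabbitMQ using the php-amqplib library (assumed to be loaded via Composer autoloading). The queue name, the "proprietary" flag field, and the connection details are hypothetical.

```php
<?php
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

/**
 * Implements hook_node_insert().
 *
 * Queue the sync job locally first, so the editor's save returns
 * immediately even if the broker is briefly unreachable.
 */
function example_sync_node_insert($node) {
  // Hypothetical "proprietary" flag field; restricted content never
  // leaves the intranet zone.
  if (empty($node->field_proprietary[LANGUAGE_NONE][0]['value'])) {
    DrupalQueue::get('extranet_sync')->createItem(array(
      'op' => 'put',
      'uuid' => $node->uuid, // Provided by the UUID module.
    ));
  }
}

/**
 * Background worker: drain the Drupal queue into RabbitMQ over AMQP.
 * Note the direction — the intranet connects out to the broker in the
 * extranet zone, never the reverse.
 */
function example_sync_drain_queue() {
  $connection = new AMQPStreamConnection('rabbit.extranet.example', 5672, 'user', 'pass');
  $channel = $connection->channel();
  $channel->queue_declare('extranet_sync', FALSE, TRUE, FALSE, FALSE);

  $queue = DrupalQueue::get('extranet_sync');
  while ($item = $queue->claimItem()) {
    // delivery_mode 2 marks the message persistent on the broker.
    $message = new AMQPMessage(json_encode($item->data), array('delivery_mode' => 2));
    $channel->basic_publish($message, '', 'extranet_sync');
    $queue->deleteItem($item);
  }

  $channel->close();
  $connection->close();
}
```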
So you're an editor, and you go in and create a news article on the intranet — that's the little blue boxes where it says Drupal. When you save it, the normal things happen: it gets entered into the database as a new node. It's totally asynchronous, so control is returned to the user; as far as you know, you've done your job. But at the same time, it's packaging up what you just did, using Drupal's Queue API, into a queue table and dropping it there, ready to go over the wire. Background processes are watching that; they see the item, pull it out, and send it — using AMQP, the transport protocol that RabbitMQ uses — over to the RabbitMQ server, which lives in the extranet zone. So the intranet is reaching into the extranet, not the other way around. Background processes are running there too. One sees something land on that queue, pulls it off, and generates a web service call to itself, using mostly what comes packaged along with the Services module. The node resources work great — we had to tweak them a little bit, but a lot of that stuff just came out of the box with Services. That generates the web service call and creates the node, and okay, now it's there on the extranet. The same thing happens if you delete a node. And the same thing, by the way, happens in reverse: if you were to create that comment on the extranet, it would go into the RabbitMQ queue on the extranet's side, and the intranet just checks that queue from where it sits, so the extranet never has to reach out to the intranet.

Now, it turns out people do a whole lot more than just create nodes. This is one of those things where you think, oh, this is straightforward, and then you realize there are a lot of edge cases you didn't think about. So we created a table here that basically says, okay, what happens when you insert a new node? We won't go through every cell of the matrix, so don't get too scared, but: if it's unrestricted and it's meant to be on the extranet, it generates a PUT to push it over there. Side note — people who've worked with Services before might ask: why PUT, why not POST? Well, we had to use UUIDs to correlate this end and that end, because the node IDs are going to be different; these are totally separate Drupal installations. So how can we correlate the story on the intranet and the story on the extranet? UUIDs — and that required us to use PUT. If you're creating a new piece of content that's unrestricted, it's going to PUT it over there; creating a new piece of content that is restricted does nothing.

But there's more here than just creation and deletion and editing. Maybe you pushed out a news article and marked it as unrestricted, it went over to the extranet, and suddenly you realize, oh, I probably shouldn't have mentioned that new secret project, and you have to run in there and quickly change it to restricted. That needs to know to generate a delete message to the extranet as well. So there are a lot of these edge cases that we had to think through.

About files: one of the things we really had to think about is, how do you serialize a node? How do you get its entire object graph — everything the entity touches — and send it over? We've got embedded images in our content; do we send all of that when we send the node over? Some of those images might contain proprietary content as well. What we ended up doing was saying: when somebody uploads a file, we sync it over at that point and assign it a UUID, and then when the node that references it gets synced over, if the file's there, it's there; if it's not, it's not. And there are checks so that you can't create a story that references something that's proprietary. It's complicated — that's the basic idea.
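On the extranet side, the worker's job is to replay each message against the site's own Services endpoint. Here is a sketch of that dispatch, using core's drupal_http_request(); the endpoint path is a placeholder, entity_get_id_by_uuid() comes from the UUID module, and, as described above, the real build tweaked the Services node resource so a PUT keyed by UUID acts as create-or-update.

```php
<?php
/**
 * Extranet-side worker: replay one sync message against the local
 * Services endpoint.
 */
function example_sync_apply_message(array $data) {
  $endpoint = 'http://localhost/api/node'; // Placeholder Services path.

  switch ($data['op']) {
    case 'put':
      // PUT rather than POST: the two sites have different node IDs, so
      // the shared UUID is the only reliable correlation key, and an
      // idempotent "update or create" keeps the sites converging even
      // if a message is replayed.
      drupal_http_request($endpoint . '/' . $data['uuid'], array(
        'method' => 'PUT',
        'data' => json_encode($data['node']),
        'headers' => array('Content-Type' => 'application/json'),
      ));
      break;

    case 'delete':
      // Resolve the UUID to this site's local node ID before deleting.
      $ids = entity_get_id_by_uuid('node', array($data['uuid']));
      if ($nid = reset($ids)) {
        drupal_http_request($endpoint . '/' . $nid, array('method' => 'DELETE'));
      }
      break;
  }
}
```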
This is just speaking to some of those edge cases; there were a lot of things we had to do in addition, things we didn't think about at first. Besides making sure it's not proprietary, you kind of have to remove all the Workbench and scheduling attributes from the node, because that got totally messed up when it was sent over to another environment — and there's really no reason to have draft content over there anyway. If you unpublish content, there's no benefit to having unpublished content sit on this marionette-controlled server that nobody's actually logging into to edit content on. The challenges of having a separate user base are pretty significant, too: nodes can't be owned by the same users they had on the intranet, so everything got set to the anonymous user when it's sent over. We used JSON encoding to serialize nodes and send them over, and rsync commands to send the files over.

One other thing we had to take into account was full synchronization. This all works great once both of these sites are bootstrapped and up and running, but at some point, everybody was putting content into this one site and we didn't have an extranet running yet. So we had to have some full-sync process — used not only in that sense, but also if something ever goes wrong and they need to make sure the two are in sync once again; that full sync will take care of it. It's super tricky with entity references: chicken-and-egg problems, circular references where you have to half-sync something over and then come back and do it again. It's a fun problem to figure out.

Finally, the comment sync — that bidirectional syncing. Again, really tricky because of the separate user bases. We could do the same thing and set comments to be owned by anonymous, but if you're looking at a defense company's intranet and all of the comments on the stories, you don't want to see a whole lot of comments by "anonymous," with either a lowercase or a capital A. So instead we added a field to comments: when somebody leaves a comment, it grabs their user attributes on whichever environment they're on and puts them into a static field, and when we display the comments, we display that field instead of the actual user account attributes. We also had to make sure that the sync processes would only sync things like comments back. We never wanted to run into a situation where — even though people weren't editing content directly on this extranet site — something accidentally saved there would overwrite content in the opposite direction. So node content only flows in one direction, and comments sync bidirectionally.

And finally, the background processes that run and enable all this. We actually used the Background Process module in Drupal, which spawns Apache threads that can take care of background processing. It's got its own keep-alive mechanism, but we found that sometimes the processes didn't stay alive, for whatever reason. The AMQP library for PHP is great, but it's not the most robust implementation of AMQP, so there were some issues there. And then sometimes it's just that the admins on their side rebooted the server — those workers were running as Apache threads, and now they're not anymore. So we have a cron process that checks the health of all that stuff every so often. Rather than blindly starting the services, we actually changed it so that people would set a flag for whether each one was intended to be running, and if one wasn't running but was intended to be, it would restart it.
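That restart logic, sketched as a cron hook. The flag and heartbeat variables here are hypothetical plumbing; background_process_start() is the Background Process module's function for spawning a callback on its own Apache thread.

```php
<?php
/**
 * Implements hook_cron().
 *
 * Restart any sync worker that is flagged as "should be running" but
 * has stopped — for example, after an unannounced server reboot.
 */
function example_sync_cron() {
  foreach (array('amqp_consumer', 'queue_drainer') as $worker) {
    $intended = variable_get('example_sync_' . $worker . '_enabled', FALSE);
    if ($intended && !example_sync_worker_alive($worker)) {
      watchdog('example_sync', 'Worker %worker died; restarting.',
        array('%worker' => $worker), WATCHDOG_WARNING);
      // Background Process module: spawn the callback on its own thread.
      background_process_start('example_sync_run_' . $worker);
    }
  }
}

/**
 * Hypothetical liveness check: each worker updates a heartbeat variable
 * as it loops, so a stale heartbeat means its Apache thread is gone.
 */
function example_sync_worker_alive($worker) {
  $last = variable_get('example_sync_' . $worker . '_heartbeat', 0);
  return (REQUEST_TIME - $last) < 300;
}
```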
There's a whole UI in the site for looking at the status of all this — how many nodes need to be synced over, what the status of these processes is. We actually wrote this for ourselves while we were debugging, but made it pretty for them, so that from Drush they could run something and get a quick heads-up on the status of everything.

So, the result of all this: we launched last September, at the end of September, to praise pretty much unanimously across the organization. At the end of day one, the VP of communications sent out a note to everybody on the team — not just our team but the internal team at the company — that this was a home run. It was really the first time they were able to garner that consensus across the organization and have that unified communications platform that meant so much to them. It resulted in a greatly simplified experience for all of the employees to get to and see what they needed — and not just for HR stuff, but for figuring out what people across the organization were up to. There are a lot of things they should be proud of that they're doing across the organization, and there was really no way to communicate that to people before. But they also recognized — because the email that came shortly after that first one said, let's look forward to 2.0 — that it's only the first step. They've got a long list of enhancements, and now that they've seen what this platform can do for them and how easy it can be to manage this content with Drupal, the sky's the limit. We are continuing to work with them on really figuring out what that vision looks like and making it a reality for them as well.

So that's about all I've got. There's a lot of stuff that we weren't able to touch upon, and a lot of things that were very lightly touched upon, because it's just such a large project. If you guys have any questions, I'd love to answer them here, and if you think of anything down the line, feel free to find one of us or send us a tweet or an email, or come look for us. And this isn't really the place to come to find a job, maybe, but if people are looking, we are always looking too, and we're always working on cool, difficult projects like this. So thank you.

There we go, that's better. First question: you guys were talking about using Memcache — did you do an analysis of whether you wanted to use Memcache or Redis? Did you consider Redis?

We did. This is one of those mega-corporation things: we had already had their security team go through and vet and approve Memcache. If we went back and said, you know what, I think for this instance Redis is really picking up speed, we'd like to use it and try it instead — it should be pretty much a drop-in swap — I don't know if we can ask them to go through all that again. Phase two. Not the company — phase two of the project.

Next question: you folks mentioned that during the sync process you couldn't use a POST request with UUIDs, and I was just wondering why that is.

Sure. It's because, with UUIDs in Services, the POST method is used to create content, while the PUT method is used to create and update content. You didn't necessarily know what the status of that content was on the other site, so it was just easier to basically say "update": if it's there, it updates, and if it's not, it will be created.

Another question: just a quick one on the option in the Views UI that says something like "use slave server if possible." I've always wanted to check that box and have a situation where I could check it, but I've never run into it yet, so I was just wondering if that's a box you were able to check and use on some of the views.
We don't actually endorse slavery, even for MySQL. So no, we actually didn't use that — but now you're making me really curious; I'm going to have to go click through and find that. Sorry, that's not the answer you were looking for, but no, we didn't implement it with this particular build.

Next question: you mentioned SAML 2.0, Windows auth, and then, once logged in there, connecting into Drupal with SAML — any thoughts on that part? I mean, the rest of it all makes sense; it's doable.

Sure — I didn't really go into much detail on that. We used SimpleSAMLphp, which is a separate piece of software that runs alongside Drupal and handles those handshakes with the ADFS or SAML server, the IdP. You set it up completely outside of Drupal and establish the trust, the key exchange, and all of that, and once that's working, you can test that all the attributes come through and the authentication works. There's actually a really great Drupal module called simplesamlphp_auth that's an interface to that: you point it at the local installation of SimpleSAMLphp, and then you can tell it to override the Drupal login form, only allow people to log in through it, or only allow certain user classes to log in with their actual Drupal credentials. We ended up leaving the user form so that super admins could log in, or somebody could get in if, for whatever reason, ADFS was down. And then we had a check in hook_init that basically said: hey, are you authenticated? If not, redirect to this URL that bounces you off to SimpleSAMLphp and logs you in. Then there's some custom code for when it comes back with that token and all the attributes, which populates the Drupal user object with all the new information. And it's the same whether the person is logging in for the first time and doesn't have an account, or the third time, or the hundredth time — it's all transparent to them. Cool, thank you. Sure, no problem.

Next question: hi, good afternoon. You mentioned that you're leveraging some of the performance from the Apache Lucene — the Apache Solr — implementation. Would I be able to leverage the same thing from an alternate solution like the Google Search Appliance?

So, the Google Search Appliance, to the best of my knowledge, can't be integrated with Search API. The Google Search Appliance is kind of this separate thing: you set it up and you point it at your site, and there are some integration modules that will more tightly integrate what comes back from it into your site — I know you can get JSON back and parse it out and do all kinds of stuff like that — but it's really more that the Google Search Appliance goes out and crawls your site and creates an index, and then it's searchable the way Google intends it to be. Search API has a few backend plugins — there's Solr, you can just use a database to do it, there are a few others — but you can't integrate the Google Search Appliance with your views, and that's the big thing. Solr is great; the Google Search Appliance is good for really big companies who just want search. I'm going to get myself in trouble — I'll stop now. (I didn't do it; that's not my music playing.)

Yeah — yep, absolutely. So the question was — this gentleman also works in government — about migration of data, and how you deal with not being able to bring down proprietary content in the migration data to test with. So, as Ted mentioned, we did a lot of the early development locally with Migrate classes that we set up.
We set them up initially to help us bootstrap the site with content, but then they were leveraged later on to help the client migrate their real data into the site. So we created representative content locally; for most of the time, at least on our personal environments, we didn't have their actual content. The only way we were able to connect to their servers — we couldn't just VPN in with their credentials, we had to use their hardware. So they provided us all with these wonderfully blazing fast — I'm being facetious — laptops that we had to use to do that, and it was okay to bring their content down onto those. They actually allowed us to install VMware or VirtualBox on them, so we could run local servers on those machines and not have to be in Windows. So we were able to do it with those, but other than that, yeah, it was migration of representative content instead of their actual content. And yes, we heavily leveraged the Migrate module with custom classes to create content from CSVs, XML — however they could export that data from their legacy systems (a quick sketch of one of those classes follows at the end).

All right, well, it looks like that's it. Again, if any of you have more questions, feel free to come up and talk to us if you were just microphone-shy, or contact us afterwards. Thank you.
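As a closing illustration, here is a sketch of what one of those Migrate (2.x, Drupal 7) classes for bootstrapping representative content from a CSV export might look like. The file path, column names, and field mappings are all hypothetical.

```php
<?php
/**
 * Imports representative news articles from a CSV export, so every
 * developer environment can be bootstrapped with the same content.
 */
class ExampleArticleMigration extends Migration {

  public function __construct($arguments) {
    parent::__construct($arguments);
    $this->description = t('Import representative news articles from CSV.');

    // Hypothetical CSV layout: legacy ID, title, body, facility.
    $columns = array(
      array('id', 'Legacy ID'),
      array('title', 'Title'),
      array('body', 'Body'),
      array('facility', 'Facility'),
    );
    $this->source = new MigrateSourceCSV(
      drupal_get_path('module', 'example_migrate') . '/data/articles.csv',
      $columns,
      array('header_rows' => 1)
    );
    $this->destination = new MigrateDestinationNode('article');

    // Track legacy IDs against node IDs so the import is re-runnable.
    $this->map = new MigrateSQLMap($this->machineName,
      array('id' => array('type' => 'varchar', 'length' => 64, 'not null' => TRUE)),
      MigrateDestinationNode::getKeySchema()
    );

    $this->addFieldMapping('title', 'title');
    $this->addFieldMapping('body', 'body');
    $this->addFieldMapping('field_facility', 'facility');
  }
}
```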