This is a presentation entitled Delivering on Specifications. It should probably be closer to Delivering on Requirements, because it actually took us some effort to get to specifications. So, brief introductions: I work with a nice group of folks called Consensus. We're mostly Drupal 8 developers, programmers, and sysadmins who specialize in Aegir, for those of you who aren't familiar with it. Aegir is a Drupal hosting system that's been around for about ten years, so it's a nice, stable system. We're going to look at it very briefly; if you have any questions about it afterwards, feel free to track me down. We focus on social enterprises, nonprofits, and the public sector, so we work a lot with various groups, Health Canada being one of our big ones now, and we've worked with different international NGOs and so forth. Actually, a lot of these relationships predate Consensus; a bunch of us merged our individual consultancies to form the company, so some of these do predate the company itself. That said, what I want to look at today is our general agile approach. I'm going to run through that fairly quickly, because most of it is probably familiar, and then get into a particularly demanding project and look at it in a bit more depth, to show the challenges we were facing: why we needed to prove that we had delivered on the specifications, and how we needed to do it in a particular way that satisfied the requirements in this case. We'll look at the solutions and some of the tools, and I want to leave enough time at the end that we can circle back and touch on any of the other topics if you're interested in seeing more. So, agile has been around for a long time. If we just look at the happy path that we have here, we're going to see stuff that should be pretty familiar: a focus on business value.
That's actually one of the things that was challenging in this particular project. Our development process is really intended both to build up capacity within the organization we're delivering the solution to and to make sure it's something they'll be able to maintain in the long run. A lot of that involves training, coaching, and documentation sufficient to allow them to take ownership of it and manage it from that point forward. Generally, we want to analyze the requirements and come up with something along the lines of user stories. This should be familiar: as a role, I want to do a task in order to accomplish something. What this does is give you the context you need to understand what you're trying to do when you're implementing a feature. If people just describe a feature as "I need X field," it isn't necessarily clear what field type it ought to be, or how to write the in-line documentation so that it's informative to the audience that's actually going to be using it. So this format is really helpful when we look at requirements, and we're going to come back to it a little later on. We then take those requirements as defined, and you'll see at the top of this that we have: in order to realize the named business value, as an actor in the system, I want to gain some beneficial outcome. That's the general gist of it. This is using Behat, which implements a domain-specific language generally referred to as Gherkin, and it allows you to describe in human language the behavior the system is meant to exhibit. It's a form of automated testing that is very close to the end user, and it's really helpful when you're working with end users because it's something they can comprehend. There's no code involved; it's not talking about classes and methods and assertions or anything of that nature. It really is in human language.
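As a rough illustration, a scenario in this style might look like the following. This is an invented example, not one of the project's actual specs, and the exact step phrasings depend on which step definitions (e.g. from the common Mink extension for Behat) are installed in a given suite:

```gherkin
Feature: Log in to the community
  In order to participate in discussions
  As a registered member
  I want to log in from the front page

  Scenario: Successful login
    Given I am on the homepage
    When I fill in "Username" with "jsmith"
    And I fill in "Password" with "secret"
    And I press "Log in"
    Then I should see "My account"
```

The point is that a project manager or end user can read and confirm this directly, while the same file drives automated browser tests.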
It can be translated fairly easily, and it can be extended to be more specific to the domain you're working in. If you do a lot of JavaScript-heavy work, for example, you can create steps that target specific JavaScript behaviors. It can also be extended fairly easily with things like XPath queries. So if you have a button that you want somebody to press, but in your framework it doesn't have a label or a title or the other attributes natively supported by the extensions, you can target things more specifically. Now, you generally don't want to do that, but it does give you the flexibility to work with whatever you're handed, especially with legacy code bases where you don't necessarily control all of the components in the workflow you're trying to test. So this is what we try to target in terms of the specifications: we sit down and work through what they actually want the system to exhibit in terms of behavior. We also like this approach because if we want to change the implementation behind it, say moving from Drupal 7 to Drupal 8, we can wrap this kind of testing around the behavior of the existing system, and that simplifies rebuilding it in Drupal 8. And then if we alter the behavior in Drupal 8, it's in human language that we can put in front of a project manager, business analyst, whoever the decision maker is, and say: this is how the process has changed, and they'll understand it. Now, it usually goes into a lot of detail at this level, but it's worthwhile to do so. So, agile: this should be familiar to everybody, right? From those specifications we end up with a backlog, which is just a long list of all the features we want to implement. Then we start taking chunks of those, the highest-priority ones, and work on them for one to three weeks, generally, and we do a daily stand-up to keep on top of what we're doing.
Keep everybody aware. We had a couple of people working on this project, myself, one of my colleagues, and some internal resources, and we would regularly touch base: what have I done since we last talked, what am I going to do between now and the next time we talk, and what challenges am I facing, what am I blocked on, and who can help me out with that? The output is ideally a functional release: within the one-to-three-week time frame you come out with some functionality that can actually be used. Then you just start that process over, until you've run out of things that are high enough value for the client to continue the work, to pay for, essentially. The way you generally operate within a given iteration is that you have some number of tickets or tasks you want to accomplish; you set that at the beginning and keep it fixed throughout the time frame, so that your priorities aren't shifting within the iteration and you're not doing a lot of context switching, or putting something in cold storage and then having to come back to it three weeks later to figure out what your thought process was. One of the ways we track that is with burn-down charts: we've got a date three weeks hence, and we've got 15 things we need to do, so we should be accomplishing five of those per week on average. That guideline is the ideal; you're never actually going to achieve it, and it's very common to see that hump over top of the line, because a lot of the time, while you're working on a feature, you're not able to close off the ticket, but then a bunch of things come together and there's a fairly steep curve towards the end as things get finished up. Prioritizing is something we just covered briefly: we should be working on the highest-priority items that we can finish within a given sprint.
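The ideal line on a burn-down chart described above is just linear interpolation from the starting ticket count down to zero. A toy sketch, using the numbers from the example (15 tickets over a three-week sprint of 15 working days):

```shell
# Ideal burn-down: how many tickets should remain at the end of each day.
total=15
days=15
for d in $(seq 0 "$days"); do
  remaining=$(( total - total * d / days ))
  echo "day $d: $remaining tickets remaining"
done
```

Plotting actual remaining tickets against this line is what produces the characteristic "hump" when work in progress can't be closed until several pieces land together.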
A lot of times people have a hard time prioritizing. Everything seems important, everything seems urgent, and it's hard to differentiate. This is a useful tool, because priority is not something that exists on its own; it's not standalone, it's only ever relative between different items. It's called the Eisenhower Matrix, after Dwight D. Eisenhower, the former US president. You can put it up on, say, a whiteboard with sticky notes: you write up the various features you're looking at and position them relative to one another, and if you space them out after you've placed them, you'll see where things stand. The things that are both important and urgent are obviously the ones that should float to the top, and then you can go through and put them in generally that order, because a backlog tends to be one-dimensional, right? It just runs from high priority to low priority; there's not any more flexibility once it's in that order. So this is a useful tool for that. And then this is just a general maxim of project management: you've got three variables and you can only ever control two of them. You can control any two of the three, but at the end of the day something's going to slip if one of them is going out of whack. You've got your cost, your schedule, and your scope. If you run out of money, you're going to have to either go slower, maybe wait until you've got more money, or reduce the scope, because you're just not going to be able to accomplish as much, et cetera. Alternatively, if you try to hold all three of them fixed, quality is what's going to slip.
And so that's another thing that's useful to make sure your clients, or the organization, understand, because there are no simple trade-offs for this kind of stuff. So, the project in particular that I want to touch on is ongoing, but the majority of it has been built and was successfully launched earlier in the year. It was a community of practice for the settlement sector. Now, a community of practice: probably everybody's familiar with Drupal.org, and that's actually a really good example of one. It's a bunch of people who share a skill set and a passion, who come together to share that knowledge and to look for knowledge in a particular place, and it has to do with practical methods for accomplishing what you're trying to do. In this case we were doing this for the settlement sector. The settlement sector, for those of you who aren't familiar with it, is basically about 500 agencies spread across the country that help settle newcomers to Canada, immigrants and refugees and so forth: helping them get their kids registered for school, or letting people from warm countries know that, well, you have to dress properly in the winter because you'll die, right? This is foreign to a lot of people, and there are these agencies, generally nonprofits, spread out all over the place to do this. They're independent NGOs that are largely financed by IRCC, which is the government department behind the program; they have some reporting obligations, but they're really independently managed, so there's not a lot of hierarchy there.
They're spread out, and there are a lot of people in their late 50s and early 60s looking to retire. The idea behind the settlement sector CoP here was for them to be able to share their knowledge and wisdom with the new generations coming in, so that they wouldn't lose this knowledge and experience base as people started to retire: the baby boomer generation that had built it up through the 70s and 80s and into the 90s. So this is what they thought they needed in terms of services, and when we were looking at the RFP for this, we were like, yeah, we can do all of those things, that's not a problem, right? We have the skill set, we've been doing this stuff for a long time, we're all fairly senior; we can pick up and do all of these things. However, there were some obvious challenges, and these should not be unfamiliar to anybody who's ever worked on this kind of thing from either side. We had a short, firm timeframe. The funding, and I'll go into this a little more shortly, was coming from IRCC, and anything that's government funded has this constraint that a certain amount has to be paid out by the end of March, the fiscal year end for the Government of Canada. That was a drop-dead date: if we couldn't show something by then, they weren't going to get funding for the next year, and the whole program would fall apart. So there wasn't anything we could do to alter the timeframe. Unfortunately the requirements, and we're going to get into this a little more too, were very vague. I'll explain why they were vague, but that was just a reality we had to face. They weren't really in the requirements format we were talking about, or anything that was even all that useful in a lot of ways. And the other thing was, because this was being funded externally, there was a fixed budget. We couldn't go back to the trough and say, can we get more for this work?
At least not within the budget year, right? And that was part of the negotiation we ended up talking through with them: how to straddle the budget years and figure out ways to shift things between line items and so forth. So we figured: short timeframe, no problem, we're all experienced, we're used to working under pressure, we can handle that. Vague requirements: well, we'll just work it out as we go along; we've got some plans, we've done this kind of thing before, we've worked with people on getting more specific about requirements so that it's easier for us to understand what they actually want us to build. The fixed budget: well, we'd been talking with them about it, we figured we could work around it; we understood what the budget was and what we could accomplish within it. So we decided to go forward. Bear in mind that this was our first project as a company; we were about one month old. We were like, you know what, we've got some big things down the road that we know are coming, but it could be six months, so let's pick up this project and run with it and see what we can do. So we went forward with it despite all of these warning signs being in place. One of the ways we tried to deal with the timeframe was to look for a way to accelerate what we were doing, to get to a point where we could start iterating faster. One of the things we looked at was Drupal distributions, to get a set of functionality in place pretty much right away, and then be able to build on that and have something we could put in front of clients and start getting feedback on. That'll come back as a theme a little bit later.
Here's an example of some of the specs; we had about eight pages like this. Just to read off a couple: "The CoP will provide a platform for the settlement sector." That's not particularly helpful, right? It's a platform, okay. We've got some stuff about a platform for peer connection and communication: okay, so it's going to need some kind of social functionality, messaging; we didn't really know exactly what they wanted, and they couldn't elaborate on it. It'll be a platform for network learning, and a platform for learning circles; we weren't really sure what that meant, what the difference was between these things, or how to implement something that did this. So when we started looking through this, we were like, okay, this is going to be a bit of a problem, but we think we have a solution: we'll do a discovery phase. This is something we've done in the past. We sit down with clients and we work through getting more specific; we try to understand their particular domain, and then we put it into the requirements format that we use, and then they can say, yes, that's what we want the system to do. That gives us clear guidance about what we need to build. Except they didn't think they could do that. The reason was that they felt the requirements they had, which they had spent probably 18 months building, should be sufficient, and because they had spent 18 months of a three-year project getting to that point, they didn't want to go back to square one. They said, no, you just have to start building it right away, because we have to deliver something in six months. In fact, they had to deliver something three months before that: three months from when we were talking with them, they needed to deliver this launch phase. If I go back here, they've got this beta, pilot, MVP, and then this launch. Now, initially what they wanted to do was build the pilot in WordPress and then re-implement it: build a pilot in
WordPress in three months, and then re-implement that in Drupal three months later. We convinced them not to do that, but we weren't able to get them to actually go through the discovery phase with us. So, considering we had a fixed budget, really all we could do was max it out. We were very open with them, and they were open with us, about what they had available and what their constraints were. Bear in mind that the budget they were getting also involved hiring people to animate the community, to translate content; there was a bunch of things that went into this, and only a relatively small sliver, in my opinion far too small a sliver considering it's a technology platform, was going to the development phase. We did have to provide a fixed bid, because they needed to have some predictability; it couldn't be an open-ended budget, they didn't have the money for that. So we were facing a situation where we had a fixed bid and vague specs. The solution we came up with, which was sub-optimal and risky for us, was that in our proposal we actually built up a complete mock-up of the system. Now, this was just screenshots and such; I'll show an example of it. This document ended up being about 80 pages long, making up about two-thirds of our proposal. I started out by basically taking all of the requirements as they had them and trying to figure out what we could do to make that happen. One of the things we'll see in a little bit as well is that we said, okay, they need a community of practice; they don't really know what that means, and we don't have any particular experience building a community of practice. So we went with the idea of: let's look at an existing community of practice and essentially just clone it. By cloning it, we didn't have to think through the proper ways of doing this; we could just say, this is best practice, let's implement that, and then let's look
afterwards at where the gaps were between what this could do and what might be specific to the settlement side. Based on that, we were able to mock up all the different components and how they would fit together, and in the proposal we also referenced the codes for the different components as we went through, to illustrate that this behavior ought to satisfy that requirement. We basically went through all the requirements, and highlighted the ones that just didn't have a technical aspect to them. Anyway, the requirements: coming back to that, part of the reason they were so vague is that they were written by a committee, and we'll look at that in a minute as well. This was a large committee made up of representatives from these agencies in each province. I don't know if you've ever tried to work with a large committee to come to some conclusion on a blue-skies initiative, where the question is "what are you looking for?"; that's a really hard decision to make as a committee. So that's why they had all of these things; there was a lot of overlap and a lot of vagueness, because they didn't really know what it looked like. And because this was coming from government, there was a third-party evaluator who would come in, look at the requirements they had provided, and evaluate whether the system was delivering them. This was a third-party person who was part of that committee, but not somebody who was going to be engaged with us on a day-to-day basis, trying to elucidate what these requirements were. To make matters worse, there had been three previous settlement sector CoP initiatives tried over the last 20 years, all of which had failed. And I say three-plus, because one of them was like a wiki, and there was an attempt to do something social around it, but it didn't work. So there was a lot of trepidation on the part of the sector participants and stakeholders, because this thing had been tried before and had not
succeeded. That's why so much of the project budget was dedicated to people animating the community and doing translation and content, as opposed to just looking at it from a technical standpoint. So, the design-by-committee problem is one where, if you don't give people something specific to comment on, conversations can go all over the place. What we wanted to do was provide a quick prototype that we could put in front of them and say: how does this feel to you, as far as the community aspects, the communications aspects? Do we need to focus on one-to-one messaging, or is it more conversations around topics, that sort of thing? So that was one of the approaches we took. The third-party evaluation: we didn't know what that was going to look like initially, so we figured, all right, we'll tackle that when we get to it, and we'll see later on how we did that. We actually kind of blew the guy away; he was really happy with what we provided. So this is why things ended up the way they were. Now, it's reasonable, in my opinion, that they ended up that way, foreseeable even, but it's just unfortunate that it's significantly sub-optimal. The federal government, IRCC, was the agency funding this, but they knew it couldn't be a top-down thing; they were not going to be in a position to dictate to 500 independent agencies, "you must use this new system we've put in place." So what they did was form this national advisory council, which was built up as sort of a grassroots thing: in each province the leading organizations would send representatives, and those were the folks who came up with those specifications, right? That was that big council. They understood that they wouldn't be able to be regularly active in the project; these were all executive directors of these organizations, or deputy executive directors in some cases, and so they
weren't going to have day-to-day engagement in the project; that was just not something they could provide. So they formed a project management team, with a couple of designated people from some of those leading organizations who would participate more regularly and try to funnel information back and forth. And then there was the operations team: the people who were providing content, doing translation, or working with us on the design aspects. In fact, we have at least one member here: Miyuki, one of the themers who came onto the project. So if we look at who we had access to: there was no opportunity for us ever to speak to IRCC; that was just off limits. In fact, almost nobody was ever speaking to them directly; they had a representative at the national advisory council. We didn't really have access to the council either, certainly not directly. We had access to a couple of its members, mostly ones on the project management team, plus that external evaluator, but we only had access to him once, which was unfortunate. The project management team themselves were incredibly busy just working the politics of all of this. It took about six months, for example, to come up with a logo design, and the logo design was central to a lot of the theming, so there were extended delays on some of these things, and the project management team had their hands full just trying to wrangle that. So essentially we ended up with the operations team as the people we could work with on a regular, day-to-day basis, and unfortunately they weren't empowered to make changes to the bigger-picture policy of how this thing was being put together. This is not uncommon. Actually, this was our first project with this particular structure, where federal funding was going through a non-profit that was then contracting out to us in this
case. But having spoken to other people, it seems this is actually a very common way these things are structured, so the challenges we faced are not uncommon. They were, however, new to us; it was our first time this far down the road of trying to wrangle so many things, to herd a bunch of cats. So here are some of the tools we used, and I'm just going to go through how they fit into this context. As I mentioned, most of these are probably familiar to those of you who have been working in the Drupal community for a while. So, Open Social is a Drupal distribution with social features. We didn't want to have to figure out how to implement something like following a user or following a piece of content; Open Social provided all of that out of the box, so we were able to start from there. There's also the concept of groups, private and public groups, things of that nature; that was stuff we figured we would need in implementing what we were looking for. This gave us something we could spin up within a matter of days and start putting in front of people, saying: look, this is the baseline, and we're going to iterate on it. That gave them a significant sense that this wasn't going to be just a crash-and-burn failure, as some of the previous settlement projects had been.
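Behat specs like the ones described earlier can be run automatically on every push. A minimal GitLab CI job for a Behat suite might look something like this; the job name, container image, and paths here are assumptions for illustration, not the project's actual pipeline:

```yaml
# .gitlab-ci.yml (hypothetical sketch): run the Behat suite on every push
stages:
  - test

behat:
  stage: test
  image: php:8.1-cli           # assumed container image
  script:
    - composer install --no-interaction --prefer-dist
    - vendor/bin/behat --format=progress --strict
```

With `--strict`, undefined or pending steps fail the pipeline, so a spec can't silently rot as the codebase changes.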
At the very least, they had something that was a social platform right out of the gate. We were also using Behat. Behat is the behavior-driven testing framework; that was all the part about "when I am on the front page and I click login, I should see a username field and I should see a password field." It's that sort of framework, and this is what it works out to be. We built it into our CI system, so every time we pushed a commit up to our GitLab instance, it would run all these tests, and the suite just grew as we added more and more functionality. Obviously, in big, complex systems like this, a change in one place can have unintended side effects elsewhere; this allowed us to confirm that we weren't breaking anything as we moved along. We needed to move at a fairly quick pace, and going back and manually retesting things was just not in the cards, so this was a great way for us to handle that. Now, we had worked with all of these tools previously, so we were able to just copy and paste a bunch of things in. It's sometimes a bit harder to get started if you're not familiar with it, but it's worthwhile spending the time. We used Lando as a local development toolkit; this was the first time we'd used Lando. We're sort of Aegir experts, so we do a lot of stuff with hosting environments in Aegir, and we run that largely in local VMs; it gives us the ability to do things like clone a site and then mess around with it. In this case we wanted something a little more flexible for the use cases we were going to be looking at, and for automating some of the build stages, and we also wanted to be able to put this in the hands of that operations team so that they could build and run it themselves. We didn't feel we needed to give them a course in how to operate Aegir, which is really a production-grade hosting system, in order for them
to do some theme development and so forth; so that's where Lando came into play. We knew Mike Pirog and some of the folks who had developed it, and its previous generations, over the years, so we had fair confidence it would help us do what we wanted to do, and it did. Basically, it just builds a bunch of Docker containers, one for the database, one for the web app, and we would run it and use it for that. It also has some nice features, like a router that will render a domain for you, so you don't have to go and hack your /etc/hosts file and things like that. We did use Aegir for the hosting system, for everything from staging through production. We actually had testing as well, because a lot of users were not going to be running a local version, so for every release we would build a test site; they would go in and validate that features had been implemented properly or bugs had been fixed, and then we would merge that into the mainline branch. We also had an ongoing, rolling beta site: the beta site was the current release, and the testing site was the upcoming release. That was how we ran it, and Aegir provided everything we needed for that. This is what Aegir looks like. It gives you a lot of flexibility around managing your sites, and it's got a lot of best practices built in; it does things like alert us when security updates are required on a site. Since we did have a production end point to this workflow, this provided a ton of flexibility, and there's a lot of git-based tooling that's been built into it over the past few years, Composer support and so forth, that allows us to very quickly and easily build up new platforms, which is just Aegir-speak for a codebase, and then
install multiple sites on it, whether they're our testing or beta sites, and then upgrade our sites between platforms and so forth. Obviously we used git; I'm not going to go into git very much. It was important because we were using feature branches: since there were several of us working on disparate parts of the system at once, we didn't want to be stepping on each other's toes. Within the system, Open Social is built on Features, capital-F Features, and we used that to a large extent to export our config. That is not the best practice anymore, and we've run into some of its challenges, especially around things like config translation and weird edge cases. There are tools now for that part of it; I think Config Actions is one of the new hotness on that front. Anyway, git was used extensively, with feature branches and merges and all that. We used GitLab, mostly for issue tracking and kanban boards. We did some merge requests, but since it was just the two of us, we could coordinate fairly easily, and there wasn't much need to go into a lot of depth there. That said, the kanban boards: this is what it actually looked like for us. In that initial chart you had a nice straight line going down, and your actual progress kind of tracked it a little. What we saw in reviewing this was that we started out with, in this case, four issues; those four issues were those vague-requirement kinds of things, and there were actually about 18 different things inside them. So what happened was, initially we started with those and then broke them out into separate tickets, and then we worked those. As we went along we got a little better; we still had things we needed to break out, but it started to look a little more like a coherent line. And then this happened. Now, what happened here
is that we implemented a gating system where, instead of just closing a ticket, we would put it into a status indicating it was ready for review, and tickets just got stuck there. This was just a process thing: the project management team wasn't able to proactively go through that queue on a regular basis, so we were completing tasks and they were just piling up and not being closed. Eventually we said, well, this is kind of ridiculous; we need to sit down with you for a day and just go through all of these things. That's when it just dropped: we closed everything, because we were able to demonstrate that it all worked. Finally, towards the latter part of the initial phase of the build, it started to look like a regular burndown chart (rather than a kanban board), because by that point we understood how many tickets we had and the level of granularity in them, and we were able to just work through it and get it all done. From a documentation standpoint, I am a huge fan of Hugo, and I would highly recommend that you take a look at it if you haven't already. It's a static site generator, and what it allows us to do is put a docs directory into the codebase, right next to the actual code that's running, and keep all of our operations docs in there: how do you deploy a new codebase, how do you upgrade the site, how do you revert Features, how do you theme things? All those kinds of things that we wanted to be able to hand off to the client about how to operate this big, complex system, we put in the code itself. So when we were looking at closing a ticket that implemented some new feature, if documentation was required, we would check whether the documentation had been updated before we would merge that branch. Hugo was great for that, and this is where I'm going to start looking at things in a little more depth with a demo, because this is also how we managed those requirements and, in turn, how we were able to show that
the requirements were satisfied. Hugo was a big part of that. We took the requirements, which were in a Word doc, and essentially moved them into Hugo. To show you how Hugo content is written: it's actually just Markdown. Let me show you the raw version of this; bear with me, maybe I can make it a little bigger. OK, so it's Markdown, and in this case it's building out a table, and then we have these custom shortcodes that basically translate a requirement code into, for example, a link to something else. I'll go back to this to illustrate how that worked. One of the first things we did was make each of these codes an anchor. Fairly straightforward, but it allowed us to go back through the GitLab tickets, and every time one of these codes appeared, we would point it to the anchor for that particular code. That allowed us to cross-reference the issues we were working on with the requirements document. We then did the reverse: we put in a link to the issues tagged with that code, and we went through all the issues we had and labeled them with the particular requirement each one addressed. Now, we didn't actually use GitLab labels for this, because that would have gotten out of hand. What we did was largely copy-paste from that 80-page mock-up document, break it down, and then go in and put in those links. So the fact that there's a link to this anchor is what makes it feasible to link back to the GitLab issues that reference that particular requirement. This was the first thing we did so that we could easily go back and forth between the features we were working on and building, and how those fit into the broader picture of the requirements overall.
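As a sketch of what one of those table rows might have looked like in the Markdown source (the requirement code, the wording, and the shortcode name are all illustrative; Hugo shortcodes use the `{{</* ... */>}}` syntax):

```markdown
| Code | Requirement | Issues |
|------|-------------|--------|
| <a id="BR031"></a>BR031 | As a member, I can search groups | {{< issues "BR031" >}} |
```

The `<a id="BR031"></a>` anchor is what the GitLab issues link back to, and the custom shortcode renders the links out to the issues labeled with that code.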
Once we were able to sit down with the evaluator, we started looking through this and asking: how can we connect this with what we have in the system? We have the requirements, they're linked to the issues, and the issues are getting closed, but that doesn't yet illustrate that the behavior is in the system itself, and we didn't particularly want to spend a week sitting down with them walking through each of these things. So what we started to do was tag our tests with the requirements. You'll see in the upper right hand corner (this is maybe a little small; let me zoom in) that this particular feature actually satisfies a bunch of the requirements we had. By tagging it this way, we were able to show that this feature covers this, this, and this, as far as the requirements go, and it all links back through that mock-up document. What we still needed at that point was to answer: is this particular requirement, this particular specification, implemented yet? That's where this came in. We put together some scripts that would essentially run Behat against each one of these tags. With Behat, when you have tags like the ones we were just looking at, you can say "just run the scenarios tagged BR031", and any tests carrying that tag, and only those, will get run. So we automated the process of scanning through all of these requirement tags and running the entire test suite against each one individually, and for each requirement we then had all of the tests that satisfied it: all the behavior that we figured actually satisfied that particular requirement. Now, a problem with big requirements documents is that some requirements are very big and some are very small.
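A minimal sketch of that kind of per-requirement runner (the requirement IDs, the output path, and the report layout are illustrative, not the actual script): it loops over the tags, runs Behat once per tag, and collects everything into a single Markdown page. If `behat` is not on the PATH, it just records the command it would have run.

```shell
#!/bin/sh
# Illustrative per-requirement test runner; IDs and paths are made up.
set -e
REQUIREMENTS="BR031 BR032 BR045"   # in reality there were about 450 of these
OUT=requirements-report.md

echo "# Requirements coverage" > "$OUT"
for req in $REQUIREMENTS; do
  echo "" >> "$OUT"
  echo "## $req" >> "$OUT"
  if command -v behat >/dev/null 2>&1; then
    # --tags runs only the scenarios tagged with this requirement code
    behat --tags="$req" >> "$OUT" 2>&1 || echo "NO PASSING TESTS: $req" >> "$OUT"
  else
    echo "(would run: behat --tags=$req)" >> "$OUT"
  fi
done
cat "$OUT"
```

Because Behat has to bootstrap for every tag, a loop like this is slow by construction, which is why running it once a day and committing the generated page is a reasonable trade-off.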
In some cases we had a lot of behavior hanging off a single line: they had one requirement for search, but a lot of expectations about what was going to come out of search, so we ended up with some really extensive tests around that. But essentially, this is what we would do: we would go through and build out this report. Initially I was thinking we would do this as part of our build, but we backed off of that, because it took about two and a half hours every time we ran it; it had to re-bootstrap Behat for every one of about 450 requirements. So it became something I would just run at the end of the day: generate the page, commit that page, and go from there. What this also allowed us to do was go through and see the requirements that didn't yet have any tests associated with them. As we were closing in on the production date, having done all the high-priority things, we were able to go through and say: oh, you know, we haven't covered this part about privacy laws, so I think we need a page with the privacy policy. (That's just an example; we actually had that from long before.) But it allowed us to identify where there were gaps, and a lot of the gaps were just that we hadn't tagged a particular test; that is, we had behavior that satisfied the requirement, but we hadn't made the link between them. Then, once we had filtered through all of those, we realized there was a bunch of requirements left that just weren't functional, that we couldn't test, such as "the system will be secure". How do you test that it's going to be secure? You can do a ton of things like penetration testing, but that was not part of the scope of what we were talking about here. So what we did was create a platform benefits page that,
basically again copy-pasting from the original proposal, addressed these things: we just described how, between Drupal and Aegir and our update process, and the flexibility and speed with which we could work, we would satisfy those requirements. So there was a narrative component for the cases where there wasn't a technical test we could do, and it turns out there wasn't all that much left over in that category. Then there were a few things (I don't have an example right here) that were just not relevant to us at all: things like how some of the agencies who were already running their own CoPs could move their content from their CoP into this one. That was a policy-level thing; it wasn't something we could do much with, so some of those were just flagged as out of scope, basically. And that's how Hugo fit into this. Now, the way we automated the generation of these reports, as well as a lot of the development processes, was with GNU Make. If you're not familiar with GNU Make, it's generally used for compiling things like the Linux kernel: you use GCC and such to actually do the compiling, but Make says, OK, I need to compile these libraries before I compile this other thing, this executable or whatever. So it's really a dependency manager, which is nifty, and it doesn't have to be compiling anything. In this case a lot of it was used to build the pages we were generating, but it also meant that basically any time we had a process with more than a couple of lines, like "lando destroy" and "lando rebuild" when we want to just rebuild everything, we would create a Make target for it. Make is essentially Bash on steroids: it's Bash with dependencies. So this
is an example of the piece where we would build out that requirements page. The requirements were delivered to us in different pillars, just different broad categories into which the requirements fit, so we split those out into separate pages. We had a little Bash script we would call that would iterate through them. Let's see, I think it's in here... no, it's actually this: requirements.sh just calls lando behat with the tag, so it reaches into Lando, runs Behat inside of Lando with that tag, and iterates over that to generate all of those pages. Then this concatenates all of that into these sections and builds it into a single page with the entire report. What Make essentially allows is for us to just say "make reqs". On line 8 here, for example, this "reqs" (for requirements) basically says: in order to make reqs, I need to do umbrella, pillar-one, pillar-two, etc. That's just how you define dependencies, so instead of us asking "was the umbrella one already built?", Make handles that for us. If it gets part of the way through and then fails, you can just re-run it, and it will identify what's already been done properly and pick up where you left off. Those are things that are really hard to do in Bash if you're doing it manually, so Make was a great help here. I've been using it for a lot of other stuff, too: whenever I want to install Composer locally at a particular version, I just have "make composer". I use a lot of Ansible and DevOps stuff, so I do "make ansible" to get the particular version of Ansible I want for that project, so that anybody working with me is also running that version. GNU Make is a great tool for that, and I put a bunch of that stuff into a little project I call Drumkit.
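The kind of Makefile being described here, with a reqs target depending on the pillar pages, looks roughly like this (target and script names are illustrative, and recipe lines must start with a tab):

```make
# Illustrative Makefile fragment; target and script names are made up.
# "make reqs" rebuilds each pillar page as needed, then assembles the report.
reqs: umbrella pillar-one pillar-two
	cat build/umbrella.md build/pillar-one.md build/pillar-two.md > build/report.md

umbrella: build
	./requirements.sh umbrella > build/umbrella.md

pillar-one: build
	./requirements.sh pillar-one > build/pillar-one.md

pillar-two: build
	./requirements.sh pillar-two > build/pillar-two.md

build:
	mkdir -p build
```

In practice you would name targets after the files they produce (e.g. build/pillar-one.md), because Make only re-runs a recipe when its target is missing or out of date; that is what lets a failed two-and-a-half-hour run pick up where it left off instead of starting over.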
I can then pull Drumkit into whatever project I'm working on as a submodule, and I just get all of these tools in place. Instead of having a bunch of snippets in Gists or whatever, this is how I share these things, and now that we have a team building out this kind of capability, we put everything into it, share it across our projects, and iterate on it. It is available on GitLab; we just transitioned from GitHub Pages to GitLab, and I haven't put together the GitLab Pages yet, but that should be up soon, at drumk.it. So those are basically the tools that we used to get through all of this, and I think that's the end of the presentation. Here's how you can contact us. Are there any questions or comments, things you'd like to see a little more about? I know I covered a lot of ground.

Question: Did you find you had to write a lot of step definitions in Behat? I've worked with it a little and was frustrated by how much coding you actually need to do behind the scenes to get things ready.

No, and the reason is that there's a Drupal extension for Behat that is fairly well tuned for Drupal 8. For example, it will look for the title field or the label field, so the vast majority of it just works out of the box. It depends on Mink, which is the web testing framework that provides steps like "When I am on X page", right?
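For the curious, wiring the Drupal Extension and Mink into Behat that way is mostly configuration rather than code. A minimal behat.yml might look roughly like this (the base_url and the driver choice are placeholders; check the Drupal Extension documentation for your own setup):

```yaml
# Illustrative behat.yml; base_url and driver are placeholders.
default:
  suites:
    default:
      contexts:
        - Drupal\DrupalExtension\Context\DrupalContext
        - Drupal\DrupalExtension\Context\MinkContext
  extensions:
    Behat\MinkExtension:
      base_url: http://example.localhost
      goutte: ~
    Drupal\DrupalExtension:
      blackbox: ~
```

With something like this in place, steps such as "When I am on the homepage" come from MinkContext, with no custom PHP needed.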
Question: Is Drupal 8 just a bigger version of Drupal 7?

Drupal 8 is a significant improvement over 7, in any case, just by being consistent about things like that. With Drupal 7 I had to do a lot of XPath stuff to target a particular class, and that becomes very brittle: if you change the implementation, now your tests are breaking because you've changed the class structure slightly. Drupal 8 itself is much more consistent about how it does that; I think Twig has a lot to do with it, as does how the theme layer has progressed. That said, we did write some custom steps; I can take a look, since anything we did would be in here. Bear with me for a second. Under features, it would be under bootstrap: we did do some stuff with an email context, because we wanted to make sure that if you subscribed to something and somebody updated a comment, you would get your email notification. So we have some things like that.

Question: Did you actually test sending and receiving email?

Yes. For that we used (I cannot, for some reason, resize this; sorry, we can look at it a little more offline if you want) MailHog. It will just take in any mail; I think it listens on whatever the port is, I don't remember, and it just sucks everything in and provides you a little Gmail-like interface. So the site sends an email, and you can see it in another tab. That's how we did it. Then we needed to write some Behat to check, for example, that the title of the email was correct. There's a MailHog extension that we used that provided some of those tools, and we added a couple of things on top, like making sure the subject matched a certain pattern, that the content contained certain things, and that the links worked. So we extended it a little bit to make sure that we were covered.
Any time we ran across a bug, essentially, we would try to write a test for it, see that it failed, then fix it, see that it passed, and go from there. Any other questions? I was worried about running long, so I meant to go through that fast and leave more time for questions, but I guess it worked out. So there you go. Thank you very much, and have a great rest of DrupalCamp.