My name is Pradipa Radhya. Thank you for coming. I realize it's New Orleans, and it's the afternoon of the conference, and you guys want to get going. But you should have come, if only to see the content vampire. No takers. So it's my first time in New Orleans, and every Uber driver I encountered, I asked, do you know any vampires? And they didn't. So I said, I guess I've got to be one. So here I am.

So today we're going to talk about a solution that we created for a very large B2B e-commerce company. This is solution two of two that we created for this company. And we did this, my company, Novus Loris, in collaboration with Srijan. Rahul Dewan, their CEO, is here, along with Ashish, who runs sales for them. Thank you so much for doing this with us. This is a fantastic solution, and I wanted to share with you a Drupal showcase of what is likely a very common problem for most e-commerce companies. I have a demo, which will probably take a little while, so I'm going to rush through the slides. I'm happy to take any questions afterwards.

So I won't even talk about the first two kinds of content management problems. If you've been in content management for a while, you definitely know that there are different kinds of content: vanilla pictures and text versus more complex content that has many different things as part of one bigger entity, if you will. Access and agility are things the CMS is supposed to provide for you. The real complexity that we solved for with this solution isn't applied only to e-commerce, but it is applicable there, and that's why we wanted to show it to you. We integrated the CMS with e-commerce concerns, if you will. And because of that, it threw up a whole bunch of issues around stakeholders and what they want to see for themselves. And we managed to actually solve those concerns.
And that's what I want to show you really quickly here. So again, I won't walk through the more vanilla bits around what content management should allow you to do, but if you look at the last two rows, you're not really in an enterprise CMS unless you have multi-site capabilities: you're using one CMS platform to input and manage content for multiple websites, brands, business units, whatever you have. And with that comes a high frequency of publishing, a need for agility, a need for making sure that the right people touch the right things, and that it's not a free-for-all Texas hold 'em kind of thing.

And then if you have that and you add to it the business of e-commerce, and this particular client does probably close to a billion dollars in e-commerce, both B2B and B2C, you have a whole bunch of other stakeholders and a whole bunch of other people who want to control what you're trying to do. And because of that e-commerce ability, there's the need for personalization: variables for everything from shipping to pricing, all personalized, need to be part of that content, part of the picture that you present on the website or the web experience. And this is the complexity that we really needed to solve for.

And apart from the technical challenges, here's the number one issue that we encountered, sadly. If any of you have integrated against an e-commerce site, you know that the front end isn't necessarily being served by the CMS. It's being served by the likes of a WebSphere or whatever. And when you go and talk about this to those technical folks, they're all very polite. They think they're Neo. They say hello and happy to meet you. But the subtext in what they're saying is: you touch my code and I'll not allow you to publish anything. And this guy actually came at me in the parking lot later on, but I managed to evade him. No takers on the jokes. Only one person here. This is just so sad.
Anyway, what he's really doing is operating out of fear. The likes of an IBM or some other large company came in and sold him and his boss several million dollars' worth of whatever the heck it was. And then they turned it over. And in about six months, he did something, he couldn't understand what, and they had to call IBM back in for another couple million dollars. And now he's very scared. He knows that if he touches something and it doesn't work, it's a huge dent for him and his boss. So he's just going to police everything. Every little bit of code that goes up on this billion-dollar website is up for diligence. And this was the problem. This contention between marketing, needing to publish in an agile manner to drive business, versus these folks, who just wanted to make sure that the website didn't crash, was a huge problem. And our first attempt to solve this problem did not quite take, because there were too many stakeholders, too many people who wanted to engineer the solution. But the second solution is fantastic, and that's what I want to share with you today.

So again, to very quickly go through the different kinds of stakeholders: you have the business folks, who want to be very agile, who want to publish new experiences and keep the content fresh. They also want to track what's going on with their content. At the other end, you have e-commerce and CMS developers, who are very worried about code and want to review every single change that comes out. And in between, you have everyone from designers, who want to control styling, to authors, who want to input content. And somewhere in the middle, when you throw personalization in, you have a bunch of variables bringing in shipping information, pricing information, customer data, all of which has to be part of that content page. And now you have an unholy mess.
If someone wants to create a new page, it goes through six months of design, and diligence, and QA, and whatnot, and you have a bottleneck. So how is a billion-dollar company expected to be agile with its website? This particular company, I'll tell you, takes 11 weeks to publish a single banner on their homepage. Finally, somebody thinks there's a good thing going on here, reactions from the audience. They have 56 reviewers for that single banner. Now granted, there's a supply chain behind this whole thing, and it's kind of interesting that way. But there's got to be a solution, and that's what we were trying to evolve.

When you really look at the stakeholders down at the page level, here's what it boils down to. Business wants to say: I want to be able to create these pages in whatever layout I want, quickly, and publish them, because I want to react in real time to opportunities in business, to whatever indications my customers and market are giving me. They also want to be able to track the performance of these pages in terms of revenue, not in terms of speed, not whether it's served fast enough on the web. The IT folks simply want to say: I control the variables, I control the code, and if you want to change anything, it has to go through me; otherwise, I cannot guarantee that your website will work or even perform. In the middle, you have authors and designers, who want to control styling code, who want to control layout, who want to be able to say: I'd like to write this sentence which says, dear Mr. So-and-so, and I want the customer's name there; this is a special price for you, and I want that special price surfaced on this web page. So there's a very bad contention over who gets the last say on this page. And this is the problem that we were trying to solve here. There's only one way to solve this problem. The solution is to clearly segregate the concerns of the various stakeholders and allow them to control only what concerns them, and nothing more.
So setup code and display code are different. Setup code brings variables and other data in, and potentially provides some sort of structure. Display code simply says: this is what I want to show here. So that was the distinction that we used. And by doing that, we separated these concerns. We modularized these concerns. And then we said: IT owns some kinds of modules, designers own other kinds of modules, and content authors play with those modules. They can position those modules. They can assemble those modules. But those modules are essentially pre-written, so they can't mess with them. This still allows the business user to do some amazing things, which I'm about to show you. If you want to edit content within these pre-structured modules, the business users can do that. If they want to put variables in there, they can do that as well. And they can move those variables around. I'll show you that in a second.

What was also super important: when an author does something like this, the author has to be able to preview what they've actually built. Otherwise, it's useless. If they're going to have to wait for three days after QA, once it lands on the e-commerce system, to see this thing and say, this is what it looks like, it's going to waste a lot of time. So they have to be able to preview on demand, with those dynamic variables on shipping and pricing in there. And that was important to provide as well. Lastly, once these modules are written, you should be able to reassemble new pages and layouts with them relatively quickly. Otherwise, you're going to have a frustrated bunch of people who say: I've got this much, but I can't have any more. So that's the solution that we created. And I'm going to skip over the demo for just a second to show you an architectural picture. Actually, I'm going to go to the demo first. So let's do the demo.
So I want to give a shout-out to two guys from Srijan. Rahul and Ashish's contributions were stellar, but two young fellows from Srijan, Umar Zafar and Ritesh Gurung, were responsible for this brilliance. This is their brainchild, and I'm going to try and explain it. Keep up if you like. It starts on the left with a developer on the CMS creating a bunch of components and layouts and things like that. Then he puts them in the hands of the business user, who goes in, picks up a layout, and puts components into regions or whatever. And then he says, I'd like to preview this, at which point the CMS takes that entire assembly and posts it out to the GNP, the general notification platform, which sends back an executed preview, including variables. So the business user can say, show me this sample data or that sample data within this template. And these folks can receive a preview: they can look at the actual page instead of saying, I'm waiting for three days, or I don't know what the data looks like. Once that's done, they can workflow it, send it out to QA, who will pick up all of that HTML and drop it into a Litmus-like structure where they can validate that code. The content, after all, needs to be visible on any device. So they'll validate that code and send it back and say, OK, this either passed validation or did not. And if at that point some component needs to be edited for content, the business user can do that. If it's got to be edited for code, the workflow allows you to take it back to the developer. All of them can invoke the same preview and see exactly what's going on. And the APIs connecting the GNP system and the structure that manages all of these components, all of that was courtesy of our partners, Srijan, who are fantastic. So if you've got a problem like this, you need two kinds of people.
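The preview round-trip described here can be sketched in a few lines. This is a minimal stand-in, not the actual GNP API: the placeholder names (`customer_name`, `special_price`, `shipping_days`) are illustrative, and in the real system a developer-owned setup step would supply live data.

```python
from string import Template

# Hypothetical display module: pre-written by a developer or designer,
# with variable placeholders the author can position but not alter.
banner = Template(
    "Dear $customer_name, here is a special price for you: "
    "$special_price, shipping in $shipping_days days."
)

def render_preview(template, sample_data):
    """Stand-in for the round-trip: the CMS sends the assembled module
    plus the author's chosen sample data and gets back an executed preview."""
    return template.substitute(sample_data)

preview = render_preview(banner, {
    "customer_name": "Mr. So-and-so",
    "special_price": "$120.00",
    "shipping_days": "3",
})
print(preview)
```

The author can swap in different sample data sets and immediately see the executed page, instead of waiting for the e-commerce system to render it days later.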
You need a fashionable guy like me to talk to your CMOs and keep them quiet, while the guys who do the brain work can actually evolve a solution like this. So remember that. And here's how you get in touch with any of us, if you like. Thank you so much. I'll take any questions. More vampires? There don't seem to be any questions. I'm hanging out. I know he's got to get going. So I'll hang out if you've got questions. Thank you.

Thanks, Pradeep. Hi, everybody. Let's see if we can do this live. All right, we've got a dongle. We're on our way to success. That's what she said. Matthew, Ben, you want to come up front? They're the people that actually can answer the questions. Why don't you just grab some chairs? So I'm Matt Westgate, CEO of Lullabot. We have Matthew Liviere and Ben Chavet, who are also at Lullabot. Ben was the principal engineer of Tugboat, and Matthew's been using it on a lot of client projects. So let's see if I can get this going. All right, chug a tug at Tugboat. So thanks, everybody. Thanks for coming. What I want to do is, I think there's maybe six slides, so hopefully it won't be too painful, and then we'll do the presentation live. And then if you have any questions, I want to open it up and kind of keep things light and fun.

How many people know what a pull request is? Most of us, most of us. OK, so the easy way to describe Tugboat is that it's an automatic pull request builder. So when you're in GitHub, and it also works with Bitbucket, what it does is build out a full working version of your website every time there's a pull request. Every time a bunch of code changes are made, it takes that code, integrates it into the new site, and builds a working version of it. And the idea behind that is that it creates visibility into the development process, so everybody can see what's going on rather than having to wait until the end of the sprint cycle.
The reason that we started this project in 2013 was because we saw two things happening. One is that stakeholders don't have local environments installed on their machines. And even if they did, it's because some other developer took pity on them, set it up once, and then it broke after the second sprint cycle. So what we saw was a lot of stakeholders that wanted to participate in the development process, that wanted to actually give feedback, but screenshots weren't enough, text descriptions weren't enough. They actually needed to see what the site looked like. The other problem we were seeing is, I don't know how long your sprint cycles are, but we usually do two or four week sprint cycles, and then we have the big show and tell. At the show and tell, the call is usually two or three hours long, and you go through the 10 tickets that were completed during that sprint, clicking around on the demo. You had to pay someone to actually set up the build, and it's just a painful process. In particular, what's painful about it is that it's the first time the stakeholders are seeing the work. And unfortunately, it's oftentimes the first time that developers are actually getting any real feedback on the project. So what happens is rework. You want to start the next sprint cycle, but the first thing you have to do is put the big ideas on hold, go back, tweak the tickets that you were working on, and then things get behind. So we thought we could change that by building out what the tickets actually do, so people could see the work as it happens and share the work as it happens, too. Imagine a developer opening a pull request, and then the rest of the team, the technical team and the non-technical team, getting a notification on their phone that's a link to the living, breathing version of that site. So that's what we wanted to do with Tugboat.
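The behavior described here, a build for every pull request and a shareable preview link, can be sketched as a tiny event dispatcher. The event names below follow GitHub's `pull_request` webhook actions; the returned action strings are illustrative, not Tugboat's real API.

```python
# Toy dispatcher: map incoming pull request webhook events to preview
# actions. Event shape is modeled on GitHub's "pull_request" payload.
def preview_action(event):
    action = event.get("action")
    if action == "opened":
        return "build"            # new PR: build a full working preview
    if action == "synchronize":   # new commits pushed to the PR
        return "rebuild"
    if action == "closed":
        return "teardown"         # PR merged or closed: remove the build
    return "ignore"

# A developer opens a pull request; the whole team gets a preview link.
print(preview_action({"action": "opened"}))  # -> build
```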
So we wanted to add visibility to the process and make it more collaborative, make development feel less isolating. I'm gonna give a demo. So these are some of the clients on Tugboat. We're geared towards medium to large teams, medium to large projects. All right, let's do a demo. Let's do it live. I don't have a movie prepared, so if this doesn't work, we're gonna do an interpretive dance of Tugboat. Did you bring the captain's hat?

All right, so this is the first thing you see. This is the Tugboat UI. Let me embiggen. There we go. Is that big enough? Larger? Let's try that. The great thing about this is that it's responsive, too. So most of the time I'm giving the demos on the phone. But the first thing you see when you log in is the repositories that are available to you. And if you click into the Tugboat demo one, this is just a small sample. In this case, these are GitHub tickets; it also works with Bitbucket. But these are the pull requests that are available. And by default, what Tugboat does when there's a pull request is go and do a build for you. So why don't we kick off a build here for the homepage redesign? We'll let that build process. The cool thing is, over here in the GitHub ticket, what you can see at the bottom is that Tugboat has grabbed it and said, hey, I'm about to do something with this. So it's actually letting everybody know that there's a build process happening. If we go ahead and click into this ticket, we can actually see a log of what's going on. So this is a real-time log of the build process happening, complete with awesome ASCII art. Thanks, Ben. For Tugboat, we use a lean container system. This is a Drupal site, so at the bottom you can see two containers there, the Apache container and the MySQL container. It looks like the MySQL container's already ready to go.
If there were other components, if there was Memcache, if there was Solr, things like that, you would see additional containers here. You know, what we try to do, we're not a self-service SaaS. What we wanna do is, when a client comes to us and wants to use Tugboat, we actually talk to them. We make sure that the containers that they want installed work for them. If they have their own Docker images, we'll bring their Docker images in. And then we actually work with them to help make sure that their build scripts work, and if they're integrating with Jira or other things, we talk about the workflow and make sure that they have a successful project. I talk too much, you missed most of the build. Let me see here. Let me close this.

So, a couple of the fun things that you can do on this: we don't want even the Tugboat experience to feel like a black box. So we have command line access right into your containers through the UI. And it's great. I mean, you can do tab completion, all of that sort of stuff. This is a Drupal site, but it can work with WordPress, other things. If it runs on Linux and it's web related, then we can host it. But since this is a Drupal site, we have Drush installed, and you can access all of the containers right through the command line, which is pretty cool.

There's another feature here that we rolled out. The cool thing is that this preview button actually takes you to the living, breathing website. But if you don't want to click that, or you just want to see what's going on, we actually put some visual regression analysis into the platform, too. So what you see on the left is the base preview, and oftentimes that's a pretty close mirror of production. And then you see what this build does. It's a homepage redesign, so we made it red. And then that image on the far right is an overlay of those two images, highlighting in pink all the visual changes.
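The pink overlay amounts to a per-pixel comparison of two screenshots. Here is a toy version on small RGB grids; the talk doesn't show Tugboat's actual implementation, so this only illustrates the idea.

```python
# Toy "visual regression" overlay: paint changed pixels pink.
# base/new are 2D grids of RGB tuples standing in for screenshots.
PINK = (255, 105, 180)

def overlay_diff(base, new):
    """Return the new image with every pixel that differs from the
    base replaced by the highlight color."""
    return [
        [PINK if b != n else n for b, n in zip(b_row, n_row)]
        for b_row, n_row in zip(base, new)
    ]

WHITE, RED = (255, 255, 255), (255, 0, 0)
base = [[WHITE, WHITE], [WHITE, WHITE]]
new = [[WHITE, RED], [WHITE, WHITE]]   # the "redesign" turned one pixel red
diff = overlay_diff(base, new)
print(diff[0][1])  # the changed pixel shows up as pink
```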
Right now we're doing the homepage, but there are configuration settings to do additional URLs on your site if you want to test that out, too. But the cool thing is, at the end of the day, it's just a URL, and it's your whole website: not the one in production, but the one that has the latest change. You can send that URL out to the entire team and get feedback, and since it's a website, it works on your phone, it works on your tablet, and it's easy to get a hold of the stakeholders. That's the demo. It wasn't too bad, huh? Does anybody have any questions about that? Yeah.

So the question is, what does it do with a Solr instance? And probably related to that, how do you get things in and out of Tugboat, like the databases, the files, things like that? Ben, do you want to take that question? Thanks, Greg. So I'll start over. Okay, so that would be part of the initial setup. And it's a lot like setting up your local instance on your laptop. It would just be on the Tugboat base preview. And from there, you can pull in your database, your files, copy your Solr schema, and that sort of thing.

So to add to what Ben is saying, we have some of the UI for it. The database we usually prefer to pull in over an SSH tunnel, and then from here, you can choose the frequency of updates. We have one client, for instance, that has, what is it, a 50 gigabyte database. And so the initial import takes a while, takes a number of hours. But the cool thing is, the way that Ben has architected the platform, to generate those previews, to generate those builds, it's still only minutes, because it's doing a diff of the database. So I saw another question in the back. If you're familiar with how Probo CI works, how can you contrast how this works versus how that works? Sure, yeah, I can take it. The question is, how does Tugboat compare to Probo?
And both tools are similar in that they're trying to do builds for pull requests. I haven't used Probo extensively, but Probo seems more geared towards the consumer market, more of a self-service kind of platform. I'd say the main difference with Tugboat is that it's for larger projects. Our plans start at 192 gigs of space, so we really want teams to be able to generate a lot of builds and keep those builds around without running out of space. Our platform is more of a custom platform. So rather than just saying, here you go, pay us X amount of dollars a month and here's access to everything, go for it, what we actually do is help you get your site set up. We'll make sure that the build scripts are right and get things imported for you. And then in terms of features, I'm not sure what all the features are that Probo has, but certainly the command line interface and the visual regression testing are some of the areas that we have and the directions we're going down. The other thing I should say is that during that custom setup, another thing that we do is provision out the server and give dedicated resources. So when we're doing your build, you're getting dedicated CPU, dedicated RAM, dedicated storage. We use Linode in the background, and if and when more space is needed, we just upgrade the Linode instance. A lot of the clients that we work with need that for performance and for security; they need to have those walled gardens. And then just the last thing I'll add is that we also do an on-premise version of Tugboat. There are some clients who've invested a lot of money in their hardware and are still depreciating those costs over time, or who for security reasons need to be behind their own firewall. So that's an instance that we support, too. Does that answer your question? Good question. Yeah, Matt, do you wanna talk about any of that?
Have you done any testing on your client projects? So on ours, we haven't done testing. The way that your project can interact with Tugboat is by putting a Makefile in the root of your project. Tugboat looks for that, looks for specific targets there, and during the build it will run whatever you put there. So if you wanna run a SQL sync or your Behat tests or anything in there, you run whatever you want. And as long as all of that succeeds, the Tugboat build will go through. Yeah, go ahead, Matt.

Yeah, I was just saying, one of the examples that we have on our project is that we don't have all of the files stored in each of the container instances, all of the image files and that sort of thing; it would be a lot. So we use the Stage File Proxy module, which will just hit the production server for any image requested. That can make the initial hit, when you go into a Tugboat environment, kind of lengthy, because it's pulling each individual image and then making an image style. But in your Makefile script you can do something like preloading the homepage, so that those images are fresh on the Tugboat instance before an end user even goes into it. So you can do things like that. You can get very creative with it, because it is just a Makefile batch script.

Yeah, and some of the examples of what other people do with the build scripts are things like installing a dummy set of users into the database with all the different roles and permissions, so that they can test to make sure that all the logins work, to make sure that e-commerce transactions are valid. They do third-party integration to make sure that those things work in each of the previews. So they just start to build out these scripts over time. And I put up here some example scripts and some of the documentation for the API. Good question. Yeah, in the back? All right, Sally Young, ladies and gentlemen. Yeah, we actually just got rid of Travis, and now we're gonna run our tests on Tugboat.
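That homepage-preloading trick can be approximated in a few lines: collect the image URLs out of the rendered homepage, then request each once so Stage File Proxy pulls it from production before a reviewer arrives. This is a hypothetical sketch (the actual fetch step is omitted and the HTML is a stand-in), not the project's real build script.

```python
from html.parser import HTMLParser

# Warm-up step sketch: parse the homepage HTML and collect <img> sources
# so each can be requested once to prime Stage File Proxy and image styles.
class ImageCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.images.append(src)

homepage_html = '<div><img src="/files/hero.jpg"><img src="/files/promo.png"></div>'
collector = ImageCollector()
collector.feed(homepage_html)
print(collector.images)  # each URL would then be fetched to warm the cache
```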
So as Matt mentioned, we use the Makefile, and so we can determine all the steps that go on in there. We run npm test, which actually goes off and calls a bunch of stuff in our Gruntfile, but they have a particular order, and if anything in that order fails, it won't even get as far as the command that actually builds the project for us. And so Tugboat will not build a URL for us, which is great, because it forces everyone to fix their tests before actually getting an environment.

I'm curious, has anybody else in the audience done a comparison of Probo and Tugboat? I'd love to hear from the audience if they've done that. Greg? Sure, so we use Probo. I would say one thing that seems different is that with Probo, they have a base Docker image that they can potentially add things to, but it sort of is what it is, and it's very basic. Whereas Tugboat has defined this concept of a base image that you rebuild periodically, and so that's gonna have a lot of things already set up in it for a Drupal-specific instance, which seems like it would make the test runs a bit faster. That also supports, I happened to be talking with Ben about this last night, the idea that we have some Ansible playbooks that we use for setting up a production environment, so we could run those in the base image once, and that's gonna take 30 minutes or something, but then the test containers would be exactly the same as what we're running in production. Whereas with Probo, doing that would take half an hour with every single test run, so we don't do that; we just live with the fact that there are some small differences, and we compromise a little bit there. Those are just two of the things I've noticed, if anybody else has any. Great, thanks, Greg. I should mention that we have a website, tugboat.qa, where you can see a little bit more about Tugboat. I'll mention pricing, too.
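The ordering Sally describes, where tests must pass before the preview build even runs, is a simple fail-fast pipeline. A sketch, with made-up step names standing in for the real Makefile and Gruntfile targets:

```python
# Fail-fast pipeline: run steps in order, and only build the preview
# URL if every earlier step succeeds.
def run_pipeline(steps):
    """steps: list of (name, callable-returning-bool). Returns the names
    of the steps that actually ran, plus whether everything passed."""
    ran = []
    for name, step in steps:
        ran.append(name)
        if not step():
            return ran, False      # a test failed: no preview URL
    return ran, True               # all green: preview gets built

ran, ok = run_pipeline([
    ("lint", lambda: True),
    ("npm test", lambda: False),       # a failing test...
    ("build preview", lambda: True),   # ...so this step never runs
])
print(ran, ok)
```

Because the environment only exists when the pipeline passes, broken tests get fixed before anyone reviews the work.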
So the difference between the pricing there, the three initial tiers, is the amount of storage and the amount of memory that comes with it. Right now there's feature parity across the board, even the visual regression testing. So we start out at $599 a month. The team size there is just an approximate team size; it works for any number of developers. The way that it works is that the storage that's available is the amount of room that you have for the builds. So if you're working on a large team without a lot of pull requests, or with a small database, you're gonna get a lot of mileage. What we tell clients is to usually start with the $599 plan and go from there, because we can always upgrade to the $999 plan, for instance. But if you're a small team and you're doing 30 pull requests a day, then there might be some size constraints that you run into, though Tugboat does its best to optimize those things. So when you close a pull request, it goes ahead and removes the build, and you can also do things like stop a build and that sort of stuff. But the bottom line is that we wanted to give you a lot of room, because we're trying to alleviate the overhead of QA management and QA servers and testing servers. So we wanna make sure you have a lot of room to do the builds that you need.

That's a very good question: what about GitLab integration? We're actually actively working on that. I'll also mention here, if you wanna give it a try and kick the tires for yourself, click on the demo link at the Tugboat website; we've set up a public GitHub repository. So you can go into that repository, make some edits, make a pull request, and you'll see it over here. I'm gonna click this. I have no idea what's here, because this is open to the public, but you're gonna see a sandbox site of, I'm gonna authorize GitHub, of all the different builds that are available. So Tugboat, Tugboat to Shrugboat, Bowie Fide. Let's look at that one.
So it's, turn and face the strange, yeah. So anyways, you can hack our site as a way to test Tugboat. Sally, cool, thanks, Sally. And then I should mention that most of the way developers interact with Tugboat is just through GitHub or Bitbucket itself, because Tugboat will post the URL to the build within the ticket. I don't know, Matthew, do you wanna talk a little bit about code development with Tugboat, or peer review with Tugboat, how that works?

Sure. So on our project, everything goes through a pull request and is peer reviewed first, and as we showed, it has the shell consoles right in the interface. So you can do things like, if there are Drush updates or code changes, you can do those sorts of things right in the browser. You don't have to bother with SSHing into the right container. It can be tricky sometimes to get that set up locally for yourself in order to do a proper test. And I think the most valuable thing about Tugboat is, as we've said, to offer up these environments to other stakeholders. People like the ads group, the research group, products: they often don't have visibility into minor changes that are happening day to day with the developers. So this is huge. For somebody in research, who's looking for their little variable to go off in the beacon, that usually never even gets looked at until it's in a staging environment, maybe with a whole bunch of other changes that might be in the way. But with Tugboat, you can just offer them the URL that has that one change, and you can have a discussion about it over that environment.

Thanks, Matt. And these are some of the containers that we have available. We've been adding more containers as clients have requested them. I know we have a client on WordPress, so we're adding the WordPress CLI and Nginx and some other things to start to support that. Do either of you want to talk about Acquia integration, or just integration with other hosting platforms?
I mean, Tugboat itself is hosting agnostic. It's not currently on our roadmap to provide production hosting. That's one of the things about our clients: they want to choose where they host, and they want to be able to bring their QA platform with them. But I don't know, I know we have some clients that use Acquia. Is that relatively straightforward to integrate with? Well, yeah. Like he said, Tugboat is hosting agnostic, so this doesn't get in your way. It won't block you from using your current workflow. It's just something off on the side that can help speed up your current workflow. So on the project I'm working on, we're hosting on Acquia. And yeah, like we've been saying, there's nothing that gets in your way. You just use the same Drush commands and Drush aliases you're used to for setting up your local, in your Makefile script, for setting up each Tugboat instance. In the initial setup, there might be some things needed, like setting up VPN access. But that's why we do have a custom setup, and that really is valuable, because Ben is there to help you figure that out.

Yeah. And so I did a little internal survey of developers that use Tugboat, and we talked about whether or not Tugboat saves time, how it benefits. None of this is a science, but we estimated that it saves one developer one hour a week in peer review: by not having to stop what they're doing, download somebody else's code, make sure all the libraries are installed correctly, and then start testing it. Just having a link to go look at and see the build, and then look at the repository, has been a huge time saver. And then the other component that we looked at was the amount of rework, and we estimated that it saves anywhere from two to three hours per developer per week on rework.
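As a back-of-the-envelope check on those survey numbers (one hour per developer per week on peer review, two to three on rework), here's the arithmetic for a hypothetical five-person team; the team size is my illustration, not a figure from the talk:

```python
# Estimated weekly hours saved per the survey: 1h peer review plus
# 2-3h rework, per developer, scaled to a team.
def weekly_savings(devs, review_hours=1.0, rework_hours=(2.0, 3.0)):
    low = devs * (review_hours + rework_hours[0])
    high = devs * (review_hours + rework_hours[1])
    return low, high

low, high = weekly_savings(devs=5)
print(low, high)  # a five-person team saves roughly 15-20 hours a week
```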
Because if you've got a team that's invested and connected and wanting to collaborate, and they can actually participate without a technology barrier, then by the time you're at the end of that two-week sprint cycle and you do the big show and tell, there are a lot fewer surprises. Everybody knows what they're going to see, and they've been an active participant in that process along the way. Yeah, Sally. Great, thanks. Any other questions? Take one more question in the back here. Yes, you're asking whether it does that automatically? When it sees a change to the pull request, it'll kick off a rebuild. Good question. And then it tears it down when the pull request is closed, right? Yeah, great. Thank you. Sorry, you're asking if you can manually kick off? Yeah, yeah, yeah. So the base image can either be manually updated by clicking the update button, or you can do it on a schedule with scripting. And if you need to rebuild a preview against a newer dataset, then you can go in and tell it to rebuild. Yeah.

All right, thanks, everybody. That's our time, I appreciate it. Stop by the Lullabot booth; we have Tugboat stickers and other fun stuff. We can give you demos and talk more about Tugboat. Oh, reach out to us. Contact me, matt at lullabot.com, or just use the contact form on tugboat.qa. Helena, which one with more bats? Yeah, thanks, Helena.