So, hi, I'm Rachel, and I'm going to talk to you today about UI development, operations, and some tools. I work at OpenTable as a user interface engineer, which means I fall about here on the spectrum. Over on the other end is where DevOps was born. So what does DevOps have to do with user interface development? I'm going to give you a couple of examples.

If you've ever had a situation like this, where you expected the button to be on the next line and not next to the input, something's clearly wrong, but the fix is really easy: oh, you've just got a typo. There are tools that will help you spot that, not just in your IDEs but also in a pipeline. Or a situation like this: you're using newer CSS properties, and the standard versions aren't ready in all browsers yet. So there are these browser-specific vendor prefixes, and not only are the properties prefixed, but the implementations are slightly different, so writing this out every time would really suck. There are also situations like this: QA takes a look at a page that you think is finished, and they find an inconsistency between two browsers. How could you spot that earlier? How do you fix it and keep anything like that from showing up again? Those types of problems and more can benefit from the culture and tooling sometimes associated with DevOps.

If you've never heard of DevOps before, or the name is all you've heard, you may be wondering what exactly it is, and that's actually a matter for a lot of debate. But there's a common image in descriptions of DevOps that reminds me of a situation we've probably all encountered. You know when you're working on a project and there's someone else who's doing all the design? They spend these long and tense hours laboring over a Photoshop file. The first you see of it is when they email it to you, and you're expected to immediately code it up. There's a lot that we can't tell from these files, usually, like how to handle names that wrap when they don't in the picture, or how things should look on smaller screens. And a big one is: this should definitely animate, right? We can't tell from a static image how things should move. Even if we get notes that give us a hint, like "this should slide out," that leaves a lot of wiggle room. So it turns out that "design complete" isn't nearly as complete as it sounds.

In a workflow like that, it's almost like there's a wall between the UI design phase and the UI implementation phase. When we try to pass our work over that wall, sometimes important things bounce off and hit us right in the face. Aside from being figuratively uncomfortable when that happens, it delays our projects. We've probably all heard that designers and developers barely speak the same language. It turns out that developers and other developers have that problem too. There's traditionally been a similar wall between application developers and operations, and this wall has been a barrier to taking ostensibly complete code and putting it in front of customers. Got a hungry kitty there. DevOps is often said to be about getting rid of that wall and getting features in front of customers faster. And DevOps does that with a culture that's focused on improving communication, cross-functionality, measuring things and automating them, and in general on bringing more dev into ops and more ops into dev. For example, one thing that's associated with DevOps is continuous process improvement. That word "process" is a major keyword.
Because one way to bring a little bit more ops into our dev is to recognize that operations is not just the name of a role in an IT organization. It also refers to how we operate, the processes we rely on to ship projects. So let's take a step back and talk about what process really is in this context.

When I started making web pages in the '90s, I would brainstorm by sketching layouts on paper. I was a real artist. Then I designed graphics in Paint Shop Pro. And I implemented it all by writing code in Notepad, not Notepad++, just Notepad. I verified it by checking it in browsers like IE, Opera, and Netscape. And when I was ready to release my work, I used WS_FTP to put it on a DreamHost server. So my process was really simple, my tools were very few, and I personally interacted with all of them through a GUI.

When I started working for larger companies, more parts of that process became the responsibility of other people. I worked with product managers, designers, back-end developers, database programmers and administrators, sysadmins, all sorts of people. The increased complexity and increased scale of the products that we were creating together required more rigorous processes behind everything, especially things that I guess I'd felt were deceptively simple, like copying a file of new code onto the live server. There is a huge amount that I don't know about my co-workers' specialties. And since the technology landscape is constantly changing too, it's really easy to end up with more of those walls between us, over which we try to toss our finished products and probably occasionally shout at each other.

Process improvement aims to fix that. How? Well, we're in technology, and we can apply technology to the problem. We also happen to be experts at all kinds of things: abstractions, writing code, user experience. We can apply everything we know about those to our process and, by doing so, improve it.

So here's how I think of process. Our overall process actually echoes that simple '90s workflow, only now it's distributed across an organization. It starts with ideation, progresses to design, then to implementation, to verification, and finally to release. DevOps is primarily concerned with the implementation, verification, and release parts of the process. Product and design concentrate more on the ideation and design parts of a project. Representing implementation as one thing is a little bit deceptive, though. There are actually a lot of cooperating implementations going on in that phase, and what we really need to do is parallelize them. Otherwise, we're going to make the same old mistake over and over of tying application development to incomplete UI development, to the detriment of both.

And we know better than to do this, really. If you take design, for example, a UX design process might involve something like brainstorming, prototyping the ideas in different fidelities, then doing usability tests, and ultimately producing a comp to hand off to the implementing team. This process can, although it doesn't necessarily, happen completely independently. And best of all, it has a clear deliverable. Just like design, UI development can benefit from having a formal process with two major goals: to better support design needs, and to make UI code more reliable and easier to integrate for application developers. In order to do that, there are two key questions that we need to answer: what are we delivering, and what's the process for it?
The answer to "what are we delivering" is definitely more of an architecture question, but it interacts with the process question. I have found that a UI component architecture is a pretty compelling answer. So let's talk a little bit about UI components, what they are and why they're so good.

I define a UI component this way: there's ideally a single file that you import, and that file already includes, or is able to pull in, everything a user interface object needs to function: JavaScript, CSS, HTML, images, and even copy. To use the UI component, you use a custom tag. UI components get us closer to our goals of improving our process, better supporting design needs, and increasing the reliability of UI code and applications.

Building UI components helps design, because you can enhance a component over time with all the details that make an interface engaging, and have that great design appear consistently everywhere across the product. Breaking whole-page comps down into smaller constituent parts allows us to start designing in the browser sooner. And we avoid that problem where we can't really change anything in an evolving design because if we touch one part, all the other parts fall over. As time passes, we can start to prototype different experiences for new features with our own already-production-quality components.

UI components improve the reliability of UI code and applications by encapsulating the script, styles, and markup that need to work in concert behind a single custom element. For example, internally controlling the template of the UI component reduces the chances that the DOM will change in a way that breaks styles. The single element also makes a UI component easier to use, because instead of having to potentially change something in HTML and in JavaScript to enable an option, a single attribute on the custom tag can transparently orchestrate things. That's perhaps the most important part: with UI components, we provide a clear programmatic interface that simplifies complex UI-specific concerns for other developers.

And because UI components are components, they help clarify our process too, by giving us that clear deliverable that we need, one that can be specified, documented, and tested. They also allow us to update UI code without also having to update application code, or the other way around, which is really nice.

So that answers what we're delivering. Now we have to figure out what the right process is to produce it. Because while delivering a UI component is a really good start, we have to have something to keep us honest, something to make sure that our UI component continues to work as advertised every time we make a change. This is where we finally get into the tooling mentioned in the title. The tools that we'll be talking about help us create an automated pipeline in support of our newly emerging UI process. An automated pipeline isn't the abstract, bird's-eye kind of view of our process that we've been talking about so far. At this point, we're ready to zoom in and think about when we actually sit down to write code, verify that it works, and release it. At that point, there are a lot of specific, routine tasks and useful enhancements that we can automate. And if we run those tasks in a sequence automatically, the result is an automated pipeline. Having an automated pipeline is a good thing for several reasons. If we don't automate, we have to do every granular thing in our process manually, and we have to remember to do it every single time.
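To make that custom-tag idea from a moment ago a little more concrete, here is a rough sketch of what that kind of programmatic interface can look like, written with today's custom elements API. The tag name, attribute names, and class names are purely illustrative, not OpenTable's actual component.

// Sketch only: names are illustrative, not a real OpenTable component.
class OtButton extends HTMLElement {
  static get observedAttributes() { return ['disabled', 'full-width']; }

  connectedCallback() {
    // The component owns its own markup, so application code never has to
    // reach into its DOM, and markup changes that break styles are less likely.
    const label = this.textContent.trim();
    this.innerHTML = '<button type="button" class="ot-button">' + label + '</button>';
    this.update();
  }

  attributeChangedCallback() {
    // A single attribute on the custom tag orchestrates script, style, and markup.
    this.update();
  }

  update() {
    const button = this.querySelector('button');
    if (!button) { return; }
    button.disabled = this.hasAttribute('disabled');
    button.classList.toggle('ot-button--full-width', this.hasAttribute('full-width'));
  }
}

customElements.define('ot-button', OtButton);

An application developer then just writes the tag, something like <ot-button full-width>Find a table</ot-button>, and toggles attributes to get the variations they need, without ever touching the component's internal markup, styles, or script.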
So the more of our process we can automate, the more we can do with our time. We free ourselves up to pay attention to the parts of our jobs that require human creativity. Automating a pipeline makes our process faster, too. Although it takes time to set up an automated pipeline, it's generally quicker and more consistent than doing things manually, which results in a net savings of time over time. Second, automation helps prevent human error. We all make mistakes, and automated pipelines help catch them before they cause problems for customers. One of my coworkers also says, "UI tests stop me from making a mess. I don't have to understand 100% of the stack. I can make changes and know that I'm not gonna break parts of the code I don't usually work on." Automated pipelines make it easier and safer for people with varying experience and varying specialties to contribute code.

That answers how we're gonna produce it: we're going to create an automated pipeline to support our process. From here, we'll talk about the types of tasks that we'd want to include in an automated UI pipeline and why we'd want to use them. Then, to wrap up, we'll talk just a moment about actually setting up an automated pipeline. There are a few loose categories of tasks that we'll set up: build tasks, test tasks, and distribution tasks. Build tasks help during the implementation part of the project, test tasks help during the verification part of the process, and distribution tasks help during the release part of the process, giving us our UI pipeline.

So we're gonna start with build tasks. These are the specific tasks that are part of that pipeline. Building comes first because most of the time the code you test will be the code as it runs in the browser. You may have already used some tools that fit in the build category if you've ever used a CSS preprocessor or even a CSS validator. There are three specific build tasks we'll be talking about: linting, preprocessing, and post-processing.

So let's start with linting. What's the difference between validation and linting? Both analyze code for errors of syntax, but code can be technically valid while also violating an industry best practice. So, for example, the validator would catch all the syntax issues on the left, and a linter might warn you that your selector on the right is super redundant. Linting typically makes a separate validation step redundant, though. In the introduction, we looked at that typo that broke our intended style. A CSS linter would spot that typo and warn us about it. A linter can also verify that code matches our internal code style guidelines, which is one reason I think it's nice to have a linting step in your pipeline instead of relying on everybody's individual editor. Generally, the code that you should lint is the raw source code. So if you use a preprocessor, you should definitely find a linter for that: a CSS linter won't be able to handle the syntax differences, and a Sass linter will have options specific to Sass features.

Then next, if you're into that sort of thing, would come a preprocessing step. Some people use preprocessors because they prefer a different syntax, like a whitespace-sensitive one. Generally, though, the reason for using a preprocessor is that it adds functionality: if/else logic, control structures, variables, math, mixins, all kinds of stuff. If you're comfortable using those features, they can definitely make your code easier to write and more maintainable.
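As a small sketch of how those first two build steps can be wired into a Grunt-style pipeline: grunt-contrib-csslint and grunt-contrib-sass are real Grunt plugins, but the file paths below are made up for illustration, so treat this as a starting point rather than a drop-in config.

// Gruntfile.js (partial): lint the raw source, then preprocess it.
module.exports = function (grunt) {
  grunt.loadNpmTasks('grunt-contrib-csslint'); // lints plain CSS source
  grunt.loadNpmTasks('grunt-contrib-sass');    // compiles Sass, if that's your source

  grunt.initConfig({
    csslint: {
      src: {
        // A shared .csslintrc keeps the rules in the repo instead of in
        // everybody's individual editor. If your source is Sass, you'd
        // swap in a Sass-aware linter here instead.
        options: { csslintrc: '.csslintrc' },
        src: ['src/**/*.css']
      }
    },
    sass: {
      dist: {
        files: { 'dist/ot-button.css': 'src/ot-button.scss' }
      }
    }
  });
};

Running "grunt csslint" or "grunt sass" then does the same thing on every machine, which is what makes these steps reusable later in the pipeline.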
But what if you don't want to deal with a different language? After all, CSS is probably going to add variables; why not just try those out? And for math, it has the calc() function, which addresses most everyday needs. And instead of using mixins for vendor prefixes, it'd be a lot simpler to just write the standard property and have something else add the prefixes for you. That's where post-processing comes in. This is the baby version of post-processing, which is going to sound silly right after that last talk. By the way, the difference between preprocessing and post-processing is pretty much what it sounds like. With preprocessing, you write Sass and you get CSS after the processing. With post-processing, you write CSS, not really a different language, and you get transformed CSS after processing it. At the start of the presentation, we looked at a problem that existed because some browsers didn't yet support the standard version of a property. There are post-processors, Autoprefixer for example, that solve this problem by taking a style sheet with standard CSS properties and outputting the vendor-prefixed properties. These not only save time, they actually help with maintenance, because they automatically stop outputting prefixes when support for the standard is wide enough.

So the next category of tasks that we'll automate is testing-related. Testing should always occur before distributing code, and you should always test built code. If you have ever visited a site in multiple browsers just to confirm its appearance, or if you've used something like Browsershots or YSlow, then you've done some testing that could definitely be automated. There are three types of tests we'll talk about that are useful for UI implementations: unit tests, visual diffs, and end-to-end tests.

Starting to test can be really, really intimidating. Testing a GUI is a pretty specific area, and the vast majority of information on testing is not aimed at languages like CSS and HTML. There's also a lot of vocabulary in testing, including lots of different words for very similar things, most of which end up being used in contradictory ways by different people, and some of which originated in and refer to very different kinds of software development than what we do on the web. There are numerous reasons why we should test anyway. Writing tests gives us an opportunity to think about our implementations from another perspective, which typically leads to better code. Tests prove our implementation works the way that it's supposed to, and they help us keep it working. When we test, we can fix bugs, refactor, and change visual design while ensuring that we haven't broken something. If we did break something, a test should fail, and then we'll know it before we go ahead and release it. It's pretty emotionally gratifying to see tests passing, too.

The first type of testing that we're going to talk about is unit testing. A unit test verifies the functionality of a piece of source code. With JavaScript, this can be really straightforward: you take a function, you pass in an argument, and then you verify that the value returned is correct. With CSS, unit testing is not quite as clear. If you have a button, you could definitely check that the computed style of the button matches what you wrote. But "correct" is a little bit more of a complicated proposition. The cascade and inheritance of styles mean that we often don't have, or even actually want, a static checklist of exactly what styles should apply to the button component.
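Mechanically, that kind of computed-style check can look something like this, assuming a Mocha-style runner and an assertion library loaded on the component's test page; the tag name and the property being checked are just illustrative.

// A CSS unit test asserts on computed style in a real browser,
// not on the source stylesheet.
describe('ot-button', function () {
  it('keeps the pointer cursor we gave it', function () {
    const button = document.querySelector('ot-button button');
    const style = window.getComputedStyle(button);
    assert.equal(style.cursor, 'pointer');
  });
});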
There are things that we want to be different depending on context, like width. This doesn't make unit testing CSS impossible; it just means that what you test takes some extra thought. In our button example, we have two different structural variations of the button. One is the default button: it's meant to display inline, next to whatever content is beside it, and so it pretty much has to be inline-block or inline-flex. Then there's another structural version that's meant to be on its own line and completely fill the available space, and that one has to be either block or flex. We could check those properties with a CSS unit test under any circumstances, because any values other than those, for those two particular structures, would mean that something fundamental is broken in a way that's likely to affect how the button looks or acts. Those are the kinds of rules that you could consider unit testing. However, because of the context sensitivity of the cascade and inheritance, visual diffs are often a more effective approach to CSS testing. My perspective is that CSS unit tests are best as an addendum to visual diffs, because then they can provide useful specifics about the cause of a diff.

So remember at the start, when we looked at that layout problem that only showed up in one browser, and I asked: how would we spot this early on, and how would we make sure, after we fixed it, that it stays fixed? Visual diffs are really useful for verifying layout from release to release. A visual diff process automates the act of opening a page in a browser and then taking a screenshot of it, like that. When we first release a UI component, we take a look at the screenshots and we determine whether it looks the way it should. If so, we make that screenshot the baseline and we save it. After that, the next time the process runs, the same automatic screenshot is taken, only now the process has something to compare it to, so it compares it to the baseline. If the two screenshots are different, like they are here, the visual diff test fails. A visual diff process doesn't eliminate the need for manual testing, but it decreases it, because we only need to personally review the screenshots either when we're creating baselines for the first time or when problems occur. Visual diffs run a lot faster than manual testing, so we can test more browsers and more screenshots in much less time. Another interesting thing about UI components specifically is how they interact with visual diffs: an individual UI component becomes a visual unit with a limited number of variations, which we can then thoroughly test and release that way. I don't know how this happened, it was a to-do slide.

So there's one last type of testing to talk about: end-to-end tests. This is one of those pieces of testing vocabulary that can be pretty confusing. Sometimes people will tell you that end-to-end tests should run against a fully integrated product, with all real services and all real data. Other times, people say no. At heart, end-to-end tests step through scenarios that simulate real user workflows from beginning to end. For a simple example, an end-to-end test might go to a page, find an input, type in a name, click a button, and then verify that the name was saved. End-to-end tests are good for verifying that user interfaces are actually functional: that clickable areas are clickable, and that menus open and close.
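As a rough sketch of that kind of scripted workflow, here is what it can look like with the selenium-webdriver package for Node; the test page URL, the selectors, and the expected text are all illustrative stand-ins.

// An end-to-end test drives a real browser through a user workflow
// and fails if any step of it stops working.
const assert = require('assert');
const { Builder, By, until } = require('selenium-webdriver');

async function guestNameIsSaved() {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('http://localhost:8000/test/guest-list.html');
    await driver.findElement(By.css('input[name="guest"]')).sendKeys('Rachel');
    await driver.findElement(By.css('ot-button button')).click();
    await driver.wait(until.elementLocated(By.css('.guest-list li')), 5000);
    const savedName = await driver.findElement(By.css('.guest-list li')).getText();
    // If the button were inert (say, stuck with a disabled attribute),
    // the list would never update and this assertion would fail.
    assert.strictEqual(savedName, 'Rachel');
  } finally {
    await driver.quit();
  }
}

guestNameIsSaved().catch(function (err) {
  console.error(err);
  process.exitCode = 1;
});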
For example, a button UI component could have an end-to-end test that sets up a button, tries to click the button, and then verifies that an action occurred. If there were an issue with the button that made it inert, like the one below, due to the disabled attribute being present at the wrong time, the end-to-end test would catch that.

So let's talk about a couple of specific testing techniques, test pages and mock data, before we move on to distribution. For UI components, we create test pages. The test page for the button contains nothing except the code required for the button component. Test pages are multi-purpose: they're useful for developing on, as a place to view work in progress, and the same test page can be used to run visual-diff screenshots and end-to-end tests against. Mock data is also really useful in testing user interfaces. Mock data is data that exists specifically to provide stable information to test with; it isn't real data coming live from a production server. An example might be a mock guest list. Setting up mock data makes it easy to test scenarios like this one, where the label of the button is very short versus very long. The longer text could have overflowed and looked bad, or, if not, it could have broken the layout of a page using the button. Setting up a test page with mock data and using visual diffs helps ensure that that doesn't happen.

Our code is now all built and tested. The last category of automatable tasks that we'll cover is distribution-related. Distribution starts at the point where we take new, verified code and share it with others. If you've ever used Bower or NPM to consume an open source library or plugin, you've benefited from formally distributed code. Specifically, we'll talk about pull requests, versioning, and making releases available to a package manager.

It's all well and good if we have a build process and test tasks that each individual developer can run locally, like this. We do need that, but the value of an automated pipeline really becomes apparent when we move away from our personal laptops. For example, we can integrate our pipeline into GitHub. When using the GitHub workflow of forking a repository and submitting a pull request, we can use a continuous integration tool to automatically test the pull request code using the pipeline that we just talked about. A continuous integration tool is a tool that supports the practice of each individual developer merging their code into the main codebase multiple times a day, and you can see here how that would be useful. If you automatically build and test every pull request, it makes it really clear which pull requests can be safely merged. Since that pull request passed its tests, it could be code reviewed and then merged. At that point, the repository would contain a new version of the UI component that's theoretically ready for release, and we can continue to use the continuous integration tool to automatically release that new version of the component.

So let's talk about versioning. Versioning is good even just for talking about releases. For example, if one of your versions is totally messed up despite your best efforts, or has a bug, you can say: make sure you have at least version 1.2.3, it fixes that issue. And when it comes to versioning, it's best to take a look at the semantic versioning document. Another useful thing about versioning is that we can use that version number as a tag on the associated commit.
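To give a small, purely illustrative example of how that "at least version 1.2.3" conversation turns into configuration: a consuming application's bower.json can ask for a semver range like this (the component name is made up).

{
  "dependencies": {
    "ot-button": "~1.2.3"
  }
}

Under semantic versioning rules, "~1.2.3" means any 1.2.x release at or above 1.2.3, so the application picks up the bug fix without picking up breaking changes.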
We can then use the continuous integration tool to create that version tag automatically. Tagging a UI component with a version number, simple as it is, makes it consumable in at least two ways that I know of. First, someone could go to GitHub, find the release, and download it as a zip. Second, we can publish our component to Bower, and someone can use Bower as a package manager, since Bower uses GitHub and GitHub tags. This is the absolute minimum workflow required to formally release a new version of a UI component. It takes time to build up to this workflow, though. You need to have good test coverage and set up the continuous integration tool, and that doesn't happen overnight.

So that covers the basics of what an automated pipeline is and what we want it to do for UI, from building to testing to distributing code. Now that we know specifically what we want our pipeline to do, let's take a peek at how we might set it up. I'm going to put some actual setup code on GitHub afterwards if anybody wants to look at it. There are a few basic things that we need: a GitHub repository, a code editor, Node.js, and NPM. And if you're a GUI addict like me, be warned, this uses the command line a lot. Something weird is going on. All right. I've gotten to where I actually really like the command line; it kind of takes me back to my early days. However, it's not there for fun, but for a more practical reason: we can directly reuse what we do with these command-line tools across platforms in a way that we can't with GUIs. GUIs are great for providing friendly interfaces to individuals on their personal machines, like Compass.app or Scout, but unfortunately we can't effectively automate them. So instead, we'll use Grunt to automate all of our tasks. There are other options, and I don't really recommend one over another, but one of the nice things about Grunt is that there are a lot of plugins that can be used with minimal effort.

Assume we've got some pre-written code for a simple component, like the one we've been talking about; the component sets up the styles and template for a button. First, we need to install Grunt. Since we've stipulated that we have NPM, we can use NPM to install it. Then, to use Grunt after installing it, we create a Gruntfile.js. In the Gruntfile, all you need to do is load the task plugins that you're using, initialize the Grunt config, configure your specific tasks, and set up any custom tasks. Setting it up is really simple. This, for example, would set up CSS Lint, as we talked about at the beginning. You can do basically the same thing with all of the various plugins that I mentioned; it would just be a matter of configuring them. Like I said, I'll put the rest of that online.

The takeaway that I hope you have from this is maybe a little bit of context about bringing operations more into UI, and also the ability to perhaps communicate a little better with the operations people at your office. That's about it.
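For reference, the overall shape of a Gruntfile like the one just described looks roughly like this. The Grunt calls and plugin names are real, but the specific plugins chosen, the paths, and the task groupings are illustrative, not the exact setup from the talk.

// Gruntfile.js: load the plugins, initialize the config, configure the
// tasks, and register custom task sequences.
module.exports = function (grunt) {
  // 1. Load the task plugins you're using.
  grunt.loadNpmTasks('grunt-contrib-csslint');
  grunt.loadNpmTasks('grunt-autoprefixer');

  // 2. Initialize the Grunt config and configure each specific task.
  grunt.initConfig({
    csslint: { src: { src: ['src/**/*.css'] } },
    autoprefixer: {
      dist: { src: 'src/ot-button.css', dest: 'dist/ot-button.css' }
    }
  });

  // 3. Set up custom tasks: aliases that run a whole sequence, which is
  // what turns individual tasks into a pipeline. Test and distribution
  // tasks would be registered the same way with their own plugins.
  grunt.registerTask('build', ['csslint', 'autoprefixer']);
  grunt.registerTask('default', ['build']);
};

Running "grunt" with no arguments then runs the default sequence, and a continuous integration tool can run exactly the same command on every pull request.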