All right, hi. This is more of a turnout than I was expecting for such a dry topic, so thank you guys for coming. I also want to say thank you very much to the conference organizers, and thank you for the amazing keynote. That was really great; I walked out of that feeling really inspired. So hopefully I'm not a letdown right after that. But maybe you guys will find this interesting, we'll see. So firstly, hello. My name is Mike Calhoun, and I'm super happy to be here in Pittsburgh. This is the birthplace of the Big Mac, which I didn't know about. Also the Klondike bar is from Pittsburgh. That's cool. I spend a lot of time in Philadelphia, so I finally got a chance to decide between Wawa and Sheetz for myself. And I will not disclose my pick; I'm not picking any fights or stoking tribalism here. My wife actually has family from Johnstown, so we often get asked, have you talked to Bob lately? Yeah, okay. That aside, I'm from Vancouver, Washington, which is probably known more for its more popular suburb, Portland, Oregon. I live there with my wife. We have two cats. We have a new baby. Also, the only thing that I called out specifically in my speaker bio is our corgi Ruby, aptly named. Pause for a while. Okay, you may know me from: I am the current chief technology officer at Life.io, though I am sad to be stepping down from that role and starting at Stitch Fix next month. I'm super excited for that, though. So, I talk really fast, and if you can hear the tremble in my voice right now, I'm trying really hard to concentrate on that. Last night I did a test run and I was at like 35 minutes exactly. I previously gave a talk about failure, and I went through those slides so quickly I think it was maybe 10 minutes long; everybody got extra time to grab coffee. I'll try to do something similar. And I'm excited to talk about something other than failure. Let's not talk about how bad I am at my job. And now a little bit of preamble.
I'm gonna reference a lot of companies and a lot of products. I'm not specifically endorsing any of them over their competitors. I think the products we used were great. I think the products we didn't use are also great. A lot of them have people here, sponsors, whatnot. I love everybody, everybody is super awesome. This is a great industry we work in. So please don't take any of this as an indictment or an endorsement. And then, I have a podium, so I'm gonna say a few words on data. This is not exactly connected to the topic of my talk, but I think it's really important that we keep talking about it. We generally work in information and data, and our users extend us a certain amount of trust, expecting us to do the right thing with that data. These are becoming huge topics, and they are getting thrust into a national conversation, especially in light of things that are happening with Facebook and with Cambridge Analytica. When other industries made rapid advances in their arenas, regulatory control and oversight emerged. Look at the Industrial Revolution: we were forced to establish fair labor practices, overseen by the government. Nuclear science developed, with the use of atomic weapons and the use of nuclear energy, and we established the Nuclear Regulatory Commission. The EPA emerged in response to abuses from industry. And so maybe something akin to a consumer data protection agency is what we need. I'm not the person to litigate that. I am not in politics. I just, again, someone gave me a microphone. That said, we do have to consider that not all societal and political problems have technical solutions. But until then, it is up to us to be aware of the laws that attempt to govern our industry and to broker trust with our users. That unseemliness aside, I wanna outline a few terms for this talk specifically. These were things that we came into contact with.
And I just think it would behoove us to establish a shared vocabulary. So first is the Health Insurance Portability and Accountability Act, HIPAA. This is the main culprit behind the initial steps we took that turned out to get us into trouble, or at least forced our hand on a lot of what we wound up building. It was enacted in 1996, and it's for the United States; a little foreshadowing, I guess. It had two main purposes. One was to provide continuous health insurance coverage for workers, so there was a logistical coverage component to it. And then, more relevant to us, it was to reduce the administrative burdens and cost of health care by standardizing the electronic transmission of administrative and financial transactions. I wish I could read that without my notes, but I can't. So it's really the first time the government is taking steps to protect your electronic health data. And this is really important, because before this there hadn't been much in this arena. We didn't have much in terms of rules about disclosures of breaches, or what the practices are. And they're still a little ambiguous. There are parts of HIPAA that literally say a consumer of this data will make a best effort. Well, how do you define a best effort? I don't know. I didn't write it down on a piece of paper and leave it at the coffee shop. In 2010, they add breach notification rules that extend to cover non-HIPAA entities. So now it's not just doctor's offices and hospitals; anybody capturing this data, if they encounter a breach, is required to notify the Health and Human Services Office. And in 2013, they add what's called HITECH, the Health Information Technology for Economic and Clinical Health Act. So they continue to expand the rules and regulations to accommodate new and developing technologies. And then in 2016, we see additions and provisions for cloud services, as that is the direction the industry is gradually starting to take.
A little late to the game, but required nonetheless. I guess we can't expect rules and regulations to keep pace with technology; that's a dream. Next up is data sovereignty. This is sometimes used interchangeably with data residency, and that's not right; it's similar, but not exact. Data sovereignty is the idea that data is subject (or data are, data is plural) to the laws and governance structures of the nation where it's collected. So, for example, I could be a German citizen, but I live and see a doctor in the United States. If my data is stored in the United States, it's subject to United States law, not German law. So in the case I just outlined, it would be subject to HIPAA. The common criticism here is that data sovereignty measures do tend to impede, or have the potential to disrupt, processes in cloud computing. This was a big reason why they started to make those cloud computing provisions, to loosen those restrictions. Data residency is a law that basically requires, let me say this right: if you live in a country that has a data residency law, your data must be stored and processed within the geographic boundaries of that country. Oh, it died. Oh, okay. Turns out you can kick over the mic if I cling to this thing enough. Oh, hey, cool, I'm good, awesome. All right, let's try this. Can you guys hear me okay still? Cool, this is a good time killer. Okay, so yeah, processed and/or stored inside the country. Australia is a great example of this. If you're capturing any kind of health data there, AWS Sydney, foreshadowing maybe a little bit, spoiler alert, is a great solution to that problem. So let's talk about continuous deployment. And I may have cheated; I'll fully admit to that. Continuous deployment versus continuous delivery: continuous delivery means your code is in a constant state of being ready to be deployed.
Sometimes you can't just automatically trigger that. We had client-related concerns; they wanted to verify some things on our staging servers. Oh, I'll go through it at double speed now. Okay, for production, this was continuous delivery. The example I'm going to give is more akin to continuous deployment, though our real setup is maybe half a step short of that. I like that quote I dug up there from Twitter. Okay, so let's actually look at the case study aspect of this. The problem, I don't know if it's the problem, but: we're gonna be a healthcare startup. All right, this is exciting, everybody's fists are in the air. I do a lot of image searches on Google and put the results in my presentations. So, great, we're gonna be a healthcare startup. We're gonna capture sensitive user information, and we're gonna expect our users to trust us with it. Let's see how it goes. But more specifically, we're going to be a SaaS startup. So we're gonna put this application out there in the world, we're probably gonna use a cloud provider, and it's going to be a multi-tenancy single platform; everybody will log into it. Which brings up this occasional myth of convenience: we have great tools. We just saw an amazing keynote about this. We have some great tools for reducing barriers to entry. I don't need to know DevOps; I can deploy this to Heroku. I don't necessarily know much SQL; I have ActiveRecord for that. I don't have to build my own version control; I have GitHub. Back in the day, we all had those cartoon turtles with SVN. And there's a whole world of CI apps out there to do this. And this encompasses a majority of what we can reasonably expect to need. But then, sometimes, you wind up in these situations. Back to our startup: you've decided to be a SaaS company. You're going to collect sensitive user information. We're going to assume all of our clients are in the United States.
Whoops. And then, let's have our first client not be in the United States. And let's look at their laws. And let's evaluate our infrastructure. And you come to one major conclusion: you've made a huge mistake. All along, you've made these assumptions that are now completely thrown out the window. So you have to take a look at your international logistics. This was the first time we'd considered requirements beyond HIPAA. And this is kind of weird because, as I said, Australia and Canada have their own sets of rules. The United States has less restrictive rules. In some South American countries, you see these rules written into their constitutions. The UK had a set of rules and then gave them up to join the EU, and then something else happened where they're developing their own set of rules again. So we took a look at these potential global entities, because we knew this was going to be a problem. We worked with a group like, well, I'm from the Oregon area, the Portland area, so think of Nike. They have a headquarters in Beaverton, Oregon, so you're going to have a fair amount of users there. But they're also global: you're going to have offices in Africa and Australia and Asia and South America. And these are all going to have their own sets of rules. So you have to take stock of what works. And that's when we came across AWS. You can see the United States is more than covered. There's one up there in Canada, a few all over the UK. For us, the big mover was the data center down there in Sydney. And we realized that we weren't replacing our Heroku setup; we just wanted to augment it. We needed to accommodate these rules, and we knew we had a place where this was possible now. We had AWS. We knew we had our American server. The question was, how were we going to integrate this with our tool chain for deployment? So we came to option one, and maybe the images on these will give away how we went with this.
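As an aside, the constraint we were designing around can be reduced to a few lines. This is a sketch, not our actual code: the country-to-region mapping and the function name are hypothetical, though the AWS region identifiers are real (ap-southeast-2 is Sydney, ca-central-1 is Canada, us-west-2 is Oregon).

```ruby
# Hypothetical data-residency routing: each country with a residency
# law maps to an in-country (or in-boundary) AWS region.
RESIDENCY_REGIONS = {
  "AU" => "ap-southeast-2", # Australian health data must stay in Australia
  "CA" => "ca-central-1",
  "US" => "us-west-2",
}.freeze

def region_for(country_code)
  RESIDENCY_REGIONS.fetch(country_code) do
    # No compliant region: storing this user's data anywhere else could
    # violate a residency law, so fail loudly instead of guessing.
    raise ArgumentError, "no compliant region for #{country_code}"
  end
end
```

The point of raising rather than falling back to a default region is exactly the lesson of this talk: a silent fallback is how data quietly ends up on the wrong continent.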
So initially we had this discussion: what if we created a new branch for every region? You'd have production USA, production Australia, production Brazil. And this seemed like the most obvious, let's-come-to-a-quick-conclusion answer. Here's what we do. As a company, we offer some basic white labeling for our clients, so this seemed like it would make that a lot easier to accommodate. You're going to handle region-specific requests more easily. If you have translations, for example, I can just swap out the English and put in whatever else I want. And there's a low initial time cost; we're just creating branches, and we've all created a new git branch. It's pretty easy. But the disadvantage is that this becomes a complete logistical nightmare. This is what that image I have there was getting at: imagine your code gets approved on staging. Everything's looking good. And you're not just merging it into production now; you're merging it into five different production branches and keeping all of those squared away. And then God forbid you wind up in a scenario where one of those production branches doesn't get the same code as another one. Maybe that's a translation that got missed, something like that. It's just not sustainable, at least in a way that's timely and efficient. And then we looked at option two, which is what we called regional deployments. This got us to the point where we maintained one code base. It meant that all of the translation files would have to sit in the same repo. It continues the notion of the single platform multi-tenancy, so, granted, we're cloning the entire application everywhere, but it's still just the one application. We had a disadvantage though: some loss of functionality. As a joke, maybe it was later while I was working on it, I called this international conglomerate business code.
Let's stick with Nike as an example, not an endorsement, just let's stick with Nike. If they have a user in their Beaverton headquarters and a user in their Sydney headquarters, we offer some light social capability, so those two users could interact with each other. In this setting, they can't, because the Sydney user's data can't leave the Sydney database because of its geography. I have more on that later, but that would be the immediate loss: you're confined to only connecting socially with users in that same office. Anyway, to build this as I put it here: we go with option two, because that was the case. We talked to the clients, and they said that's fine, maybe down the road we can do something to resolve this. So that moves us to the implementation phase. AWS offers a great service, Elastic Beanstalk, that we used, and it worked out really well for us. We knew we were going to continue to use GitHub; that was simple. We used Semaphore for our CI server, and we continued to use Heroku in the United States. So there were no changes for our existing client base or to our existing infrastructure. We were just adding these new pieces. I would say "bolting on," but that kind of is dismissive towards it; we were just augmenting and expanding. We introduce AWS, and we use the containerized offerings of Elastic Beanstalk. Semaphore had great support for AWS, and I can run this. I can't show the app we used, or we built, excuse me, but I made this little demo app, and I hope this shows up okay. Can everybody see that okay? If not, I'm gonna kind of walk through it. It's really easy, it's really light. On the top you have a test suite. There's one spec, one feature spec. It just says it expects the goggles to equal nothing. This joke will come together. And it's just gonna render what's on the bottom: a hello world page that shows an image. And then I have a small test suite.
I tried this once in a video, and I tried this once in a live coding session. Neither of those went well, so we're going with screenshots. So you see the test suite's passing, and you can see it's on the local computer; localhost is up there at the top, and that's all it does, very small. This is all with the intention of getting this to move a little quickly. So here's the Semaphore dashboard, and there are a few things to call out on this. At the top, I have my master branch, and that's passed. For all intents and purposes, I'm using this as my production branch. Below that, there's this little section called servers, and we have our United States Heroku server, where we're deploying this to. And this mirrors what we had for our infrastructure at the start of this whole scenario. So then, we add our new application in AWS. This is your dashboard for Elastic Beanstalk. You can see in the top corner there, we know we're in Oregon; that's the region we're going to deploy this one to. For some reason in this scenario, Oregon and the United States are two different groups that have their own laws, which sounds crazy, but actually Canada passes rules governing health data by province, so it's not that crazy. I have a container, I have a single app, a little demo environment RailsConf 2018 app. This seemed much funnier when I wrote it, and apparently this thing is my mom, because it's calling me by my full name. I heard one chuckle, all right. Oh, you can't see it? Oh no. I don't know how to do anything about that. It just says my name. So there is an arrow, the first arrow on the right, pointing to a toolbar, and you can see the region you're in. So that would say Oregon in this case, and it will change to say Sydney; it's AWS's way of letting you know you're on the correct server.
Let's see, I'm gonna look over here more often now, so I know. Okay, so back to servers. We're gonna add this new one that we just made, and now, oh, all kinds of sounds. So on this next one, I put three screens here; originally these were separate, and now it's a little slapdash, but they're all ideologically linked. On the first one, on the far left, you have set up deployment for the RailsConf 2018 app, the app that we've made, and they offer some out-of-the-box solutions in a list that scrolls down for a while. Those are the first four. We needed Elastic Beanstalk, so we click that, and it takes us to the right one. If you're gonna do continuous deployment, choose automatic. If you're gonna do continuous delivery, choose manual, and you retain some control over that. For this purpose, we'll go with automatic. And then on the bottom right, it just asks what branch you wanna deploy. You pick master; you can use whatever branch, your mileage may vary. Once you go through that, I won't give you my AWS credentials, but I'm gonna call attention to the region: you get the list of regions offered for this account. I would select Oregon in this case, for that little piece that you couldn't see, but it was at the top of the screen, I promise. And it automatically pulls in all the known application names and all the known environment names. So I choose my demo app, I choose my demo RailsConf 2018 application. For S3 buckets, it gives you an option to pick an existing one, or create one if you want to. It's just where it's gonna dump all of your code before it deploys it to the server. Oh, I highlighted all of these and forgot to fast forward. All right, so that's it.
You give your server a name to make it meaningful for easier navigation, because you're a good citizen developer and a good fledgling DevOps person, and it takes you right to this: awaiting your first deploy. My commit message was "because," you click deploy, and you see that your application is now deploying. Going back to your dashboard, you have production Oregon in a state of being deployed, your tests are all still passing, so this should be fine, and eventually your code shows up and you can navigate to it through whatever link you have. We expanded this a little bit for this demo, so now I have four regions. We've added Canada, and, if there's a Canadian national here who knows, I'm pretty sure that one's in or outside of Toronto, but I'm not positive. We've still got Oregon, we've got Sydney in here now, and I've still got the United States Heroku app. Now this is going to be uncomfortable, and I don't know if we'll see all of it, let's see what happens: I put together a video to show all of this in action. So, is it playing? Okay, cool. It doesn't play on my screen, so I'm gonna try to narrate off of this thing. This would be great. So I made a change, and I'm going to commit this now. The dumbest commit, yep, there we go. I forgot what the change was; I just changed the title of the page. So this pushes up to GitHub. My master branch picks it up. I'm playing this at double speed, so suddenly it's gonna jump on me and I'll get really nervous not knowing how to narrate it. So the master branch is building. It only has to pass that one test, which shouldn't take too long. This is a free account; I didn't pay extra money for the purposes of the demo, not thinking I would have to narrate it like this. Give it one more second, there it goes. All right, so that passes, and that kicks off all of these builds at once.
The first two come up; it's automatically now deploying to Canada and Sydney, and those take their own respective minute or two. Semaphore has run the test suite for me. In the case of these AWS builds, it's taking the GitHub repository, zipping it up, sending it off to that S3 bucket, and then unpacking it onto the server. In test runs, I would finish that sentence and this would have been done, but I'm speaking a little fast. There we go, all right, Sydney deployed first, there's the winner. Sydney deploys, Oregon starts building, Canada finishes, Heroku starts building. I have tabs open that you can't see, but I'll click into them. So that one, I don't know which one I clicked into, I can't see it at this stage. That looks like one of them; I think this is Sydney, so we see it's deployed to Sydney. I'm going way too fast, I'm sorry, I can come back to this or pause it. There's Heroku, briefly; I click into the Heroku app so you know the Heroku United States one deployed, there it is. And so now we're just waiting on Oregon. So we saw, I don't know in what order, Canada, Sydney, and the United States; default Heroku is in Northern Virginia, I think, and Oregon's gonna be the last one to cross the finish line. And it's done, I click over to it, yep, there it is. I think that's the end of that video. Yeah, okay. So at that point, that was basically the exact same infrastructure we built out for ourselves. Every time we pushed to master, it would automatically trigger these deploys; it would just go out throughout the globe. It really streamlined a process that we were having panic attacks about. And so we had some findings from this, because this is a case study. Our pros were that this was very effective and very scalable. You saw a lightweight demo; it's even more effective absent the nervous narration. It just works: we push this up, it's done, and we all get to sleep at night easily.
But there was a steep learning curve in getting there. Again, everybody is super awesome, I love all of these products, but AWS Elastic Beanstalk's setup was a bit more complex than Heroku's, and getting all of this to work in harmony was even a little trickier. Once you get up that learning curve, though, it's pretty easy to manage. Managing all of these server configurations themselves could be tricky: you have your environment variables, and you need a more scalable solution for replicating your application harness. And there's that initial loss of functionality, going back to the social features that we had lost. So we were thinking about next steps. And this feels a little weird to talk about after the keynote, but it seems like there could be a case to be made here for decomposition of the application. This is a monolith we were deploying. The vector we were narrowing in on is: what if we took our identifying information, our PII, personally identifiable information, and PHI, protected health information, and built a data service to keep those in those regions, and then sent only user IDs off to a social server, wherever that lives? So as users request friendships, you're just capturing those IDs. You can encrypt those with AES-256. And in theory, again, I'm also not a lawyer, but in theory this would accommodate those rules, because you're not actually sending identifying information out. To mount any kind of backtracking attack on that, you'd have to breach the server with the social data saying user ID 5678 is friends with 85309, then know which regions those users were in, and then breach those databases as well. Ideally, you'd be able to detect when someone's orchestrating something that sophisticated. I mean, ideally; attacks happen all the time that leave you dumbfounded. Then you have to consider, beyond that, the operational costs. This is not cheap.
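The decomposition idea above, where the social service stores only opaque, encrypted user IDs, might look roughly like this. This is a sketch using Ruby's stdlib OpenSSL with AES-256-GCM; the key handling, function names, and IDs are all illustrative, not anything we actually shipped.

```ruby
require "openssl"
require "base64"

# Illustrative only: in practice this key would live in a KMS or
# similar secret store, never in code.
KEY = OpenSSL::Cipher.new("aes-256-gcm").random_key

def encrypt_id(user_id, key = KEY)
  cipher = OpenSSL::Cipher.new("aes-256-gcm").encrypt
  cipher.key = key
  iv = cipher.random_iv # 12-byte IV, generated fresh per record
  ciphertext = cipher.update(user_id.to_s) + cipher.final
  # Pack IV + 16-byte auth tag + ciphertext so the record is self-contained.
  Base64.strict_encode64(iv + cipher.auth_tag + ciphertext)
end

def decrypt_id(token, key = KEY)
  raw = Base64.strict_decode64(token)
  iv, tag, ciphertext = raw[0, 12], raw[12, 16], raw[28..]
  decipher = OpenSSL::Cipher.new("aes-256-gcm").decrypt
  decipher.key = key
  decipher.iv = iv
  decipher.auth_tag = tag # GCM authenticates as well as encrypts
  (decipher.update(ciphertext) + decipher.final).to_i
end

# The social server then records friendships only as opaque tokens,
# never as raw regional user IDs:
friendship = { requester: encrypt_id(5678), friend: encrypt_id(85_309) }
```

One caveat worth noting: with a random IV, the same ID encrypts to a different token every time, which is good for privacy but means a real system would need a separate lookup strategy (a token table, or a keyed hash) to match users up again.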
Yeah, you went from supporting one lowly Heroku server in Northern Virginia to all these servers across the globe. And in those regions, prices vary depending on the remoteness of the region and the cost of electricity there. So you need to build that cost in. And I mean, let's not kid ourselves: anybody on par with a Nike, with that many global offices, probably has deeper pockets, so you can build that price into your contracts. But if you're just operating out of the gate, I would not advise doing this as step one with your startup investment capital. That said, there are some recommendations I can make from this. The first, stated maybe most simply: think very hard about your audience before building something. Granted, we expected our first clients were going to be in the United States. Next thing I know, I found myself flying to Australia and flying to the United Arab Emirates and learning their laws. And it was a bit jarring to think that if I had even just considered a global infrastructure out of the gate, not to say I would have built it, we could have made provisions to accommodate that early on, or at least had a more robust plan of attack. Instead it was just-in-time research that we did for it. Are you storing sensitive data? Know that that data is subject to laws, and those laws are probably not going to change at the same pace as your application, but they are going to change, and you need to be aware of them, aware of how they may affect your compliance, or whether compliance is even a requirement. And at the end of the day, just because it's there doesn't mean you need it. Going back to the beginning of this: we could have had those considerations. We talked initially about building this application like, oh yeah, let's do microservices out of the gate.
We didn't, because we wanted to move quicker, and building a monolith was more native to all of us. Now we know that down the road we probably would have changed that. But yeah, that's all I have for you, everybody. Thank you again. My name is Mike Calhoun. You can find me on Twitter or GitHub or whatever social media I might have signed up for; I usually go by mike011, which always seems to be available. Happy to take any questions. The question is, how did that affect our QA team? If you look at the Life.io website, there is one QA analyst, who's very talented. Her name's Nicole, and she hates me. She just hates me. At some point in time we made an accommodation to say that what was working in our default Heroku production would probably be working across the globe, and this is more or less true. The biggest QA burden in that case is translations: Australian English is a different translation from American English, which is a different translation from Spanish. So yes, it is a tough process: a very robust test suite, and when we have to deploy to five servers, we give her a heads up. That's about it. So, I'm gonna try to restate that. The question is, did we ever have a feature that we wanted to deploy to the United States, or to just any one region, but didn't want available in other regions? And yes, that has happened a couple of times. Some cases have required just creative database tricks, like having feature flags. There's a gem that we've had a lot of success with called Flipper that was really useful for that, and that allowed us to enable and disable features. Our database model is predicated on this notion that you have a parent organization, a parent organization has many companies, and companies have many branches. So a parent organization could have multiple branches throughout the world; maybe they're shoe companies.
And so we have features that we only want shoe companies to see, and we can enable that for just them with Flipper. Yeah. Yeah, I can speak a little bit to that. The question is, could we speak to how we coordinated as a team to determine what the requirements were and how we would be in accordance with them. The way you phrased that question was great, because it implied that our team is a lot bigger than it really is. In most cases, our side of it was myself and one or two engineers. We retain legal counsel, and usually if we're going into a new region, we'll try to find some legal counsel there to make sure we're accommodating them. But then on the other side of the table, enterprise-level clients operating at this scale have their own counsel and their own security checks they wanna verify, so you work very closely with them. You push back where things are unreasonable, and you identify what their requirements are. Australia was great. Our contact there, I'll protect his identity, but he's a CIO named Tim, and he worked with us very carefully on what Australian law was, and didn't have expectations that we went into it already knowing. But it's asking questions when you have them, and making sure every step of the way: here's what we see as appropriate from our side, here's what we know we can implement, here's where we have to reduce scope because it's not going to be in accordance. And then verifying with them: this is what we're seeing, please have your team check it as well. Because with most of these laws, whether or not a breach or anything like that is one person's fault, everybody will take blame. If we had a breach for a major client, sure, we'll get scapegoated, and probably rightfully so, because the fault would be ours, but they will take that heat as well.
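Going back to the feature-flag answer for a second, the per-region, per-company enable/disable pattern can be sketched in a few lines. To be clear, this is a simplified stand-in for the idea, not the Flipper gem's actual API, and the flag and company names are made up.

```ruby
# A minimal stand-in for Flipper-style flags: a feature can be enabled
# globally, or only for specific actors (here, companies/organizations).
class FeatureFlags
  def initialize
    @global = {}                             # feature => on for everyone?
    @actors = Hash.new { |h, k| h[k] = {} }  # feature => { actor_id => true }
  end

  def enable(feature)
    @global[feature] = true
  end

  def enable_actor(feature, actor_id)
    @actors[feature][actor_id] = true
  end

  def enabled?(feature, actor_id = nil)
    return true if @global[feature]
    return false unless actor_id
    @actors[feature].fetch(actor_id, false)
  end
end

flags = FeatureFlags.new
flags.enable_actor(:store_locator, "shoe-co") # only shoe companies see this
flags.enable(:new_dashboard)                  # everyone sees this
```

The nice property, and the reason flags beat region-specific branches, is that the same code ships to every region; only the flag state differs per deployment or per organization.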
So there is this notion that no matter what, you're in this together, and they wanna make sure you're not a vulnerability, and you work to make sure they're satisfied as best you can. The question is, what if the data couldn't be exposed to the internet at all, because it had to live solely on an intranet? Yeah, in that case we would regrettably turn down some money and wish them well. All right, thank you. Thank you.