Okay, so welcome everybody to DevOps Tools for Java Developers. We're very excited to talk to all the folks at KubeCon about how you can level up your development skills by adding a bunch of cloud native, Kubernetes, and deployment skill sets to extend what you're doing with continuously deploying to production environments. My name is Stephen Chin. I run the developer relations group at JFrog. I've been a long-time Java developer and was doing DevOps before we even had a cool name for it — basically, we were just figuring out how to do better automation to production. Ixchel, do you want to go next?

Yes, thank you, Steve. My name is Ixchel Ruiz. I'm from Mexico and live in Switzerland. I'm a software developer — my official title is principal consultant. I'm a Java developer most of the time, but I also do full stack, because you have to be flexible in this world. Melissa?

I'm Melissa McKay, and I come from a developer background as well — about 20 years now. Just recently I decided to become a developer advocate for JFrog, and I'm hoping I'll be able to use all of my development experience to share with you at talks, virtually here and hopefully someday in the future physically — maybe we'll cross paths at a conference. I'm pretty excited to be here today.

Yeah, we're all looking forward to in-person KubeCons coming back. My name is Baruch Sadogursky. I am the chief sticker officer of JFrog — speaking of in-person conferences, I would love to be there with you and give you some awesome JFrog stickers — and also head of developer advocacy. Having been a developer myself for almost 20 years now, I really love speaking to developers about DevOps, and that's exactly what we're going to do now. Working for a company like JFrog, which makes tools for developers, gives you the perspective of both worlds, and we'll do our best to share this joint perspective with you today. Welcome.

All right, before we get started here, we have a short demo that we want to kick off. We're starting with a demo first? Yeah, I know, it's incredible — a little bit different. Anyway, I'm going to go ahead and click a button here so that you can see something start working. So we're not talking about the demo, we're just firing it up? We're just firing up the demo, and then we'll talk about it later on in the talk. All right, here we go. I'm going to trigger it to start running. What you see in front of you is just a simple pipeline. Cool. It looks like we have a 50% chance of this demo working. Yeah, and look, it's always 50%: either it works or it doesn't. But it improves — all the last ones were successful, so I'm optimistic. It's going to be great.

Oh, well, you know what, let's drop a bomb here. Let's ask the big question: why should developers care about all that? As I mentioned, I come from the developer's perspective, and I remember what developers care about: they care about code, right? As a developer, our definition of done is: we wrote some code, we tested it — quality is very important, right? — and then we show it to all the stakeholders.
We show what we wrote to a product manager, to a project manager, obviously to QA, to our team lead, and they check for all the stuff, right? Whether the features are there, the non-functional requirements, the code style — they did the code review — and the QA people checked for quality. Once all of those stakeholders approve and say it's good, this is where we are done. Our work is done, it's Friday, let's go drink. Now the Ops people are going to deploy it. If our code is good, it will be deployed just fine, it will work just fine, and we wish them the best of luck — but it's really not our job to do all the DevOps stuff. Kubernetes, infrastructure as code, servers, networks: this is not our headache. Why should we as developers care about DevOps?

This is exactly the perspective I had as a developer for most of my career. We were pretty siloed when it came to the actual deployment piece — I was never on call for a deployment or anything like that. It's a very different atmosphere today. Most recently in my career I was on a DevOps team, and that really opened my eyes. I did not realize the pains Ops goes through to deploy applications, and once I started learning some of that process, I realized there were actually big decisions I would have made differently about the writing and organization of the app had I known all the pain they were going through. So I think this is a good move in the industry. Having these silos where you just throw code over the wall and let Ops handle it — I think that's just not the best way to do things anymore.

Well, I understand how it helps the Ops people — their life is much easier when we are there with them — but how does it help us?

Let me pitch in from a developer perspective. If you look at this from a very selfish perspective: what we all care about is how much we get paid as developers, right? And if you look at the Stack Overflow survey, which was recently done, full stack engineers and back end engineers and everyone get paid quite well — even amid the current pandemic we're very lucky to have technology jobs. But DevOps engineers and site reliability engineers are actually paid even more than development positions, and I think this reflects the number of languages and technologies involved. Adding cloud native skills, deploying to cloud or hybrid environments, learning all these skills and getting really good at them is actually a way of improving your career and getting paid more — which, you know, is not too bad.

Not too bad at all. What is your take on it? Why should developers care about DevOps?

Well, I agree with all of you, actually. The first and most interesting reason for me is, as Steve mentioned, that this set of practices — DevOps and Agile — is here to stay. So if you're out there looking for a new job, a better paid job, these are the skills and concepts you have to know and use. Another reason is that there are good practices in there — practices that will make your life easier. When this whole concept started, probably the best book describing the ideas behind it was The Phoenix Project. If you have started with this topic in a more serious way, you have probably been recommended this book, and I do recommend it. First of all, it's a novel, so it's entertaining.
Second of all, it's going to give you a perspective from management. When this book was written, in the early 2010s, all companies were realizing that they were technology companies, regardless of the product or services they provide. And Steve mentioned another important thing: right now, with COVID, with the pandemic, it is important that we react fast, we adapt fast, we deliver fast. So this idea of going really, really fast while also providing quality — making the user happy, providing new functionality — is really important. In this book they describe exactly what Melissa said: the miscommunication, the impedance mismatch between different silos inside the company, was preventing this particular company from delivering the software they needed to deliver on time — costs were exceeded and expectations were sometimes not met. It's a really good book because for us developers it provides a glimpse of what the management concerns were. And when you start learning the language of management, you can also talk back. As a developer I might have my list of metrics — see, we reduced our whatever by 20 percent — and then I go to management or stakeholders and they're like, but what does that mean? Well, now you have the language of management, you know the concerns, and you can translate whatever you are improving into their language.

There's another book, and I like to call it the companion of The Phoenix Project: The Unicorn Project. It's the same story, again a novel, but the main character is a female developer, and it describes the working environment and all the things that can go wrong in a software development team. It's also interesting because it provides a nice depiction — sometimes not so nice — but you also see that the principles, the concepts, the practices DevOps is proposing will help you in your day-to-day life. So you're going to be less annoyed and hopefully a happy developer.

That's a great review of those two books, and I would like to suggest, if you wish, an alternative relationship between them. It's interesting in terms of timing — The Unicorn Project was released years after The Phoenix Project — and it's interesting to see how the views on DevOps progressed during that time. If we look at The Phoenix Project, we see how the story of DevOps is actually a story of Ops people, of IT people, winning with DevOps. The main protagonist of the story, Bill Palmer, is a VP of IT Operations — a completely Ops role — and his DevOps invention is the solution to the exact problem we spoke about earlier: the developers are done, they're out of the building, now go ahead and deal with it, and good luck. This is a view of how DevOps came to be solving the problem of IT Ops people. Years later, we see a completely different picture. The Unicorn Project is an alternative timeline, if you wish: the same company, the same problem, now solved from a completely different perspective. Maxine, the lead developer, as you mentioned, Ixchel, solves her problems — a very bureaucratic, rigid, and slow organization, for her developer needs — again by adopting DevOps. So it's the same win, the same company, the same problems.
It's the same solution at the end of the day, but it came to be from a developer perspective, and I think this is the ultimate answer to the question I asked on the previous slide: why should developers care about DevOps? Yes, they didn't have to care in the beginning — it was Ops people solving their own problems — but that is not true anymore. Nowadays the problems of developers — the autonomy, the mastery, the purpose of developers — can also be solved and elevated through DevOps.

Yes, we have to keep in mind the big picture of the software development cycle. Of course we are one part — I will argue the most important part, but that's me — but we're a part of this whole process, and even if we speak different languages and express our concerns in different ways, we still have the main things we have to follow, and this is something DevOps and Agile practices are very focused on: for example speed, quality, security, having enough information, early feedback, early failure. Whatever you need to go at the speed you need to go — and it all depends: what metrics do you use, what are the benchmarks, what are your requirements, what are your service level requirements? That is the information you need to pay attention to in any part of the software development cycle. So these are the commonalities between developers, Ops, and management. It's important, and it's very useful at the end of the day.

Now let's talk about the development cycle, because the development cycle has changed in the last years. You remember how it has always been a three-step diagram — because everything in IT is a three-step diagram — but those three steps used to be different. The three steps we were used to were: write code, build code, and then deploy code. Today we still have three steps, but they are different steps, and this is what you see on the screen. The sourcing part — finding the building blocks you are going to integrate into your application — is not a new concept, but it has matured into a very important part of our development life cycle. Why is that? Because nowadays 80 to 90 percent of our application is someone else's code: the open source frameworks and open source libraries we use. Most of our code is just glue around them, integration code between those different libraries. So the process of sourcing — and by sourcing I mean finding, validating, physically bringing the code to where we can use it, and then caching it so it stays forever and won't disappear under our fingers — is critical, because 80 to 90 percent of our application will be this code. Sourcing is a very big deal.

How do we find the libraries we want to use? How do we know those libraries are good — good in terms of they do what we need, good in terms of they're easy to use (user experience), good in terms of they don't have bugs or security vulnerabilities, and good in terms of we can actually go ahead and use them? How can we validate all those things? How do we know if the user experience is good? We need to ask our peers, and here is where the problem of "we never considered it this way" comes and bites us: if you look at Maven Central as the primary source for your dependencies in the Java world, you have no metadata that can help you make this decision.
It's a file server from which you can grab the artifacts and use them. You don't know anything about them. You might read some blogs, you might see some conference talks, and then you might decide, well, I heard that, I don't know, JUnit is good — let's use JUnit — and then you go to Maven Central and find JUnit. But the aspect of ratings, of popularity, of user experience is just not there. We need to do our own research, and this research can be long, can be time-consuming, and can even be wrong — not everything that has hype around it is necessarily good for you. So what I'm saying is: you need to invest your time, you need to do your research, you need to be able to smartly pick what your application is going to build on. This is where we talk about what's good for you in terms of usability and maybe even quality.

Now, the second problem is: how do you validate that dependencies don't have security vulnerabilities? This is where tools like JFrog Xray, Aqua, Snyk, WhiteSource, Black Duck, and tons of others can help you validate those dependencies. Some of them are more rigid — they won't even let a dependency get to your build system or your artifact repository if they think the library has a vulnerability; some are more relaxed — they will let you play with it and then fail the build. But at the end of the day, automating this security concern is very important.

Another concern is license compliance: is this library even allowed to be used in your organization? Maybe it has a viral open source license, which means that if you use it, then legally you have to open source all your development. Sometimes that's acceptable, but most of the time it's not, because chances are you are working for a for-profit organization that wants to make money from the software, which means it has to be not free and not open source — that's the default use case, if you wish. So all those questions are there in sourcing, and doing your research for what you cannot automate, and automating what you can, is critical.
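To make the "bring it and cache it" part of sourcing concrete: with Maven, one common way to do this is to route all dependency resolution through a repository manager that proxies and caches Maven Central. This is only a minimal sketch — the host name and repository name are hypothetical, and the same idea works with any repository manager:

```xml
<!-- ~/.m2/settings.xml -->
<settings>
  <mirrors>
    <mirror>
      <id>internal-mirror</id>
      <name>Company repository manager (proxies and caches Maven Central)</name>
      <!-- hypothetical URL; point this at your own repository manager -->
      <url>https://repo.example.com/artifactory/maven-virtual</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```

Once every build resolves through that one URL, dependencies are cached on your side — so they can't disappear from under you — and your security and license scanning tools get a single choke point to watch.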
Now, the second step is the most familiar one, if you wish. This is what we've done for years: we write code, we build, we test, we validate our code. This is what we know how to do. We use our build tools — Maven, Gradle, Bazel, whatever you use — we compile the code, we get JARs, we start promoting them through quality gates in our pipelines (we're going to talk about that in a second), and then you're done, right? You have your artifacts ready to be distributed, ready to get to your users.

And then the third part of your three-step diagram kicks in: distribute. What does it mean? Distributing is bringing it to the runtime. What is the runtime? Everything, right? Back in the day of pure Java development, distribution meant you built a WAR file and put it in your Tomcat or another servlet container, or you built your application archive and put it in, what was it, WebSphere — God forbid — or WebLogic, or whatever. Those scenarios are still out there, but there are other scenarios as well. Obviously, here at KubeCon, we are going to talk about — and show you — how you distribute to your Kubernetes cluster, and this is the cloud native use case. But there are others too: how about edge computing, how about IoT, how about fog computing, or even other people's computers? You now have a distributable application — a Minecraft that people download and play. That distribution scenario is just as valid as the others. You need to think about that, and then you need to pick tools that are right for distributing whatever you want to distribute. In our case it will be distributing to your Kubernetes cluster, and we are going to talk about that.

Yes, I just wanted to mention — you're totally right, and that's one of the things DevOps and Agile practices address: our codebases have exploded, and our deliveries have exploded too, because we have a more fragmented market. It's very typical for me to go to a client and suddenly be asked to deliver the software in different flavors, with different requirements. So now you literally have a combination matrix of things you have to prepare, configure, test, and promote in different ways. These practices are really going to help us do a better job, so they're important, of course.

Okay, and I'm going to dig in a little bit more on one of these practices, which is how you manage the code you're developing. Looking at the history of software development, there have been a bunch of different approaches to this problem. Just a quick poll — of course we'd love to hear from our audience as well as the other speakers here — what was your first version control system? Ixchel, do you want to go first? I'm very lucky, I started with Mercurial, so yay. Okay, that's a very modern version control system. I'm obviously older — maybe not as old as you, but pretty old — I started with CVS and ClearCase. Or, should I put it in terms of your progression from apes to people: first ClearCase, and then CVS. Yeah, ClearCase would be the ape here. Melissa, how about you? Mercurial and then SVN, primarily, is what I started with.

Okay, so Mercurial and Git, which I think a lot of people are using today, are actually considered the most modern version control systems. But if you go back in history, what we started with were systems like SCCS and RCS — ClearCase falls into this category too; commercial systems tend to lag behind open source solutions — and they were locking version control systems. You would actually lock the files: you'd say, I'm working on this file, nobody else can touch it. You would lock the file, make your changes, and then release the file when you were done. And of course this is perfect, because you don't have any conflicts. But this doesn't work for large teams: if you had a refactoring and changed something across the entire code base — a package or something that is in every file — you would lock the entire code base and nobody else would get any work done. CVS and Subversion fixed this with optimistic locking, which essentially means you let everybody change files whenever they want, and if it turns out two people changed the same file, you go back and resolve a merge conflict — you try to merge, and hopefully the changes can be resolved automatically; sometimes they can't, and you have to resolve the merge conflict manually. And this takes us to the third generation of version control systems.
With CVS and Subversion you are always syncing with a server; Git and Mercurial introduced the distributed concept, where you're syncing with a local repository — you have your own version of the entire repository locally. Kind of ironically, this takes us back to the early version control systems: RCS and SCCS were initially designed to be used on mainframes, so they also technically considered the repository to be local. What they were missing — and what server-based version control systems are missing — is the capability to sync your repository with other repositories. Distributed version control systems let you take your repository and do pull and push requests against other repos, either peer repos or a central repo, and they give you a lot more control, flexibility, and scalability in your version control.

If we look at the current trends, Git has taken over the industry and become the de facto standard. It's very fast, it's very efficient, it does a great job of merging and resolving conflicts, and it has basically pushed other systems — CVS, Mercurial, and even Subversion — to very low usage in the industry. Some folks are still using Subversion for commercial purposes, but I think most of us have moved on to Git and accepted that distributed version control systems are the path forward.

When you look at how distributed version control systems work: you have your own working copy of the code, which you're committing and updating against a local repository — all of that exists on your machine — and when you're ready to push your code, you push it to the remote repository, syncing your code with a central repository that other people pull from and push to. A lot of modern workflows like GitHub's further introduced the concept of pull requests, where you ask people to accept your change and they pull from your repository to integrate the changes you've made. This actually makes merging and code control safer, because you don't have random people pushing changes that break the build — and then we all get, well, this is my favorite analogy, a donut day, when the build breaks. There are various other ways of shaming people as well, but I think rewards are better than shaming, so donuts are a nice reward. Distributed version control systems make this much easier workflow-wise and help you avoid having your entire build impacted.

I think the de facto standard for distributed version control has become systems like Git, and specifically cloud systems like GitHub. In the Stack Overflow survey, GitHub was the most used tool by developers across the board — can you flip to the next slide, Melissa? Of course we are also using Slack and other tools, but GitHub is used by over 80% of developers, and then there's a bunch of other tools going down the list. When you're looking at how to manage your source control, work with large teams, and efficiently create the source code pipeline to deploy to production, it makes sense to start with a solid foundation: a distributed version control system that underlies your entire DevOps pipeline.
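For anyone who hasn't internalized the distributed workflow yet, here is what that local-first cycle looks like in plain Git commands — a minimal sketch with a hypothetical repository and branch name:

```bash
# cloning gives you the ENTIRE repository history locally
git clone https://github.com/example-org/example-app.git
cd example-app

# work on an isolated branch; commits land in YOUR local repository only
git checkout -b fix-build-caching
git commit -am "Stop wiping the build cache on every change"

# sync with the central repository only when you're ready
git push origin fix-build-caching
# ...then open a pull request, so the change is reviewed before it merges
```

Everything up to the push happens offline against your own copy of the repository, which is exactly the property that locking systems — and even CVS and Subversion — never had.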
And now, after we've spoken about sources, let's see how we move past them. As I mentioned previously, this part we know really well: we take sources from our version control, we build them, and we convert them into binaries. Now, what are those binaries? It depends on your stack, on the programming language you use. With Java we obviously compile our Java sources to class files and pack them into JAR archives, and then, alternatively or on top of that, the WAR files, the EAR archives, whatever. Other languages might compile to different files, or not compile at all, but at the end of the day we pack those files into some kind of archive — an archive of our JavaScript files, or of our Python files. The CI server takes sources and emits binaries, and once it emits binaries, those binaries go to a binary repository — an artifact repository. Obviously, the majority of us here work for JFrog, so we would recommend JFrog's artifact repository, which you've probably heard of — Artifactory — but there are others that are definitely worth checking out. For the matter of our story it doesn't really matter which; what matters is that you have your artifact repository, which is where your pipeline actually happens.

And this is where the pipeline kicks in. What the pipeline actually means is that you take the artifacts in your artifact repository and deploy them to the right environment for a certain maturity level. For example, in step number one you deploy your integration-level artifacts to your integration cluster: you pack them in a Docker container, also in the CI server, and deploy the container to your integration Kubernetes environment. This is where you do your integration testing, and if the quality requirements are hit — if you are satisfied with the quality of your artifacts — you promote them through quality gates to another level, another staging area. What does it mean to promote through quality gates? In the most simplistic way, you just move them from one repository in your artifact repository manager to the next: you'll have repositories for integration, for system testing, for staging, for production, and your movement through quality gates will be the promotion of those artifacts from one repository to the next. Now that we've promoted from the integration repository to system testing, we again deploy those artifacts — to a system testing Kubernetes cluster, in our example — and we start running other tests: system tests, but also security, performance, anything you care about that you can test automatically. Again, if the tests fail, you just discard those artifacts; but if the tests are successful and the quality requirements are met, you promote further, to staging and, at the end of the day, to production. So our pipeline is: you build the artifacts — you build JAR files from Java and then a Docker image from those Java archives — and once you have the image, once you have the artifact, you go ahead and promote, promote, promote through the pipeline.
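To make promotion tangible: conceptually it is the same image moving between repositories, never a rebuild. With plain Docker commands against a hypothetical registry it would look like the sketch below (in a repository manager the promotion is typically a server-side move with no pull and push at all, but the effect is the same):

```bash
# the image was built exactly once, as build 42, in the integration repo
docker pull registry.example.com/docker-integration/app:42

# promotion = the SAME bytes under a more mature repository, same tag
docker tag  registry.example.com/docker-integration/app:42 \
            registry.example.com/docker-staging/app:42
docker push registry.example.com/docker-staging/app:42
```

The important part is what's missing: there is no `docker build`. What you tested is bit-for-bit what moves forward.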
Just to reiterate some of the points Baruch has made about sources, I want to drill down into what actually happens at the very beginning, which is the building of everything. There may not be many times in your career when you're involved in a greenfield project, where you can start from scratch and you know where everything is coming from, how everything is built, and how everything is put together. In fact, most of my career has personally been spent on already established software teams, where my responsibilities were around maintenance of an existing product or service: there was an existing code base, and I might be responsible for fixing bugs, that kind of stuff. Given that, it's super important, when you're in that situation and you've just joined a team, that part of your ramping-up process is to know your build. Get to know your build, get to know the ins and outs of it. You'll be able to work better and more efficiently if you know where everything is coming from and how everything fits together.

One specific story I have for that: early in my career I joined a software team on a project that was pretty large — there were multiple Java modules all over the place. At the time I was using Eclipse as my IDE, I had everything pulled in, and I was really excited to work on this. I was pointed to a particular module and a particular bug that I needed to fix. So I got the code up, and every time I made a change I would turn around and run a Maven clean build. Well, after a while of doing that — it's a very inefficient way to work, because I didn't understand what was going on behind the scenes. At some point one of my colleagues came over, watched over my shoulder to answer some questions, noticed what I was doing, and figured out immediately that it was because I had not taken that first step to really understand how this project was put together. I didn't understand how my sources were coming in, how they were being cached, and how every time I ran a Maven clean I was actually blowing away my cache and spending a lot of time repeating work that didn't need to be done. Come to find out, once I learned how everything was put together, I could focus on the module I was working on, build only that module over and over again as I saw fit, and save about 20 minutes every time I built. So this is important to know and understand. Even if you're an Ops person, you need to know this too, because when building your pipelines out you need to understand how long your builds are going to take. If you rebuild pieces every time, over and over, through your software cycle, through the pipeline, it's going to take forever. There are a lot of efficiency improvements that can be made there.
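As a concrete illustration of the lesson in that story — assuming a standard Maven multi-module build, with a hypothetical module name — you can build just the module you're touching instead of cleaning and rebuilding the whole tree:

```bash
# -pl: build only this module; -am: also build the in-project
# modules it depends on, if they changed
mvn -pl order-service -am install

# `mvn clean` deletes target/, i.e. it throws away all cached work --
# reach for it only when you genuinely need a from-scratch build
mvn clean install
```

Knowing which of these two commands a change actually requires was, in that story, worth about 20 minutes per build.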
The other part of this is: where are your sources coming from? We're in this new world where containers are really popular — this is the thing to do. You write a Dockerfile, you pull in a base image, you put everything together, and now you have this image that you can send out there and provide as a service for everyone. Well, there are some specific things to look for. These base images are likely coming from the default Docker Hub, unless you are specifying a particular registry — and I would recommend getting a private registry, because you don't know how often these images are going to go away. Your base images may disappear. I was on a project once where we were all relatively new to Docker; just getting it built and up and running was awesome, and we started with the Docker documentation, so having something to start with was great. However, as time went on, when our builds started breaking, I came to realize that our base image was coming from a repository managed by a contractor who was no longer employed by the company. Well, eventually that quit working — eventually that base image disappeared. They were no longer responsible for it and had probably assumed we had already moved it into a safe place that we managed. Obviously that didn't happen, so there was some communication that had to be done to retrieve that base image so our builds would stop breaking. That's one thing to look at.

Another is that if you haven't paid attention to what's going on inside your Dockerfile, you could be pulling in external resources you didn't realize. One example: I found a place where we were actually pulling in an external script, and that script was being launched during the build — not recommended. At the very least we needed to retrieve that script from wherever it came from and store it in a local repository, so that we always had access to it and it wasn't constantly being pulled over the internet. The other issue was knowing what that script was actually doing: there could be security vulnerabilities being added that we weren't aware of; there could be licensing issues, if third-party installations were happening during our builds involving products without a license that is good for your company. So yes, make sure you understand your build — that is the whole point of this slide.

Also, note the bananas. I refuse to use shipping containers to represent containers anymore in any talks or slides, and I challenge anyone in the future to start moving away from shipping containers. So there can be two options here: we can start a movement of using bananas for everything container-related, or we can open a competition for whoever comes up with the next best thing after shipping containers for talking about containers. I vote for bananas.

I like it. Well, I like to work a lot with containers, because when we first started with them in our projects, the first time we used them it was for development purposes. For example, in the product I was working on, we needed to support several databases, and instead of installing Postgres and MySQL on my local machine — and later on polluting my system folders — we now had the opportunity to create or download the image. And if we needed to create a specific Postgres version with our own configuration file, even better. That's why I totally support Melissa's idea of having your own repository at the company level: because you can do things like this.
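For example, spinning up a disposable, version-pinned database for local development is a one-liner — a sketch, with a placeholder password:

```bash
# a throwaway Postgres at an exact version: no global install,
# no polluted system folders, and you control the configuration
docker run -d --name dev-postgres \
  -e POSTGRES_PASSWORD=devonly \
  -p 5432:5432 \
  postgres:13

# done experimenting? remove it and nothing lingers on your machine
docker rm -f dev-postgres
```

Pull that `postgres:13` through your own company registry, per the earlier advice, and your development database can't vanish out from under you either.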
Later on, when you're using containers for integration tests, you may want to customize them, because integration tests usually take a long time — they are expensive to bootstrap, depending on how many services you have to test. Remember, an integration test is testing two components; you don't have to test your entire application. You can of course do end-to-end tests, but with integration tests you can have as many different configurations between components as you like, and sometimes, because of this, you want to create different customized versions to make testing easier and faster.

When I started with containers and heard "package once, deploy anywhere" — I know the circumstances are different, but I remember thinking, oh, this is so like the Java promise, write once, run anywhere. That's how it was sold. What was not to like?

Yeah, but I think the Java promise of write once, run anywhere didn't quite deliver, and it didn't deliver because of dependencies — because our application cannot really be isolated from everything around it. This is why, even in Java, the "works on my machine" joke is actually a reality. Even with Java, it might very well be that something that runs on my machine breaks in any other environment, obviously including production. And I think containers are really the next step in minimizing the impact of external dependencies. It's almost like someone said "works on my machine", and we said, you know what, fine — we will just ship your machine to production. That's exactly what containers are: we take even more of those dependencies, pack them together with our application, and this is how we ship. So think about "package once, deploy anywhere", with "build once, run anywhere" being a part of it.

The other thing that attracted me was a change in how we thought. Martin Fowler has this snowflake server versus phoenix server idea, and it's the same metaphor as cattle versus pets. In the past we had servers that nobody knew how they came to be, nobody knew how they worked, and nobody wanted to touch, because you cannot reproduce them. They were so dear that even if they were not working correctly, nobody attempted to change them. Now we have to think in a different way: everything can die at any point — we even want to kill things at some point. So you have to change how you prepare things, how you package things, how you make them run as they should. Reproducibility starts being a thing — a really big thing. So yeah: containers, thumbs up.

All right. In this new world of containers, choosing your tools in the context of delivering your application can be pretty overwhelming, and there's a lot of stuff out there that you have to learn as a developer to make this process efficient and get your stuff into production. Something I really recommend: make sure that you, as a developer and as a team, understand what you need and what your goals actually are — not just service-oriented goals but business goals as well. This means involving representatives from other parts of your company to come in and help determine what is most important for you. The first thing you need is the list — the list of priorities. Is it fast, is it efficient, is it reliable, should it be more economical? These are just examples of goals, but make sure you prioritize them so that across the company you're all on the same page, working toward the same goal. You won't have pockets of the company wondering why on earth the developers are doing this when they should be doing that. These are important things to know up front.

This picture is of my car — this is my actual car, sitting in my garage right now — and it really is a good example of what I'm talking about, especially for legacy systems. This is a legacy car: it's about 18 years old, and in Colorado the tires on this car are likely worth more than I could get selling it. So when I got into a little accident, goodness, more than a year ago now, and got a little dent in the side of the car, it was definitely not worth it to me to take it to a body shop and get that fixed. Instead I went to Amazon, got a Band-Aid magnet, and put it over the dent. Sometimes that might be the right thing to do for some of your legacy applications too, just because that particular thing is not a priority for your business at the moment.
So, this was something introduced to me recently: the value stream — your IT value stream, in this context applied to your entire software development pipeline. This is interesting to me because we place so many of these metrics and values on the final product, the one running in production, but what you also need to consider is every step you took to get there. It's really valuable to come together as a team — not just developers, but with representatives from other departments as well — and decide which steps you're taking. That's your first thing: whiteboard what you're actually doing to get your software into production. It doesn't have to be perfect, and it likely won't be, but you need documentation of the actual steps that take place. You don't want to get into a situation where only one person knows step three of the process, and they're sick on the day you're deploying a fix to production and nobody knows what to do. Make sure you have that documentation — and I don't care if it's a manual step, write it down. Once you have this documentation in front of you, you can start going through each step and find out which ones are the most costly for you right now as a team, which steps seem to be causing the most problems, and which steps you can improve. One idea — also something introduced to me recently — is the idea of one percent improvement: pick one thing, one thing in your development cycle, and improve that one thing.

This is a good transition — it really brings back some memories for me: how do you deploy? For me, my first DevOps team was my first experience realizing what Ops goes through and actually deploying an application, and it was amazing for me. It was a smaller team; Ops and Dev were thrown together, and we had very limited training at the time, other than learning the word DevOps and learning that's what we were going to do now. The developers on the team started learning the Ops process and found a lot of complexities in the deployment that we had no idea about. One example is just opening the right ports — making sure the right ports are open, especially now that you're using containers; make sure you're exposing the right things. It was a big deal if you got that wrong. There was an entire other team that dealt with infrastructure, and if you needed to get traffic through a particular load balancer, you needed to know that in advance. Deploying a piece of software and then fiddling with that afterwards was unacceptable, and unfortunately, in that particular case, it might take a week to get the change in place and ready. So again, communication between teams and understanding the pains other teams are going through is super important for a developer — you can't be flippant about that.

On this particular project there were a lot of little shell scripts running around on people's machines, in various folders on a shared drive, and these little scripts were patched together to actually deploy this service to various locations, whether a staging environment or a production environment.
Now, you always hear "deploy fast, deploy often" — this was not a service that followed those rules. A deployment was a big deal, and we did not deploy very often. And it's a good thing, because for every one of those little scripts, you had to find it and fix the little hard-coded details that might have changed between deployments. Oh, and you had to make sure the script worked for staging and for production, make sure all the URLs were correct, and blah blah blah. We had a situation where, accidentally, the URL for a test AMQ server was put into a production environment — you can imagine. So we experimented with something; we didn't know it at the time, but we were creating our infrastructure as code. We took our deployment scripts — all of those little scripts — got them together, put them into source control, and started versioning them for each and every deployment. Now, I don't advise going in and making hard-coded changes every time, but that's what we started with, and it was a lot better for us: get all of that stuff into source control quickly, version it, and then be able to apply a variable based on whether it was a production environment or a staging environment, that kind of thing. Then we were able to deploy much faster, and we avoided those silly mistakes of putting the wrong URL in the wrong place. The other advantage of doing that is that now you have the option and capability of going back to a previous version if you need to, for whatever reason — and it's going to happen; nothing is ever perfect in life. At some point you're going to need to roll back a release. As we get better and better at this, rollbacks may not be as common as rolling forward, but for a while you still need that safety measure in place, until you get a lot of these practices really down on your team.
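A minimal sketch of where that team ended up — a versioned deploy script where the environment differences live in per-environment config files (all names here are hypothetical), instead of hard-coded URLs scattered through shell scripts:

```bash
#!/usr/bin/env bash
# deploy.sh -- lives in source control and is versioned with each release.
set -euo pipefail

ENV="${1:?usage: deploy.sh <staging|production>}"

# one file per environment defines APP_HOST, AMQ_URL, APP_VERSION, ...
# so the test AMQ URL can never leak into production by hand-editing
source "config/${ENV}.env"

echo "Deploying ${APP_VERSION} to ${ENV} (${APP_HOST}); AMQ at ${AMQ_URL}"
scp "build/app-${APP_VERSION}.jar" "deploy@${APP_HOST}:/opt/app/"
ssh "deploy@${APP_HOST}" "systemctl restart app"
```

Because the script and the config files are versioned together, rolling back is checking out the previous tag and running the same command.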
Great — yes, I totally agree with you, Melissa. You have to externalize as much of the knowledge as possible and reduce the magic, because life happens, and sometimes the person who knows the magic is on vacation, and then it's chaos — you have a bottleneck, a dependency that's not right or not needed. Checklists: make sure you have your checklists all over the place and people know where to find them.

Another point: I've been in situations where we didn't have the appropriate environments to work with to begin with. You might be able to build on your local machine; you might have external resources to deploy something to, just for testing; maybe there was a staging environment; certainly there was production. But don't skimp on your environments. This is probably the worst thing you can do to your development team. Developers need a place to test things, try things out, experiment — and for the benefit of everyone involved, you need a staging environment. You must have a staging environment. Generally this is an environment that matches production in every way possible, and it's the first environment you deploy a new version to when you're ready in your cycle. Very important to have. If you want to test on production, good luck to you — you're likely going to have a pretty miserable user experience doing that, and a lot of downtime, as you can imagine. Also a test environment, so you can use the automation you've built to deploy to various environments, including a testing environment.

Ixchel mentioned before that integration testing can be pretty expensive — expensive in time, specifically, is what I was thinking. Something we would do: for our unit tests that interacted with the database, we would just use an in-memory database, because it was faster. But the problem is that the in-memory database is not the same — it did not have the exact same syntax as what was used in production. And it's pretty ridiculous to have all of these tests that pass and run, and then put your service into production and everything breaks because of a silly SQL mismatch or syntax difference. So use Testcontainers. Containers are everywhere, containers are good for you, and you can use them for your integration testing. Testcontainers has a lot of available options: you can grab the version of the database you're actually using, deploy it in an integration environment, and use it for your tests. You will catch things like API problems, syntax problems, any differences. Make sure your integration environment actually matches your production environment, and just avoid that mess. Ixchel likes Testcontainers too and has a lot to say about them and why we like them, so I'm going to hand it over to her.

Well, full disclosure: I really like integration tests. One thing Martin Fowler once said is that in an integration test you don't have to test the whole system — sometimes an integration test is testing the communication, or the coupling, of two modules. So you can break down your integration tests as you see fit, and if you increase the richness and the different configurations of your tests, it's going to be even better for you. I love this project because, as I said, I started with containers really early on. We were using containers for development — for example the databases: I needed Postgres and MySQL because the product had to run with both — so you have your containers like that and you're happy, etc. When you start doing integration tests, you have to bootstrap all these resources and run them. It turns out that at the time I started doing that — I'm a Gradle girl most of the time; I also work with Maven — we had various plugins for Gradle, and they were fantastic, they let you bootstrap and automate everything, but the functionality they provided was very limited. For example, you couldn't have randomized ports, so you couldn't run your integration tests concurrently, which is a bad thing. There were so many limitations at that time. Then I discovered Testcontainers, and I saw what they allow you to do and how powerful and flexible they are: they already have pre-created templates for databases, even with their own wait strategies, targeted to a specific type of container — databases or other tools. They also have a kind of watchdog called Ryuk that makes sure your containers are well behaved — meaning that they start up, they run, and then they are killed, even if your tests are failing. Because back in those years, I would usually get the call from the IT guys: "Ixchel, you crashed the machine — you have like 1,000 zombie Docker containers!" And I was like, what?
And it was because sometimes the tests failed and we didn't dispose of the resources correctly. There were some workarounds — like killing everything on the machine before running the tests — but that's not okay. So if you need to work with integration tests, either the simple ones or the very, very complex ones, I totally recommend you have a look at this project. It's going to be worth your while, it's easy to set up, and as I said, it's flexible: there are already predefined types of containers, and you have the generic one as well. Totally recommended.
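Here's roughly what that looks like in practice with JUnit 5 — a minimal sketch using the Testcontainers Java API (the test class and its contents are invented for illustration):

```java
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class OrderRepositoryIT {

    // the same Postgres version as production, on a randomized port;
    // started before the tests and reaped afterwards by Ryuk, even if
    // the JVM dies mid-run -- no more zombie containers
    @Container
    static PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>("postgres:13");

    @Test
    void queriesRunAgainstRealPostgresSyntax() {
        String jdbcUrl = postgres.getJdbcUrl();  // reflects the mapped random port
        String user    = postgres.getUsername();
        String pass    = postgres.getPassword();
        // ...run the exact SQL your service runs in production,
        // instead of an in-memory lookalike with different syntax
    }
}
```

The randomized port is what makes concurrent test runs possible, and Ryuk is the watchdog mentioned above doing the cleanup.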
All right — we're at KubeCon, and I assume there are a lot of developers in this particular audience, since we're speaking specifically to Java developers, although this really applies to any developer: what about Kubernetes? In my personal experience as a developer on a development team, before I was on a DevOps team, I had little exposure to the deployment itself. My life was all about my IDE: fix things, check things in, done, go home for the night and sleep well. Now that I'm on a DevOps team — no, I'm just kidding — now it's definitely more of a concern for me what exactly is going on in the deployment world. And along with the automation, now that you have all of these containers, the bigger your project is, the more difficult it becomes to manage all of that. Kubernetes is an awesome orchestration tool. You may not get into the nitty-gritty of cluster maintenance as a developer — it's possible you have an infrastructure team that deals with upgrading clusters, that kind of stuff — but it's important to understand how it works. There are also managed solutions, so once you understand how it works, and you know what a big lift managing Kubernetes clusters is, you might want to consider the managed solutions available out there. All of the big cloud providers have them — Azure, Google, AWS — and they're making them easier and easier to use day by day.

If you are curious and want to know more — which you should, as a developer — there are a ton of good resources out there now, lots of articles and blogs being written about Kubernetes. This one is my absolute favorite: it's provided by Azure, called "50 days from zero to hero with Kubernetes". It sounds overwhelming — it takes 50 days to learn Kubernetes, right? — but each of these little points is a single blog or a short video; it's not going to take you all day or all week for each step. Day one is my absolute favorite in this process: I actually picked up a hard copy of these children's books. It's all about Phippy, a PHP app learning how to live in Kubernetes land, written as a children's book to teach you the basics of Kubernetes. There are cute metaphors in there — some of them might be a little forced, but that's okay — and there's always a page that is more technical and describes exactly what they're talking about. And if there was ever any question about how to pronounce kubectl, here it is, definitively, on page 12 of this children's book: "kube cuddle".

Another way to learn: Kelsey Hightower has a really good GitHub repository called Kubernetes the Hard Way. This really gets down into the details of Kubernetes and forces you to start at the beginning, so that you learn how everything is actually put together and how everything works. Highly recommended. Directly from the README: Kubernetes the Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster. So definitely go there to learn.

Once you've learned, these are some of the things you'll need to consider when you're building your app — or modifying an existing app — to behave well in this container-and-cluster environment. This comes directly from a white paper available at JFrog; I have the link here. These are some of the tasks to consider, some of the questions to ask. For example: how many times have you witnessed a server falling over because the log directory filled up? It's happened to me. Yeah, it's happened to everybody. It's a symptom of wanting to get something out there and working — we have this tendency to focus on best-case scenarios and happy paths. Looking through these questions as a team gives you an idea of what can potentially happen in a production environment, things you should be concerned about and optimize for.

I like this one: data persistency. Don't persist everything — persist what you need. Your container can die and go away, so persist what you need, of course, but don't go overboard, or you kind of defeat the purpose of having a container.

Termination signals — this is one I personally had a problem with. It was something I added to one of our services, because there really was no graceful way to shut it down. There was no endpoint; it was just "kill it and be done". Well, if you killed it while it was actively being used in production, the only thing for your user to do was to try their process again.

And it comes from the pet concept, right? That you shouldn't kill your application — instead you try to fix it and keep it alive as long as possible. One of the big changes is exactly that: you now need to be ready for your application to be killed, and it should go down very fast — which is why you need to handle termination signals.

Exactly, and gracefully. Some of our processes ran for lengthy amounts of time, so forcing a customer to redo their stuff was pretty painful. And there are many reasons why you'd want to kill things: you may need to roll out an upgrade, or do maintenance of some kind. Consider how you've deployed your cluster — whether it's possible to kill only part of it, only some pods, update those, and then reroute traffic. That is much better than killing everything and being dead in the water. Keep stuff alive, like what Baruch said.

Yeah, this is an interesting idea that I love about containers: instead of fearing it, you start thinking, it's going to happen — sometimes I'm even going to cause it. So I need to think about the implications and the consequences. Start thinking like that. Exactly — and it is a different way of thinking.
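In Java terms, the minimum version of "handle your termination signal" is a shutdown hook: Kubernetes sends SIGTERM before killing a pod — by default with a roughly 30-second grace period before SIGKILL — and the JVM turns SIGTERM into shutdown hooks. A sketch:

```java
public class GracefulService {
    public static void main(String[] args) throws InterruptedException {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            // SIGTERM received: stop accepting new work, let in-flight
            // requests finish, flush buffers, close connections
            System.out.println("Shutting down gracefully...");
        }));

        // ...normal application work here
        Thread.currentThread().join(); // keep the demo process alive
    }
}
```

A real service would pair this with readiness probes so traffic is routed away before the process exits, but the shutdown hook is the part developers most often forget.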
It's exciting when you get something up and running and you just want to push it out there, but you really need to think through the scenarios where it doesn't work. We're trained to do this in our code — when you get down to the details, we're trained to look at a function, make sure it behaves the right way, and consider all the failure scenarios — but bigger-picture failure scenarios sometimes get overlooked. So definitely keep these questions and tasks in mind when you're prepping your app, or improving your existing app, for the Kubernetes and container world.

All right, let's go and see what our little pipeline is doing. Oh, and we have so much stuff to discuss now over our pipeline. And — oh, it actually worked! Look at that. The gods are with us. Yeah — a live demo, and it worked. That never happens.

I really like the visual this provides of the actual pipeline. The term "software pipeline" can sometimes seem like magic — it isn't. It is not magic; it is just the steps you take to progress your software down the line. This particular pipeline is a development pipeline. We have a front end application — this one uses npm packages to build the front end — and our back end is built in Java. In the middle of this pipeline we actually build a container with these two resources. Then we publish the container — we put it in an appropriate repository — and then we can run some tests on it: run it through integration tests, that kind of thing. When everything passes, you promote your artifact — promote your container image — to the next step in your cycle, which in this case is a staging repository. A lot of these tools operate in the same way; this particular one integrates with various tools. This pipeline gets triggered by a commit to a GitHub repository. There are actually three repositories here: one is for the pipeline script itself, which describes this whole process; another is, of course, the front end repository; and the third is the back end Java repository. Any change to those repositories will trigger this pipeline to run at the appropriate place. This makes it really easy for multiple teams to work together without stepping on each other too much — getting their latest changes in, that kind of thing. A really valuable thing to have.

I have a question, Melissa. What I see here is that what we triggered in the beginning was the front end build — that was analogous to committing a change to one of the npm files — and that generated run number 11. When we look at the back end, at the Maven build, it is run number 7. Does that mean that run number 11 — build number 11 — will reuse the outcomes of build number 7 from the back end? Yes, and that's great, because we don't need to rebuild the Maven project every time something changes in our front end. This will save so much time on your builds — a lot of time is spent rebuilding something over and over.

And also — this is something I've seen too; Baruch, you've already touched on the promotion steps and going through your quality gates — I've seen CI systems and pipelines like this where the whole container is built at every step, or, when you're ready to deploy, it's built again.
Once you do that, you've lost your guarantee that it's the same container and will have the same behavior as what you tested; you've thrown away all your effort by rebuilding. So that's the whole point of promoting, of moving the same artifact along: make sure your artifact is built once at the beginning of your pipeline, and then, as you promote to staging and production, you're using the same artifact, the same build. Absolutely.

I think let's look at some sources now. Maybe the Dockerfile, or we could start with the pipeline. Okay, we can look at the Dockerfile... Actually, first: a lot of these tools use this wonderful language called YAML. Get used to it, because it's not going away; you find it everywhere in ops tooling, all over the place. This pipeline product also uses YAML, and it uses it to describe each and every step that we take, as well as the resources: where we get our source from, our GitHub repos, all of that. So it describes each step, the packaging, and you can also add conditionals. I don't have a lot of conditionals in here, but you can. Something to consider: what happens if the build fails at a certain step? What do you want to do then? It's really important to look at that failure scenario, because otherwise you're going to end up with a lot of headaches fixing builds in a hurry while everyone is stalled. So make sure you have good messaging and alerts, that kind of thing, when a build fails. And this particular product has a lot of built-in mechanisms, and, like Jenkins, there are a lot of plugins you can use. It's kind of nice to have things built in, it makes the file a little easier to read, but you can always fall back to shell scripting if you need to do something special. All of that is available to you. Let me just reiterate that this is just our example; we're here to show you the concept of the pipeline, not necessarily this particular tool. If you take Jenkins, Travis, Drone.io, or CircleCI, it will be the same principles. The syntax of the YAML file may differ a little, or with Jenkins it's their Groovy DSL, but the ideas, the principles of the pipelines, are exactly the same across the board.
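To make the shape of such a pipeline less abstract, here is a rough sketch of the kind of YAML we're describing. This is pseudo-configuration: the step types and field names are illustrative, in the spirit of the demo's tool, not the exact syntax of it or of any other product.

```yaml
# Hypothetical pseudo-config, loosely modeled on the demo's pipeline.
# Step types and field names are illustrative, not any one tool's exact syntax.
resources:
  - name: frontend_repo     # NPM front end; commits here trigger a run
    type: GitRepo
  - name: backend_repo      # Java/Maven back end
    type: GitRepo
  - name: pipeline_repo     # the pipeline definition itself lives in Git too
    type: GitRepo

pipelines:
  - name: demo_app
    steps:
      - name: build_frontend
        type: NpmBuild
        input: frontend_repo
      - name: build_backend
        type: MvnBuild
        input: backend_repo
      - name: build_image          # one image from both build outputs
        type: DockerBuild
        dependsOn: [build_frontend, build_backend]
      - name: publish_image        # push into the dev repository
        type: DockerPush
        dependsOn: [build_image]
      - name: integration_tests
        dependsOn: [publish_image]
      - name: promote_to_staging   # promote the SAME image, never rebuild it
        dependsOn: [integration_tests]
        onFailure:
          notify: team_channel     # alert fast so a broken build doesn't stall everyone
```

The structural ideas to take away are the Git resources as triggers, the dependency edges between steps, and a promotion step that moves the already-built image rather than rebuilding it.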
All right, and you were interested in looking at the Dockerfile, because I think this is where everything comes together, right? This is where we see how we combine all those parts: the Maven build, which obviously all the Java developers are familiar with, and the NPM build, which... it doesn't matter how it works, it's all JavaScript magic. At the end of the day, the Dockerfile is where we bring everything together. Right. So here's the Dockerfile. What this file does is build an image using the sources from our repositories; we have a private repository set up, we grab our back end and our front end, and we put them together into one image. Now, there is a lot that can be improved in this particular Dockerfile. As you can see, there are quite a few hard-coded values. Some things should be passed in; for example, you might not want to hard-code the registry you're pulling your artifacts from. That's something you'd want to pass in, because it's likely different per environment, or per stage, whatever stage you're building your image for.

Yeah, and another idea you can implement very easily, to get more flexibility, is around the versions of the dependencies. We use 1.0 and 3.0 here as kind of a snapshot, if you like; we're going to override them every time, and that is obviously a very bad practice when you go to build your production pipelines. Fine for a demo, but not how you should do things in real life. Instead, and this is a popular practice, and a good one, you can use the build number as your version. Your CI server knows the build number of the particular run and can pass it as an argument to the Docker build. It's interesting to see how we pass arguments into the image: those are the instructions on lines two and three. You can add another ARG for your build number, and then refer to your back-end and front-end components by those version numbers instead of hard-coded ones. That works every time, because the numbers change together with each run, and it gives you the consistency of pulling exactly the right dependencies into your Docker build.

That makes sense, and I'm sure you have strong opinions on this: a lot of Dockerfiles out there don't specify versions at all. You'll see base images referenced by name only, and other artifacts pulled in with no version specified, which means you're pulling the latest, whatever "latest" means. Sometimes you even see the tag latest written out explicitly. latest is just a pointer assigned to some particular version; it doesn't necessarily mean it's actually the newest. So be careful when you're referencing artifacts; make sure you have your head wrapped around your versions, what you actually want to pull in and build. But you know what the funny thing is? Even that is not enough. Look at line four, the FROM: we specified a version, we said we want openjdk:11, and it looks like we nailed the version down. We actually didn't, because Docker tags are mutable, and the maintainer of that tag can decide at any time to change what the 11 tag points to. They obviously have your best interest in mind, it will probably be security patches, and they probably won't break anything, but sometimes they might. To avoid that, don't use what you see on line four; instead use what's on line five: bring the base image from your own registry. When you pull it from your own registry, you guarantee that it will always resolve to the same base image, consistently, on every build. So even when you specify a version, you need to make sure you're the one controlling the artifact behind that version, not someone else who can overwrite it, whether with good intentions or bad. Yes. Always aim for reproducibility in everything: builds, configuration, deployment. Be explicit, be careful about where and how you store things, and make the right decisions for your situation.
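Putting those two suggestions together, build numbers as versions and a base image pulled through your own registry, a hardened version of that Dockerfile might look roughly like this. It's a sketch: the registry hostname and the artifact paths are invented for illustration.

```dockerfile
# Sketch only: registry host and artifact paths are hypothetical.
# The base image is resolved through YOUR registry, not Docker Hub,
# so the 11 tag cannot silently change underneath you.
ARG REGISTRY=registry.example.internal
FROM ${REGISTRY}/openjdk:11

# The CI server passes its build number, e.g.:
#   docker build --build-arg BUILD_NUMBER=42 .
ARG BUILD_NUMBER
COPY backend/target/backend-${BUILD_NUMBER}.jar /app/backend.jar
COPY frontend/dist-${BUILD_NUMBER}/ /app/frontend/

# Exec form, so SIGTERM reaches the JVM (see the shutdown-hook sketch earlier).
CMD ["java", "-jar", "/app/backend.jar"]
```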
One other comment I have about this particular pipeline: it's very good for showing the possibilities, what you can do, but one thing I would probably do differently is create separate images for the front end and the back end, because that way you can really take advantage of your scaling capabilities. You may want to scale your back-end server differently from your front-end application, and there are advantages to doing that: based on load or other performance metrics, you may see improvements by scaling those independently. So that's one reason to separate them out into their own services. Lastly, this is just a single pipeline. In my mind, maybe the ideal situation is that you have only one pipeline from beginning to end, and it's magic: a developer commits code and it flows all the way through to production. More likely, though, you're going to have multiple pipelines. Each team may have its own development pipeline, and you'll also have multiple deployment pipelines: perhaps one specific to deploying to your staging environment, and another that deploys to your production environment. Those might be manually triggered. And if you aren't comfortable, or don't have enough of a safety net set up to protect you from mistakes, you may want to schedule your deployments, in which case you'd have a separate deployment pipeline for that.

Well, this has been really informative; we've learned a lot. I love that the four of us have such diverse backgrounds, and that we all have good ideas and thoughts about how all of this should be pulled together. This has been an amazing experience. I hope everyone in the audience has gotten something out of this that you can take back to your teams. Let us know if you have any questions. Yeah, it looks like we have some time for questions, and that will be a great opportunity to talk about everything we mentioned and more. So please hit us up; we're all here and ready to answer. With that, thank you very much, and bye bye. Yep, thanks everybody.