Hello there, my name is Stefan Staudenmayer. I work for Instana, the APM for microservices. Today we want to talk about shift left: what it means, what it means for you, and how your company can accelerate. I want to quickly introduce myself, tell you what we do and what I do at this company, what we need in order to shift left, and its benefits and pitfalls. This talk was initially planned for two people; I'm afraid Sanya Bonich can't join us today. She had to cancel because right now she's doing interviews.

I've been a programmer before, and I've been a sysops person before, and it's funny that us admins got renamed again and again and again, while engineers are still engineers. This is the idea of shift left. I was the first full-time ops person at Instana; I came into the company very early, in the startup phase, and we built an on-premises installer for our SaaS solution, for our bigger or more regulated customers. Immediately a lot of testing had to get done, and it made sense for someone with much experience in this, from both sides of the aisle, to then open the QA department. I'm a manager now, but I still like to crash systems for fun and profit. You can ping me on Twitter, on LinkedIn, or via email; I regard myself as pretty approachable.

So let's quickly get to what my company does, in order to explain how I got into the field. We are a monitoring solution. Here you can see a map of hosts that a customer can run: these are the little boxes on the hillside, containing slices. These slices are applications, services, databases, whatever. Even software-as-a-service solutions, databases as a service, or network load balancers, everything. They might get colored because they are facing problems, not only in themselves, but also in how they are being talked to by applications. And this is where the standard monitoring solutions diverge from APMs, which also look into your custom software.
Like we see here, someone is talking to a cache system very inefficiently. You might even run this on a platform like Kubernetes, which has its own APIs that monitoring software can talk to. That makes sense, because over there you can see whether you're over- or under-provisioning your platform. It involves distributed tracing with a hands-off mentality: in the end you're only configuring credentials and such.

Here you can see Robot Shop, which is a public GitHub repository. It's a polyglot web-shop environment that we've built with built-in problems that keep on reoccurring, to showcase our monitoring solution, of course, but please go ahead and test it out. Send us PRs, we love that. Here you can see the rate of errors on your calls. We've got other stuff built in that you might want to check out, like an interactive map where you can see your services talking to each other, and where you see when one service suddenly disappears or gets rescheduled, or calls start failing, which is oftentimes a problem. There's also an always-on profiling solution across multiple programming languages, which is neat for seeing where you're losing milliseconds in customer requests, which is always nice. But try it out yourself: go to playwith.instana.io in order to give it a shot.

This is where my story comes into play. Shift left hit me immediately when I did a lot of digging into what it is that I'm actually doing there. I saw that the DevOps movement brought us, among other things, people that call themselves DevOps engineers, which shouldn't have happened in the first place; we wanted to get rid of silos, which was the entire point. Shift left means that you involve people early in the process of software delivery. And we recognized that testing gets stuck and takes longer when you only test systems, or test the entire release, just before launching it. But let me extend an olive branch here and say, OK, let's go through this step by step.
Let's take the pyramid that you see in every testing blog article you'll find on the interwebs; I include myself in this. You probably already recognize it. You have the number of tests on one axis and how much they cost in terms of time on the other: the amount gets smaller as the tests get more time-intensive. Drilling it down to components, you have your unit tests testing your functions. Say you don't use a strictly typed language: a function accepts a parameter, and this parameter can be really anything, especially if it's a string. The more components you put together, the closer you get to a contract-driven test or an integration test. In the end, of course, you have your browser clicking around in an automated fashion; that's nowadays called an E2E test.

But what many people leave out, either to appear more adult or more grown-up than they might be, is the manual part. This does not only involve things that are yet to be tested. It also involves things that are hard or outright impossible to automate, or that just don't make sense to automate. We realized this when the team that deals with all the authorization and authentication parts of our software built the Sign in with Google SSO solution. Sometimes code changes in the background that needs to be tested. But do you really want to write a Selenium test that clicks around in Google's own HTML, which can change at any point without them even telling you? Because why would they? This is where the modern kids come in: a friend of mine works at SoundCloud, and he said, just ditch it, just throw it away. At this point it makes so little sense to test it that you can just use your APM for it. Either do it manually, or just monitor the calls that you're receiving after Google acknowledged or denied someone access.
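This monitor-instead-of-automate idea can be sketched in a few lines. To be clear, this is a minimal sketch, not Instana's actual alerting: the daily call counts, the `login_anomaly` function, and the thresholds are all hypothetical, and in practice you would feed it from whatever metrics API your APM exposes.

```python
def login_anomaly(counts_per_day, min_expected=1):
    """Flag when the most recent day's SSO sign-in count drops to
    (near) zero despite regular activity on the preceding days.

    counts_per_day: daily counts of acknowledged "Sign in with Google"
    calls, oldest first (hypothetical feed from your APM's metrics).
    """
    if len(counts_per_day) < 2:
        return False  # not enough history to judge
    history, today = counts_per_day[:-1], counts_per_day[-1]
    baseline = sum(history) / len(history)
    # Dozens of logins per day on average, but fewer than min_expected
    # today: either everyone is on vacation, or the SSO flow is broken.
    return baseline >= 10 * min_expected and today < min_expected
```

So `login_anomaly([50, 48, 52, 0])` fires, while `login_anomaly([50, 48, 52, 47])` stays quiet; a weekend dip still has to be judged by a human, which is exactly the manual part of the pyramid.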
And you can be pretty sure that when no one was able to log in, when you had 50 people per day signing in with Google on your SaaS software and you have zero the next day, then either it's a weekend and everyone's on vacation, or something is wrong. But at least you can check it. Try it for yourself, do it manually.

The idea behind shift left, beyond getting people involved earlier, at least in my opinion, is that you move away from the titles and see the tasks. Of course, you can get around hiring QAs when your engineers do all the work. You can also, and this is my personal point, extend this to all the other branches: you can get around hiring separate support people for your company to do the ticket answering and pick up the phone for customer problems. And if you take a step back and stop seeing a QA department, stuff gets done more quickly, I promise.

On to shift left, and I don't mean the security startup, because there is a security startup called ShiftLeft. What you see here is a visualization of the software delivery lifecycle. You can find it everywhere on the web, and it includes the separate steps you take when building, releasing, deploying, and monitoring software. From the planning, code gets written; artifacts get built from this code; then it gets tested, sometimes, probably, hopefully, in an automated fashion. You pack many of those features into a release. This release then gets deployed when everyone's happy with it. And notice that I didn't say the QA department is happy with it, but everyone is happy with it, because someone tested it, and it doesn't have to be QA. Really, what we tried to achieve with DevOps was getting rid of silos. What we didn't want is people calling themselves DevOps engineers, because it just confuses everyone.
I first saw it popping up when Puppet and Chef came into widespread use, and every sysops person who had stuck hard disks into trays before, and who was able to write a bit of Ruby to automatically assign partitions to directories, was calling themselves a DevOps engineer. This was not the plan behind it at all.

See, you take the software release lifecycle and you do release testing. We also do release testing, of course, because even when you're at the point where you have so many testers that almost everything gets a second pair of eyes within the sprint itself, so that nothing piles up onto the release phase, you still need to test how all of these parts play together, as a kind of regression testing. Of course, this also involves security. But from a security standpoint it makes even less sense to do security testing only at the release stage. So does it make sense today to do all of the QA testing at this stage? Ditch that mentality completely is what I'm trying to get at. Getting from there to here, maybe a bit drastic, you think, is called shifting left.

How do I test software that isn't built yet? TLA+. Yes, it's a troll, and yes, it's a bad one, but I had to take the opportunity. And again, this drills down to almost all the sections of the company. And what are those? It's not only a QA topic, even though you find it a lot in QA articles talking about shifting left; you can also find it in security. The point I'm trying to make extends to all the other sections of the company as well. Imagine a security person telling you that for a certain certification or a certain due-diligence test they need all the IPs of your public platform, which, depending on your automation, might change at any point in time, but someone needs to do a security test and this needs to get done today.
The worst point in time this can happen is just when you're about to release something and half of the staff is on vacation.

We had a very successful experiment putting a first-level support person into the daily meetings of a certain team. By picking up all the terms, googling them in the meantime, and getting more and more familiar with the problems the team is facing and the way the features are actually built, this person was able to answer way more questions without even bothering the engineering department. That's just one example.

And of course operations. This was the entire point; you can find it in the name DevOps: development, operations. Getting rid of the silos may have given you the wrong impression that it only involves operations and engineering. Why did we say back then that this separation is a bad idea? Because when operations and software engineering run as separate departments, not everyone has all the information required to come to technical decisions. Of course, technical writers can plan much better when they know in advance which features are being built. This again drills down to silos. And this, of course, does not mean that everybody has to be in every meeting; by no means, because then nothing gets done. Whatever floats your boat. Hey, IT, we have this feature, it's very inefficient. We can fix it, but this feature needs to be deployed by then and then for this one particularly important customer. And IT goes: yeah, that's not a problem, we have those two boxes lying around that just need to get reinstalled. If this gets public early in the software release cycle, you might give them more time to fix their tech debt instead of having to fix it immediately.

Benefits and pitfalls. This is just a short version, because I'm doing this talk alone now. Of course, the benefits are easily identifiable.
You can steal them from the DevOps movement: identifying blockers and problems as soon as possible, fewer "hey Joe, do this now because I need it now" interruptions, but also getting more stuff done more quickly.

And just as a few examples of what can go wrong: you might get the wrong impression that everyone has to be in every meeting, and that doesn't make sense either. Say you put everyone in the entire organization into the planning meetings, and then figure out that stuff needs to be adjusted as the feature is being built. When your technical writers are already writing the documentation or the blog articles, or already preparing conference talks and slides and all of that, and you still need to adjust things as you go, then you'll either make a clown out of yourself, or at least you'll see that something went wrong and you need to adjust.

So, flattening the woven lines of the software delivery lifecycle into a straight line and seeing where it would make sense to involve more and more people is something you should invest more and more time in, not only for every team, but also when you make adjustments in hiring, staffing, or the splitting of work into separate teams. What I will not go into is team sizes; other people have done a way better job on that than I could.

So this has been my short yet packed talk. Please leave comments, ping me on Twitter or LinkedIn, or write your comments in the chat. I will read them, and I will answer them to the best of my ability in time. Otherwise, you can find me on Twitter. I wish everyone a happy day. Bye.