Thanks. So welcome to this presentation. Today we're going to talk about Axway's transformation to the cloud. This is not a technical presentation; we're just going to go through the transformation we did from an organizational standpoint, from DevOps and moving to the cloud. We'll start with this high-level presentation, and Vince will continue with more details from a DevOps standpoint.

So, anyone ever heard about Axway? Raise your hand. A few people from GitLab, I guess. We are not very well known, but it's quite an interesting company. Historically, we are an on-premises software vendor, and we specialize in integration patterns such as B2B, managed file transfer, and API gateways. Interestingly enough, we're here to provide the tools for the transformation of our customers, and we did our own transformation, starting in 2015, and we're going to talk about that.

To give you an example of what our products do: when you connect with your smartphone and try to get your bank account information, we've got the API gateway that securely connects you to your information. Or if you do a wire transfer from one account to another, we've got the application in the back end that handles that transaction and makes it secure. We've got a lot of customers worldwide. We've been around for 15 years, and we've got nine out of the ten biggest banks in the world, so we're quite spread out. And we've got around 2,000 employees worldwide.

Our transformation started in 2015, when we onboarded a new leadership team and they made an assessment of where we stood. What we realized is that we were working in silos. We had a development organization, a QA organization, a product management organization, tech writers, you name it; we had separate organizations. Having that creates a lot of missed collaboration and a lot of frustration, and each organization had different objectives, different priorities, and was going in a different direction. This led to a lack of trust, a lack of collaboration, and a status quo in everything we were doing. Obviously, we were not doing any kind of continuous improvement, and we were going nowhere.

In addition to that, we had been through a lot of acquisitions over the years, so we had a lot of products, and we were investing the same amount of money in every single product, regardless of whether it was an end-of-life product or a new, innovative, strategic product for us. We had to change that. Again, it was due to a lot of acquisitions over time and the fact that we had this miscommunication and lack of collaboration across the organization.

The first thing that really kicked everything off was support and sponsorship from the executive team. If you don't have that, it's difficult to make a change in any organization, and I think it was critical for us to kick this off. So the first transformation we did was around tooling. Tooling is not a driver of a transformation, but it's really an enhancer of what you want to do, and it's really a key to your success, so you really have to define what you want to do with your tooling. What we realized before was that everybody had different tools, there was no collaboration, and the tools were difficult to maintain and so on. So we set up the requirements for those tools based on what we wanted to achieve.
And one of the first things we wanted was an open policy, regardless of whether it was in GitLab, which we've been using, or JIRA, which was our selection: everybody has access to the tools, and everybody can contribute to the backlog, to the source code, and so on, which was not the case before. So we made the change from our historical CVS and SVN to GitLab, and from VersionOne to JIRA for backlog management. As for the selection of GitLab: at that time, in 2015, keep in mind, there wasn't any issue management in GitLab yet, so it was mostly about source code control. The selection was about a better, more modern tool, a DVCS, because we are quite distributed, and we wanted a central location for everyone to contribute. Another reason we selected GitLab was that they provided an on-premise solution. We were working with governments, and at that time we couldn't have our source code in the cloud, so we installed GitLab on-premise. And finally, in order to get good adoption, the selection process and requirements came from our developers. We empowered our teams to provide input about what they wanted to use, and that helped drive the adoption of GitLab. We purchased GitLab in 2015, and six months later we had moved every single source code repository from SVN to GitLab. The adoption was very fast and easy.

Now, organizational structure: Axway's was very hierarchical, and your organization chart represents the culture of your enterprise, so we had to change that. The expectation for managers was no longer to review, validate, and decide everything, but to empower the teams, asking them to understand and question what they were doing, and so on. The first organizational transformation was around product management and the teams. We used to have two product managers for one product owner, and one product owner for 42 engineers. That didn't work too well, because we didn't have enough product owner resources to really drive the teams. So we changed that and put in one product manager/product owner for 15 engineers, to improve the autonomy and the reactivity of the teams. Another interesting move was that we removed the product managers and product owners from end-of-life products. What we realized is that if you assign a product manager to a product, they are going to drive the backlog, the need for resources, and the cost of that product. Taking product managers away from end-of-life products helped us refocus on the strategic products and just set up a sustaining team for the end-of-life ones. And finally, we set up small, self-empowered teams. By that I mean the development teams followed the two-pizza concept: small teams, but with the QA resources, the development resources, the security resources, the tech writers, all those different activities in one team, to deliver value to the customer. In parallel, we created centers of excellence. For example, we have a small product security group whose job is not to write security code, because that is owned by the development teams, but to provide guidance, expertise, and security gates to make sure that every product coming out of R&D is secure, validated, and so on. So we completely changed the way it was set up.
And it worked pretty well. Finally, we did a technology transformation. We had some pretty old, monolithic products, some with probably five million lines of code, and we quickly realized that we could not rewrite everything. But we set new guidelines: everything had to be DevOps, everything had to be microservices, and everything had to be cloud-ready and cloud-first for all new development. For DevOps, we introduced GitLab, Jenkins, unit tests, functional tests; everything was automated. Microservices for new development gave us more flexibility. And cloud-first meant everything had to be on the cloud, easy to upgrade, easy to deploy, and so on. So we changed the mindset of what we were developing.

But at the end of the day, you can change everything in an organization, the process, the architecture, the structure; what's most difficult to change, and what takes over, is the culture. The culture change is the most challenging, and it took a couple of years to get where we are. In 2015, we were only on-premise software, like I showed you initially. In H1 of 2019, we had 17% growth on the cloud, and more than half of our revenue is now on the cloud. So it took some time, nearly five years, to get where we are, and we're still working on it. But culture is a key part of this transformation. And Vince is going to talk in a bit more detail about how we made this change from a culture standpoint, and from a DevOps standpoint.

Awesome. Thanks so much, Eric. And thanks, GitLab, I really appreciate the cool swag. It really saved me today, because I only brought a t-shirt to talk in, and this is fantastic; I've been nice and toasty. So, I've got one button here. We came up with this question, how do you change culture, when we looked at how we delivered software. When you're coming from an ISV that's used to traditional software delivery for on-premise products, that's quarterly deliveries, sometimes semi-annual deliveries. You want to go to cloud native and SaaS, where you're delivering every day. That requires a complete mindset shift and a culture change at your company. And this was the question that we came up with: how do you change culture?

Show of hands: has anybody read The Lean Enterprise, by Humble, Molesky, and O'Reilly? Okay, a couple. So you're in on the secret: you start with behaviors. And you start with behaviors because what we do, who we are, how we want to behave, and how we expect each other to behave influences how we feel. It influences our values and our attitudes about our jobs, and ultimately that influences culture. So we have a simple acronym here. As we went on our cultural transformation to become more of a DevOps-type organization on our journey to the cloud, we decided we wanted to keep CALMS. And CALMS stands for culture, automation, lean, measurement, and, probably most importantly, sharing. In the next few slides I'm going to show you how we automate for continuous delivery, with an eye on continuous deployment; how we're lean, and in being lean continue to drive a culture of learning, which helps us become even leaner; how we measure DevOps at Axway; and, most importantly, how we share.

Has anybody ever heard of the DORA report? Okay, a good few hands here. That's the DevOps Research and Assessment report. It's been published for about the last five years. As somebody that loves to lead DevOps-type teams as well as development teams, I've carefully followed this report.
They've surveyed over 30,000 people over the last five years, gathered a lot of data, and put together great metrics around software delivery and operational performance. The things that we started looking at at the beginning of 2019 were lead time for changes, deployment frequency, change failure rate, mean time to restore, and availability. These span all three quadrants of your value chain when you go to build a SaaS product: software development, software deployment, and operations.

Before we get to how we measured in 2019, I want to start with how we started in 2018. As I mentioned earlier, we started as a company that was delivering software quarterly. We had the opportunity to build our first SaaS product in-house, not through acquisition, so we really had to retool our thinking and our behaviors in order to change our culture. Where we started in 2018 was probably around a medium performer, deploying between once per week and once per month. We did start developing cloud-native services, which allowed us to develop a little bit faster, but we were still limited by the end of the sprint. So we were delivering software at the end of each sprint, which was every two weeks, and if we missed that, then it was at the end of the month. Lead time for changes was definitely between a week and a month. Time to restore service was still less than a day, which is good. And our change failure rate was actually higher; we were probably above 15%. We just didn't have a good grasp of how things worked in a distributed world. Our first foray into distributed systems in SaaS was on Docker Swarm, so there was a lot of intervention and a lot of heavy lifting by our teams, focused on infrastructure and how that worked rather than on the product.

But we set the goal of being an elite organization for 2019. By the end of 2019, we made it to high. We touched on some of the elite categories, but we only assessed as high because that's where we could perform consistently. We were consistently deploying between once per hour and once per day, most of the time two to three times a day. Between one day and one week was our lead time for change; that's what it still is today. Our time to restore service was within two hours, so that was fantastic. And our change failure rate is definitely less than 15%. So by the end of 2019, we had done a fantastic job. It's really, really hard to make that type of movement in one year. Going to elite, yeah, we want to set those wild goals, and that's great because the team can look back, reflect, and see what we did and how we can get better. I'm going to go into that in the next few slides on some of the other behaviors within CALMS.

So where do we want to be for 2020? We're still aiming there, and I'm going to go over some of the behaviors and our lessons learned about how we're going to get there. Just to recap, if you want to be part of an elite-performing team, you're deploying on demand in the background, multiple times per day. Your lead time for changes is probably our hardest one: it has to be less than an hour, and we measure that from commit into the master branch to when that change flows all the way into operation. Time to restore service will be a lot less; our SLA will be less than an hour. And the change failure rate is still on par, less than 15%. I like this slide. It kind of shows the stages of evolution, what some of the outcomes are and what some of the behaviors are.
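As a rough illustration of the two delivery metrics defined above, here is a minimal Python sketch, assuming hypothetical deployment records that carry the timestamps of the master-branch commits they shipped. This is not our actual tooling, just the arithmetic behind lead time for changes (commit on master to production) and deployment frequency.

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment records: each production deployment carries the
# master-branch commit timestamps it shipped. Lead time for changes is the
# commit-to-production duration; deployment frequency is deploys per day.
deployments = [
    {"deployed_at": datetime(2019, 11, 4, 15, 30),
     "commits": [datetime(2019, 11, 3, 17, 45), datetime(2019, 11, 4, 9, 10)]},
    {"deployed_at": datetime(2019, 11, 5, 11, 0),
     "commits": [datetime(2019, 11, 5, 8, 20)]},
]

def lead_times(deploys):
    """Commit-to-production duration for every commit shipped."""
    return [d["deployed_at"] - c for d in deploys for c in d["commits"]]

def deployment_frequency(deploys, window_days=30):
    """Average number of production deployments per day over the window."""
    return len(deploys) / window_days

print("median lead time for changes:", median(lead_times(deployments)))
print("deployments per day:", deployment_frequency(deployments))
```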
So if you look at stage two, that's probably where we were in 2018. We were just getting started, lifting and shifting into the cloud. We had complicated rollbacks, not a lot of service ownership among the teams, some outages, lots of manual steps. In 2019, far fewer outages, fewer manual steps, and hours to deploy; we were on track for about 10 to 20 deployments per month. To become an elite performer, that's, as I mentioned, deploying continuously in the background, minimal to no outages, no manual steps in your value chain, in your pipeline, and over 100 deployments per month.

So let's talk about automation, because that's one of the behaviors within CALMS. What we wanted to do was automate and empower our development teams and trust them with making the changes that would affect production, all in one shot, by providing tools that automate for scale. These tools that we build need to be self-service. They need to have the capability and the resources so that development teams can take them and do their work without somebody setting them up, helping them out, and providing a lot of care and feeding along the way. If you want your development teams to run the Sonar scans, the internal quality checks, the external quality tests, all the automated test frameworks, and the static and dynamic security scans, it needs to work like any good product and service, period. One of the game-changing things that we did from a DevOps team perspective was that we started treating our tools like products for our development teams, and we hired a product owner. That product owner was a game changer. We really started treating our tools like products; we started eating our own dog food. Don't get me wrong, we leverage a lot of third-party tools (I'll show you this on the next slide), but really it's the synthesis of these tools in your value stream that is the product itself, that synthesis that's responsible for reliably delivering your product in a SaaS environment. The other great benefit of having a product owner on a DevOps team is the ability to bridge across other teams with their product owners or TPMs and understand what changes need to be made: getting the feedback from development teams, the operations team, and security, heading that up and rolling it into the tools, allowing us to communicate better.

This is an eye chart, but it's a 10,000-foot view of our CI and CD workflow, and it's there to show you all the different tools that we go through. On the left is CI, on the right is CD; just some high-level points here. Our goals with CI were common local development environments and pipelines teams could use, common CI paths, a common image registry, and a dashboard with build and vulnerability reports, so that teams can get feedback on whether their code has security violations or quality issues. On the CD side, some of our objectives were immutable infrastructure and automated deployment tools for it, plus common CD paths, so if a development team member wants to build and deploy a microservice and use the CD path to go from QA into prod, they get a common pipeline that allows them to do that. We use Jenkins to do a lot of these things. Like I said, if you notice here, there's a lot of tool sprawl. One of the things that we learned in 2019 is that traceability was difficult. It's difficult when you have to go across all these different tools, especially when you're going from Jenkins back into GitLab; there's a lot of link diving.
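To illustrate the kind of shared, self-service step a common CI path can expose, here is a minimal quality-gate sketch in Python. The scan summary file name and its fields are assumptions for illustration, not the output format of any specific scanner we use; the idea is simply that the common pipeline fails a build when high-severity findings appear, so teams get that feedback without bespoke setup.

```python
#!/usr/bin/env python3
"""Hypothetical quality-gate step for a shared CI pipeline.

Reads a scan summary (assumed JSON format) produced by earlier pipeline
stages and fails the job if any high-severity findings are present.
"""
import json
import sys

MAX_HIGH_SEVERITY = 0  # assumed policy: no high-severity findings allowed

def main(summary_path: str) -> int:
    with open(summary_path) as fh:
        summary = json.load(fh)

    # Assumed structure: {"findings": [{"id": "...", "severity": "high"}, ...]}
    high = [f for f in summary.get("findings", []) if f.get("severity") == "high"]

    if len(high) > MAX_HIGH_SEVERITY:
        print(f"Quality gate failed: {len(high)} high-severity finding(s).")
        for finding in high:
            print(f"  - {finding.get('id', 'unknown')}")
        return 1

    print("Quality gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "scan-summary.json"))
```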
And so the feedback we got from the development teams was that the visibility with CD is good, because you can see the end-to-end pipeline flow and see where things break in CD. But if you want visibility of your entire value chain, from CI through CD, it's difficult. It's hard to track changes back from Jenkins into GitLab; lots of link diving, as I mentioned. So they weren't getting the feedback fast enough. They were getting some visibility, but they had to work for that feedback, and that was great feedback for us to learn from. We also learned that at about 15 microservices, the complexity became a bit much for Jenkins to handle. We kept having to add inputs into the jobs, and when teams would go in and try to use it, it got hard to use.

This is one of my favorite slides about keeping CALMS and lean. This is Rio 2016, and we're coming up on Tokyo 2020. Does anybody know what happened here? They dropped it. Exactly. This is a dropped handoff. How many people here have had a dropped handoff from dev to ops, or dev to security, or security to ops? I mean, that's pretty much why DevOps exists, right? It's the silos that we have to break down. And there are a lot of parallels in our journey to the cloud. We had poor handoffs. We learned from our mistakes. We had to really work together as one team, not only working together but practicing together, and understanding how things work. And, I would say most importantly from a cultural perspective, seeking to understand before seeking to be understood. That's a habit from The 7 Habits, but it's really about trying to understand the other team's perspective before you recommend a solution, because oftentimes you're both trying to do the same thing, you're just articulating it in a different way. And that requires a lot of patience on the teams' parts. This team actually ended up winning gold even though they dropped the baton, because they found that another team had bumped them, and they got a second chance to rerun the race. I'm sure they practiced like hell to make sure that they weren't going to drop that baton again, and they won gold. It's the same type of thing that we have to do as part of our journey to the cloud. As Eric mentioned earlier, we have a lot of cross-functional aspects on the team, so we are more aware as a team and more empowered as a team to own services all the way into production. It's not just one whole team of developers; we have quality points of contact, DevOps points of contact, security points of contact.

Sharing: probably one of the most important behaviors if you want to go to a pure SaaS organization and operate as one team. Sharing is critical, and the only way we found that sharing really works at a high operating level is by providing visibility and transparency through dashboards and visibility in your tools. So I talked about the DORA report earlier; this is how we implemented it. This is a service-level view, and we've also implemented an organizational-level view. We'd like to open source it because we think it's really useful; we have the APIs built for GitLab, so if you're interested, we can talk afterwards. It basically measures our deployment frequency, and you can adjust the time scale; this is on a per-month basis. So for this microservice (we have about 25 microservices in our one product): 19 deployments, lead time for change is about a day and a half, our mean time to recovery is 12 minutes, our change failure rate is 5%, and our availability is within our SLO.
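The per-service view described above can be driven from GitLab's own APIs. Below is a minimal sketch of one such query, deployment frequency, using GitLab's standard deployments endpoint to count successful production deployments in a time window. This is not our open-sourced dashboard itself; the project ID, environment name, token variable, and date window are assumptions for illustration.

```python
import os
import requests

# Assumed configuration for illustration only.
GITLAB_URL = os.environ.get("GITLAB_URL", "https://gitlab.example.com")
TOKEN = os.environ["GITLAB_TOKEN"]   # a personal/project access token
PROJECT_ID = 1234                    # hypothetical microservice project ID

# Count successful production deployments for the reporting window via
# GitLab's deployments API.
response = requests.get(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/deployments",
    headers={"PRIVATE-TOKEN": TOKEN},
    params={
        "environment": "production",
        "status": "success",
        "updated_after": "2019-11-01T00:00:00Z",
        "per_page": 100,
    },
)
response.raise_for_status()
deployments = response.json()

print(f"Production deployments in window: {len(deployments)}")
```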
So this allows our teams to have transparency into how other teams are doing with regard to these metrics and to help each other out. If I expect a feature from a team whose service has breached its SLO, they're probably going to have to work on their stability for a while before that feature comes out, so it also gives you visibility into what they're going to be working on.

Blame, no blame: we implement a blameless culture. This is critical. It's one of the most impactful things when things go wrong that you operate as one team with no finger pointing, really seeking to learn what happened. And I know we talk about root cause analysis, but a lot of times there just isn't a single root cause. There are so many different things that happen in the context of that change that there are lots of causes. And there's no finger pointing. The goal is really to learn and understand how we can improve our technical systems, our processes, or even our organization, and how to repair those items. Basically, you don't want to have broken handoffs; that's where outages occur.

So for 2020, based on what we learned in 2019, here are some of the technical and behavioral changes that we're going to be making in our CI and CD, from the 10,000-foot view. The first thing we're going to do, based on developer feedback, is move all of our CI pipelines from Jenkins into GitLab. The reason we're doing this, as we touched on earlier, is that we want better visibility and faster feedback loops for developers and application code. We don't want Jenkins link diving, and developers are super happy about that. We're also going to be moving from a CI ops model to a GitOps model. The reason we're doing that is we want both infrastructure and application changes to occur in a consistent manner and to tell a complete picture of what's changing in the system, for everybody to see. That provides radical transparency as to how your SaaS is changing and allows every member of the team to actually take a look, investigate, and see what happened: my change went in with this change, okay, what could have been the impact of that? Today all we can see is our app changes; infrastructure changes we don't have a view of in the same context. The last thing, from a behavioral perspective, is really improving ownership of our availability, and that's implementing error budgets for each service. As I mentioned, it really galvanizes the behavior that the development teams are accountable for operational stability. If we fall below our error budget for the month, we automatically put features on the back burner and focus on stability for that service. It really takes the guesswork out of how to prioritize feature work versus technical debt (there's a minimal sketch of that calculation below). Nope, wrong button.

And so, overall, cultural lessons learned. If I were to say one thing that we've learned in our two-year journey, just to take a step back, it's that executive sponsorship is required, because culture change requires vigilance. It's like skateboarding: if you practice one time, you're probably not going to be that good at it, right? You need to keep practicing these behaviors to reinforce them and drive that change, which ultimately impacts your culture. We continue to practice and learn every day. The tools and processes that you use on your journey are going to change, so you just need to accept it. This is the world that we live in; it's part of learning and getting lean.
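To make the error-budget rule mentioned above concrete, here is a minimal sketch of the arithmetic, assuming a hypothetical 99.9% monthly availability SLO and a made-up downtime figure; the exact SLO and how downtime is measured are assumptions, not our published numbers.

```python
# Minimal error-budget sketch: a monthly availability SLO implies a fixed
# allowance of downtime; exhausting it flips the team from feature work to
# stability work. The SLO and measured downtime below are assumed values.
MINUTES_PER_MONTH = 30 * 24 * 60
SLO = 0.999                                          # assumed availability target
error_budget_minutes = MINUTES_PER_MONTH * (1 - SLO)  # ~43.2 minutes per month

measured_downtime_minutes = 55                       # hypothetical figure for the month

if measured_downtime_minutes > error_budget_minutes:
    print("Error budget spent: feature work goes on the back burner, "
          "and the team focuses on stability for this service.")
else:
    remaining = error_budget_minutes - measured_downtime_minutes
    print(f"{remaining:.1f} minutes of error budget remaining this month.")
```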
Like I said, we used Jenkins until we had about 15 microservices and realized we couldn't manage the complexity anymore. We couldn't really foresee that; that's something that you learn as you go along, and if we had been able to forecast it, we wouldn't have gone all in on Jenkins. We improved our focus on the cloud and rationalized our product portfolio, with more systems-level thinking to understand which features across all of our products might fit as a service in our platform. And then lastly, changing culture by changing behaviors: empowering and trusting teams to make changes, and providing them the tools, support, and visibility to make them accountable for their service in production. I want to highlight that quality, availability, security, and scalability are everybody's job. It's one team providing the service; it's the Axway team. Learning to be lean: continuous learning and practicing to get better every day. Sharing through transparency, visibility, and measurement, and being blameless. I want to thank everybody for joining. If you're interested in talking more deeply about our talk, please reach out to us; we're on LinkedIn. Also, if you're interested in our journey to the cloud and want to join our team, we have open reqs. Please check us out at career.axway.com. Thanks.