So thanks to everyone who attended the last talk of the day. It was a very long day, and I very much appreciate everyone who joined me to listen and maybe share some feedback on our journey at RingCentral and what we are doing. When I started preparing this talk, I went through several phases and tried to talk about different things, but in my opinion the most important part is not what you can read in documentation or find online, and I could not give a talk about Flux or any of the other tools better than the creators of those tools, who are present at this conference. What I will try to do today is share our journey at RingCentral: how we ended up where we are right now with continuous delivery, what we did that was good and what was bad, and how to avoid some mistakes. Real-life examples. I will try to do it as a kind of story, with a prologue, acts and an epilogue, so it will be a little more fun than just a regular talk, but we'll see how it goes.

In the prologue, I will introduce our actors. The first actor is me. Nice to meet you, my name is Ivan. I have been working at RingCentral since 2015, with a break: I left RingCentral to work at a startup and then rejoined. All in all, I have worked at RingCentral for about six years and launched multiple real projects. In each case, I started the project and led it from start to finish, and those projects landed in the hands of thousands upon thousands of happy customers who are using them right now. I am currently a director of engineering at RingCentral, and in my current role I work on our latest project, the video project. But first, a little bit more about the company itself. So what is RingCentral? Has anyone here heard of RingCentral before? Like, yeah, wow.
Yeah, so it's a public company, quite a big company, which provides so-called unified communications as a service. One of our products is called MVP: message, video, phone. It gives businesses around the world the ability to organize their communications with the outside world and inside the company: phone calls, messaging, video conferencing. Messaging is like Slack, video conferencing is like Zoom, and phone is just a regular telephone system. And the company, like I said, is quite big; we have a global presence: North America, South America, Southeast Asia and Europe, so we are present almost everywhere in the world. If you know some of the big carriers like AT&T, Telus, Verizon, Vodafone: they are all our partners, and we work very closely with them. Most likely, even if you do not know about RingCentral because we work in a B2B space, you have encountered it before, for example when you called Telus and were put on hold. Not all, but some of that is handled by RingCentral in our systems. So it is quite a lot of deployments around the world. One important thing we are proud of is our five nines uptime SLA, which we have constantly upheld year after year for multiple years. Because when we are talking about communications, it is the last thing that should go down: multiple things can break, but you should still be able to communicate with your customers and inside your organization. We also provide our services not only to regular businesses, but to small and big enterprises, as well as government organizations, healthcare institutions, and universities and other educational institutions. So we have multiple attestations, multiple certificates, all that stuff, and we need to handle it.
We need to make sure we are compliant, we need to make sure we protect our users' privacy, and all of that. That is quite a big challenge in a continuous delivery world, when we are building pipelines and all this stuff. So this is our environment; it is a given. Within this given state we must somehow make sure we can live in a modern world, meaning we cannot deploy every quarter the way a large share of telecom companies do. We want to do it fast; we want to give our customers value as soon as we can. So what steps did we take on this journey?

Act one. I would say it is maybe the most important act of the journey. It is the year 2015, when I joined the company and started to work on the analytics project. It is called real-time analytics, and it was, and even to this day is, one of the most successful internal projects, because it started completely from scratch, from a green field. To this day, our real-time analytics, our quality-of-service analytics, and everything we do for our customers in terms of analytical capabilities is considered the best on the market among UCaaS providers. So how did we do it? The things I will say here will be a little bit controversial. I do not want to pit technologies against each other; I do not want to say these technologies worked and those did not. I want to pit methodologies and ways of thinking against each other, and you will see what I am talking about later. We started in 2015 and we had two big contenders in terms of containerization. Before that, everything ran in virtual machines: VMware cloud, VMware on our own hardware. And we said, okay, we need to containerize, we need to start using Kubernetes.
Again, this is 2015: Kubernetes is still very young, and there is another contender, Mesos with Mesos Marathon, which we tried as well. But we took two completely different approaches. Both were called DevOps, but they were not the same. What do I mean by that? Think of the DevOps infinity symbol: Dev on one side, Ops on the other, interconnected. In my experience, the focus is usually skewed very heavily either to the Dev side or to the Ops side; they are not evenly balanced. In our case it was the same. We had one DevOps approach that was heavily Ops-sided and another that was heavily Dev-sided. I worked on the Dev side, and another, big team worked on the operational side. I would call them the process kind of DevOps and the department kind of DevOps.

On the process side of DevOps, because analytics was a greenfield project, we focused very heavily on end-to-end ownership. We tried to give our customers the product as fast as we could, and we were able to build our first version for the first real paying customer in just four months. That is very quick for a company like RingCentral; trust me, some projects take years to launch. The department kind of DevOps, meanwhile, focused on the operational side and wanted to build a tool set, like everyone does, like many companies still do right now. We hired DevOps engineers and said, okay, we are going to do DevOps by building tools: developers would use these new tools, and we concentrated on the tooling. Another difference between the operational DevOps approach and the process DevOps approach is that we focused on a monorepo. Everything was configured in one monorepo, and it really stuck.
From that day forward, all our projects have used monorepos, and I think it is the best possible approach to organizing this kind of work, because you have the ability to atomically update configuration and code and package them together, and there are a lot of tools that help you work with monorepos and environments. So yeah, we focused on delivery; like I said, we wanted to push updates to our customers daily. Even before we knew about the DORA metrics and all this stuff, we just wanted to make sure we could do it as fast as possible, and we rejected some processes. On the downside, we created our own tooling, because again, in 2015 no service mesh existed, at least not in working condition, there were no operators, no great pipeline tools, so we wrote everything from scratch. We used the Scala programming language at the time. And we did not document anything, because we were focused purely on delivery, and of course because of that we had really poor documentation.

So of these two approaches, I do not want to say which one was better and which was worse; I only speak from my experience. In our organization, in our company, the second one never worked. I witnessed multiple projects which failed in their tracks, or failed later in the year, or were canceled after a year, or achieved very little meaningful output. And I think the main reason is that people were not focused on the customer. There was a great talk yesterday called, I think, People, Process, Tools. And what we did, unintentionally, was start with people and try to mold the process before we worked on the tools. This was very, very important. In the end, what matters is the code we build and the customers we serve.
Anything in between is just noise; it is lost effort, it does not provide any value. So the very first lesson we learned, the thing which should be the starting point for GitOps and anything we build, at least from my experience, is an established DevOps culture. Without it, you will build tools, but, again from my experience, it is very hard to succeed. I have never seen a successful project that focused on the operational side without thinking about the cultural change first, because in the end, after you have done everything, you will hit resistance from the development side and from the business side, and the project will most likely wither and die. So it is a prerequisite.

Fast forward to the year 2019. I had done additional projects, and I started to work in AI. A hot thing at the time; even hotter right now. We were building services for speech recognition, computer vision, conversational intelligence, all this very interesting stuff. The transformer paper had been published recently, so we were excited and working on all of it. And when we started these projects, another big change happened in how we approached our development and what tools we used, and this change was GitOps: we found out that Flux exists. It was version one, or even zero-point-something, the very first version of Flux and the very first days of GitOps. And again, Spinnaker versus Flux: I will not say one is better than the other. But what we understood very deeply, and it was a moment of clarity, is that we started to think not in terms of pipelines but in terms of conditions. I constantly hear at this conference: pipeline, pipeline, we build pipelines. I think pipelines are evil. Pipelines are very hard to maintain in a real production scenario, pipelines are hard to build, and pipelines can be very slow.
I can give you a real-life example from the RingCentral point of view. We start building a simple pipeline: build something, verify it, test it, then deploy it, for example, to production. A simple pipeline; it works. But then you start to say: okay, I need to run different kinds of tests. I need to run regression tests. And what about manual approval gates, how do I accommodate them in this pipeline? Okay, maybe I need to run the pipeline in staging because this is a critical change, and how do I define which change is critical? Okay, I need to add change-management procedures. I need to add additional security checks. I need to add more, and the pipeline just grows and grows and it never finishes. It takes too much time, and when you start to change pipelines and the tooling changes, it is a mess, very hard to reason about.

So when GitOps appeared and we started to put this puzzle together, we started to think about our software not as a delivery cycle, not as a pipeline, but as a set of conditions which must be satisfied. What do I mean by that? These conditions are maybe equal to requirements, maybe equal to something else; I do not have a good name for it, I coined it while preparing this presentation. I have a piece of software which needs to run in a specific environment, and the common denominator is code running in an environment; in our case, a Kubernetes cluster. Different types of environment for the same piece of code have different conditions. In a lab, you just need working code to be able to run it. In a CI or integration environment, it needs to be not just working but at least without crashing bugs, not crashing all the time.
In canary, and in preview releases for internal users, it needs to pass some security checks and some additional checks like quality gates, but it does not need to be fully robust. If, on the other hand, we are deploying to customers like our partners, or to hospitals and healthcare institutions, it needs very rigorous security checks, and we need to make sure that what we ship does not introduce any kind of backdoor or malware. That takes a lot of effort. So every environment has different requirements, and as soon as you satisfy those requirements, you can deploy. Instead of a pipeline which runs sequentially, you simply say: I have satisfied these conditions, so I can deploy to this environment. This is what changed our way of thinking; GitOps essentially allowed us to think in this manner. We have a Git repository which describes all the environments and all the artifacts in those environments, and now we basically only need to make sure we satisfy the conditions.

So how do we do it? We changed our approach, and we are going to change it again; I will come back to that a little later. But first, a few additional pieces which were very important in this process. The first one is a no-brainer: we have seen talks today about canary releases and canary analysis, and one of the tools we use heavily for all our stateless deployments is Flagger. Is anyone familiar with Flagger? Okay. It is a tool, also developed by Weaveworks, which lets you configure canary releases very easily. You can define canary, A/B testing, and blue/green release types.
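Before going deeper into the tooling, here is a minimal sketch of the condition model described above. This is an illustration only, not RingCentral's actual system; the environment names and condition labels are invented for the example:

```python
# Minimal sketch of condition-based promotion: each environment declares the
# set of conditions an artifact must satisfy, and an artifact may be deployed
# to any environment whose conditions are a subset of what it has already
# satisfied. Environment names and condition labels are illustrative only.

REQUIRED = {
    "lab":        {"build-ok"},
    "ci":         {"build-ok", "unit-tests"},
    "canary":     {"build-ok", "unit-tests", "quality-gate", "security-scan"},
    "production": {"build-ok", "unit-tests", "quality-gate", "security-scan",
                   "regression-tests", "change-approved"},
}

def deployable(satisfied: set[str]) -> list[str]:
    """Return every environment whose required conditions are all satisfied."""
    return [env for env, needed in REQUIRED.items() if needed <= satisfied]

# An artifact that passed the build, unit tests, the quality gate and the
# security scan qualifies for lab, ci and canary, but not yet for production.
print(deployable({"build-ok", "unit-tests", "quality-gate", "security-scan"}))
# → ['lab', 'ci', 'canary']
```

The point is that nothing here is sequential: a condition can be satisfied at any time, by any system, and an environment becomes eligible the moment its whole set is covered.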
With Flagger you also define the canary analysis routine, and this is important because with GitOps you really just change the environment, and, as several talks today have mentioned, changing the environment can lead to problems, so you need additional verification. Flagger helps with that. A very important tool; check it out. Another tool we use heavily to satisfy conditions is Testkube. Testkube, again part of the Kubernetes ecosystem, allows us to run tests in parallel in a Kubernetes cluster and to use an API to verify that a specific test really was run, as an audit trail. So we know we really launched this test and this test passed, and we can rely on it as a requirement, like I said. We also use Kyverno very heavily, because it is one thing to have all these requirements set up, with all the audit logs and trails showing what was really launched, and another thing to enforce them. We use Kyverno to enforce that all the requirements really do hold for the piece of software we are trying to deploy. Combining all that, we were able to change our tooling, replace one thing with another, see where the benefits are, and work with different kinds of pipelines, and it made everything a lot easier.

An additional thing which was very, very important for us, and which I always talk about inside the company, urging all our developers and engineers to adopt it, is hermetic builds. What does that mean? A hermetic build means that when you build something in different environments, you always get the same result; it is sealed, hermetic. I build it on my local machine, I build it on one CI server, I build it on another CI server, and I get one artifact with exactly the same checksum.
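That checksum comparison across build environments can be sketched in a few lines. Again, this is a toy illustration, with invented environment names and artifact bytes, not our actual tooling:

```python
# Sketch of a checksum-drift check: the same commit is built in several
# environments; if any artifact's digest disagrees with the majority, that
# build environment is flagged for investigation. All names are illustrative.
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 digest of a built artifact."""
    return hashlib.sha256(artifact).hexdigest()

def drifted(builds: dict[str, bytes]) -> list[str]:
    """Return environments whose artifact digest differs from the majority."""
    digests = {env: digest(blob) for env, blob in builds.items()}
    # With a hermetic build every digest is identical; the majority digest is
    # the reference, and any outlier points at a suspect build environment.
    reference = max(set(digests.values()), key=list(digests.values()).count)
    return [env for env, d in digests.items() if d != reference]

builds = {
    "laptop":  b"artifact-v1",
    "ci-east": b"artifact-v1",
    "ci-west": b"artifact-v1-tampered",  # simulated compromised builder
}
print(drifted(builds))  # → ['ci-west']
```

With a truly hermetic build, any outlier digest immediately points at the build environment that needs investigating.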
The first benefit of hermetic builds is convenience: you do not have environment-specific build issues, and you can reproduce a build locally just from the commit. But another very important part is security: building the same thing in multiple environments is itself one of the security checks, because you need to ensure you have secure build environments. You can build in several environments, and if the checksums drift, you can figure out which environment might be compromised or otherwise have a problem. It also allows us to deploy the same build everywhere. Again, for federal agencies under FedRAMP, or for healthcare institutions, we have very strict requirements on where a build may be produced, how it is built, which environment is used, and who has access to that environment. If you do not have hermetic builds, you can essentially get a different image in each place: you deploy one thing in one environment and another thing in another, and you pray to the gods that, because it was built from the same sources, it should align. That is not always the case. So hermetic builds are very, very important. For this we use NixOS for the base layer, which isolates things at the library level, we use Bazel as the build tool, and we orchestrate it with GitLab pipelines, since we use GitLab internally.

Moving forward to today: right now I am working on the video project. Like I said, we use tools like Flagger and Testkube, which are part of the Kubernetes ecosystem, because they have APIs that are easy to integrate. But even with these checks in place, we had to invent our own way of wiring them together without pipelines: how one API calls another once our checks are satisfied. It is not that easy. Then a very, very cool thing happened recently: the ability to support OCI manifests, and the ability to sign them.
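As a sketch of how this lands on the consuming side: Flux's OCIRepository source type can pull a signed artifact and refuse to apply it unless its cosign signature verifies. The URL, tag, and secret name below are placeholders for illustration, not our real configuration:

```yaml
# Sketch: Flux pulls the OCI artifact that holds the manifests and rejects it
# unless the cosign signature checks out. All names here are placeholders.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: app-manifests
  namespace: flux-system
spec:
  interval: 5m
  url: oci://registry.example.com/manifests/app
  ref:
    tag: stable
  verify:
    provider: cosign
    secretRef:
      name: cosign-public-key   # Secret holding the cosign public key
```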
So what do I mean by OCI manifests? Flux allows you to package all your manifests into a container image, an OCI artifact in this case. This artifact is immutable; it already contains all the deployments, all the container images you need, all the settings, everything: secrets, config maps, all the stuff. And then you can sign it, and you can sign it multiple times. So what we do right now to satisfy our conditions: when we build an OCI artifact, we simply sign it, very easily. Tests passed: we add a signature. Security checks passed: we add a signature. Even manual QA regression can sign it; in some companies there is still a manual verification step, and we can sign for that too. A secure builder produced the same image: we sign it. We can have 20, 30, 50 signatures on the image. And then, basically, all our environments are always verifying: if an image carries all the necessary signatures, it can be deployed, and it is promoted automatically. It is a completely asynchronous workflow, and OCI artifacts and cosign simplify it to a level where it is really easy to do and really easy to reason about, because you have an audit trail, you have signatures you can trace and trust. And the different steps in this process do not need to know anything about each other, which is also very beautiful: you can easily change any of them.

Oops. So, the epilogue. I had a different epilogue before I attended this conference. That one was about the tech radar we created and used. Does anyone know what a tech radar is? Okay, no hints. A tech radar is another really nice thing in an engineering ecosystem: a circle divided into quadrants and rings such as adopt, trial, assess, and hold.
You can easily move technologies around on it and say: this technology is good to adopt, this technology we should deprecate, et cetera. So I had thought I would talk about the tech radar in the epilogue, but now I think that is not as interesting. What is interesting is that during this conference I came across the project called CDEvents. I had never heard about it before, but it resonates very strongly with what we do ourselves internally in trying to do this pipeline-free delivery. CDEvents is a specification. It does not have a real, production-grade implementation right now, only proofs of concept, but it tries to define common ground between all the components: common APIs, what events each part of the continuous delivery ecosystem should emit, how they should be consumed, et cetera, so that each piece can be integrated into something much bigger. So what I think in our case, in RingCentral's case: yes, we are using signatures, and it is a very beautiful solution right now, but we will definitely go and look into CDEvents, because it looks like we are finally starting to get a level of integration which does not rely on a pipeline, be it a Jenkins pipeline, a GitLab pipeline, a Tekton pipeline, et cetera: just completely independent systems, completely independent tools which can speak one language to each other. It is a beautiful solution, and again, my recommendation: go check it out. It is a great project, and I think it has a very bright future. Okay, so let's end. Any questions? Yeah.

Audience: I was at the CDEvents talk yesterday and it looks like it works. You just told us how you do it in your company: you do not use a pipeline, you use different environments and build something for those environments, and I remember CDEvents works the same way.

Yeah, yeah. So I think we arrived at this completely independently.
I think it additionally validates us: if we did something in the same direction the community is trying to move, I think we did the right thing. For me it was very exciting, and it was a pleasure to hear about it; it is validation that we essentially did the right thing. We built it ourselves, but I do not think you should: we will have CDEvents, and if you want to move in this direction, I think it is better to take the existing tool, or at least the existing approach, and try to adopt it. And yeah, thank you. Thank you so much. One more thing: Vancouver is a really beautiful city, and this is a really beautiful neighborhood; I live in Vancouver myself. So if you have time, go on a hike; the weather is excellent. You can go up north to Squamish and hike the Chief, hike Grouse Mountain, or hike Eagle Bluffs here in the mountains. It is an experience of a lifetime. Thank you.