Hello and welcome to all of you out there in the cloud, and thanks for joining us. My name is Micah. I'm here with my teammates Hillary and John, and we're part of Accenture's Cloud First group. Accenture made a $3 billion investment this past year to help clients across all industries become cloud first businesses and accelerate their digital transformation. So today we're going to share with you our experience building a solution to deploy and support a Kubernetes application that scales on the edge and on any cloud. So John, you can go ahead and kick us off to the next slide, and I'll do some brief introductions of everybody here. John is a director within Accenture's Google Cloud practice as well as Cloud First, and the global Anthos and Kubernetes lead for Cloud First. With over 25 years of experience designing and delivering complex tech architectures all the way from on-prem to multi-cloud, John is your guy if you need help with any of that. Hillary is a rockstar software and platform engineer within the Cloud First design group. Cloud First design is basically the tip of the spear for enterprise cloud transformations, so Hillary engages with big clients on big transformations, and with that often come big hairy problems. Hillary is also a mentor with Girls Who Code, supporting gender inclusivity in technology. If we were in person, you'd all clap for that right now. Lastly, I'm Micah, and I'm also in the Cloud First design group along with Hillary, where I architect solutions and lead engineering teams in transformational projects as well as greenfield work. Outside of work I like to mentor young men, especially those who are breaking into technology from non-traditional backgrounds like myself. Let's go to the agenda, John. All right, so today we're going to state and share with you the problem that we had to solve, then the method that we took and the maturity model that we used to come up with our eventual architecture and solution.
And then we'll share with you a few lessons learned as well, so a pretty basic approach. Let's kick it off. I'm going to start by introducing you to our client. Our client provides solutions for the life sciences, and remember, we're a consulting company, right, so we have clients and we have customers; I'll try to make sure I make that distinction for you all today. Our client provides solutions for the life sciences by provisioning scientific instrumentation for their customers, so picture scientific labs. They have a new services-layer data platform that is going to connect their scientific machines to their network; you can think of it like a common services layer. And what we were tasked to do is to build an installer that would then deploy that application to their internal network. A lot of our client's customers insist on self-hosting for data privacy reasons, which may mean that they host in the cloud or some host on premise. And some of the small customers are even ordering an entire bare metal machine from our client with the required packages pre-loaded onto the machine, for them to then self-install with our solution. But our client ultimately wanted to provide, we identified, these five main things. Number one is excellent service and support to their customers. That might go without saying, but this was a basis that we kept coming back to. That would also mean practicing operational excellence, so robustness and resiliency were probably among the most needed -ilities in this solution. The solution also needs to be extensible to a variety of needs and environments, with different IT departments from their customers each having, you know, their own rules; as the customer base scales, so does the variety of the needs. But what's nice about the solution is that it's supporting a relatively new application that was written for Kubernetes.
And so that makes it easy for us to deploy anywhere. When it came to cost efficiency, that was something our client obviously wanted, to save both themselves and their customers money, so that was important. And lastly, a solution that's relatively future proof, one that would last for years to come. Like I said, the underlying app itself is brand new, and so they want the solution that installs it to support that as well. Let's go to the next slide. All right, so meet Albert. Albert is our representative end user. He's a scientist in a lab, with or without a lab coat depending on his exact setup among different customers. Albert comes with a few constraints. Number one, his data is top secret. So John, you can show that next portion; I've broken this down a little bit for you, so you're going to have to click extra, make you work a little harder. So his data is top secret, it's classified, right? That means we need to handle it very securely, very carefully. We solved for this in a couple of ways. First of all, we have self-hosted infrastructure, so already we're enabling Albert to host this within his own infrastructure, to alleviate his concern of his data being merged with or seen by any competitor, for example, it being his own infrastructure. We also needed to provide a way for him to supply his environment credentials in a safe and secure manner, so credential management was also an important part of our solution. You can see Albert is not great with computers, so we solved for this by providing both a managed support model and a custom, user-friendly UI, a proprietary app that we built for our client, where Albert doesn't need to interact with any cloud consoles or run any kubectl commands. All of that is abstracted by the front end. The third thing is that Albert's IT guys will not let anyone access their environment. So he's just following directions, but the big bad IT group says no go.
So we did a few things here. We had to look for a way to run the app installation locally; Hillary will tell you later on about the local pipeline that we built for that. Flexibility was also important here, as the installer needs to be configurable enough to meet, again, like I said, a variety of different IT group requirements. And if this wasn't all complex enough, John, you can go to the next slide: we have a lot of Alberts. So add to this the complexity that we have to serve what our client anticipates to be hundreds, even thousands of customers. One more click for me, John. That means we have to support, of course, an even larger variety of clouds and environments, and scaling to this many users made it really important that we reduce the margin for error. Making it as low touch and highly automated as possible was key. And lastly, in order for technical support staff to support so many customers, it was really important that we found a way to provide what essentially ended up being a single pane of glass into the operation of their Kubernetes clusters. This was a huge and really exciting part of our overall solution, both to us and our client, as they're going to be, you know, in good hands for years to come, but I will not spoil too much here. Without any further ado, let me pass this along to you, John. Thank you very much, Micah. This part of the presentation is the methodology, what we did. Our approach for the project was to leverage a cloud native solution by utilizing a Kubernetes platform that allowed for the environment challenges of the project, along with leveraging the best third party tools in the industry and combining that with industry best practices. This solution resulted in the recommended platform for the project. Over the last five years, I have been talking to clients about their journeys. There is the journey to cloud.
There's also the DevSecOps journey, as well as the Kubernetes container journey. During the last six months, I have shifted this conversation; I have turned it on its head. Today, every client is multi-cloud. We all agree on that, I believe. And it is a combination of all these journeys: the journey to cloud, the DevSecOps journey, as well as the Kubernetes container journey. Today, the conversation is the cloud native journey. What does being cloud native really mean? And why is this so important? What should you know to accelerate and optimize this transition towards being cloud native? Enterprises that migrate from an on-premise to a cloud native environment need to rethink their infrastructure requirements. When migrating to a cloud native environment, the infrastructure becomes spread out among multiple IT environments, while applications are distributed to support the enterprise's digital transformation. As your IT infrastructure progressively moves to the cloud, you must assure visibility along with your cloud native strategy. You will not only need to understand the business value of developing and deploying cloud native applications, but also how to deal with the strategic IT management challenges in multi-cloud and hybrid cloud environments. In the simplest terms, a cloud native strategy enables services to be used across applications and other services. It's about how applications are created and deployed, not whether they sit on public, private or hybrid cloud. Cloud native applications are designed to scale horizontally rather than vertically. Scaling these applications relies on technologies and concepts such as agile, DevSecOps, multi-cloud, hybrid cloud and microservices. When we look at enterprises' cloud native journeys, we find there are four technology enablers. The first one is Kubernetes.
Kubernetes provides the means by which to automate the deployment, management and scaling of application containers across the infrastructure. Second is service mesh. A service mesh is a way to control how different parts of applications share data with one another. Third is serverless computing, which allows you to run applications and services without the need to provision servers. Fourth is a cloud control plane. A cloud control plane provides a consistent development and operation model experience across hybrid and multi-cloud environments: one DevSecOps pipeline and one place to manage all my clusters and policies. When I work with clients on getting them proficient and able to scale with a cloud native approach, I start with this five-step scale. There should be no surprise that most clients today are at level two. At level two, they have a standardized approach, a set of tools for certain functions, and they start scanning their containers at build time and at rest. My goal is to get them to level four, where they have a mature capability in place to support DevSecOps, are becoming skilled subject matter experts, and have run-time scanning enabled at scale. And ultimately, they can scale to level five, where they're doing really cool things like DataOps and blue-green deployments. As stated, there are five stages within the cloud native maturity model. While you may be at stage two or five for one application, you may be at a different stage for a different application. Keep this in mind as you're defining and following your stages of maturity. So let's break down these stages further. The first one is level one: you have a baseline cloud native implementation in place and are in pre-production. Level two is repeatable: this means that the cloud native foundation is enabled and you are moving to production. The third level is consistent: your competency is growing and you're defining processes for scale.
The fourth level is optimized: you're improving security, policy and governance across your environment. And last, we have level five, leading: you are revisiting decisions made earlier and monitoring applications and infrastructure optimization. Based on the cloud native maturity model, we have developed the Cartografos project. The aim is for organizations to start the cloud native journey with a real framework on how to adopt these new applications and platforms. The authors wanted to provide a cloud native framework for success. We want to educate and inform users with effective and practical guidance to help them understand the cloud native ecosystem. We do this by collaborating with groups inside and outside of the CNCF. So please check out the links provided at your convenience. On the edge: when we looked at the needs for the client in this use case, we compared k8s and k3s. k3s allows small applications to run on edge clusters and IoT devices, and that's great for running containers on the edge. However, for this client, we needed the scalability and the ability to run workloads across multiple environments, while k3s can only host workloads running in a single cloud. k8s gives you the ability to scale an application based on the quantity and quality of your incoming traffic. For these reasons, we chose k8s for our solution over k3s. The Kubernetes platform chosen for this project was Google Anthos. With Google Anthos, we can manage Kubernetes clusters whether it's Amazon EKS, Microsoft AKS, Red Hat OpenShift, and so on. I'm able to manage all of those clusters in one single pane of glass. And this also includes, by the way, registering Raspberry Pi clusters or clusters on my own laptop. I'm able to take all these different environments and place them in one platform with Anthos. For the client, this was a home run for us. Awesome.
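As a side note on the traffic-based scaling John describes: in Kubernetes this is what the Horizontal Pod Autoscaler automates, and its documented replica calculation can be sketched in a few lines. The bounds and metric values below are illustrative, not from the client's configuration:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """Replica count per the documented HPA rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the configured min/max bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# CPU running at 90% against a 50% target across 4 pods -> scale out to 8
print(desired_replicas(4, 90.0, 50.0))
# Load drops to 10% of target -> scale back in to 2
print(desired_replicas(8, 10.0, 50.0))
```

The same calculation works against custom metrics such as requests per second, which is the "quantity of incoming traffic" case mentioned above.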
So as we discussed earlier, we are looking to deploy into a lot of different customers' environments, and based on their preferred environment, we're looking to meet their self-hosting needs. Now, how do we do this? For this specific installation, as John mentioned, we use Google Anthos. Anthos helps by supporting a single flavor of Kubernetes across the cloud and on-premise environments. This single flavor of Kubernetes allows the application teams to have minimal considerations when deploying into the different environments and to continue focusing on the application instead of programming for the changing environments, which definitely helps our platform team. Now, in the following slides, we are going to break down the solution by discussing the following: how we brought DevSecOps processes locally, and how a local Windows installer enabled a low-touch provisioning process. Additionally, we're going to discuss the tooling that supported the environment-agnostic deployments. And finally, we're going to walk through the additional considerations that we had to look at when deploying the data platform on a bare metal solution. Next slide. Awesome. So how did we take these software best practices to our client's laptop? Next slide. Automated deployments into private clusters with a focus on security are critical to having a mature Kubernetes deployment, as John was talking about earlier. In the industry, it is fairly standard practice to run either a self-hosted DevOps agent within the virtual network to access the private cluster, or to access the cluster through a bastion host. The DevOps agent and bastion host both provide secure ways to connect to the cluster, but they also help provide a consistent environment, such as an operating system with tools already installed on it, in order to have a consistent deployment into the cluster. This is key when considering that our deployments are going to be happening from a Windows computer that does not have any prerequisites installed on it.
So the challenge was: how do we provide a low-touch customer experience, follow DevSecOps best practices, and securely connect to private clusters, all while running from a Windows computer? We achieved this by building an installer that deploys what we call an orchestration VM into the customer's network. This orchestration VM serves as a secure control plane for deployments to be consistently run regardless of, and fully independent from, the customer's computer. This virtual machine serves as a cross between a DevOps agent and a bastion host, as we were discussing on the previous slide, and allows us to bring those DevSecOps processes locally to the customer's computer. Now, let's break down what's happening in this image. Starting at step one, the customer is going to submit a form with specific inputs that the IT group can put in: CIDR block ranges and additional configurations that we allowed them to customize in order to meet their IT group's needs. These inputs that they fill into the form feed our infrastructure as code scripts, which will then be executed by the installer. So once the customer fills out this form with the IT group's specifications, the form submission triggers the beginning of building out the environment, everything that's on the right-hand side of the image. Kicking this execution order off is similar to kicking off a DevOps pipeline. It's then going to build out the virtual network, which will end up holding the remaining infrastructure, including the orchestration VM, which is where the rest of the script will run. Once the VM is up and running, all the build packages will be transferred from the installer, so locally from the customer's computer, to the virtual machine, similar to build packages being provisioned onto a DevOps agent in a pipeline. Once all these prerequisites are met, the infrastructure as code scripts are executed through the orchestration VM.
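As a rough sketch of step one, the form inputs can be validated and handed to the infrastructure as code scripts as a Terraform tfvars payload. The field names (`vnet_cidr`, `cluster_subnet_cidr`, `region`) and the default region are illustrative assumptions, not the client's actual schema:

```python
import ipaddress
import json

def render_tfvars(form):
    """Validate the IT group's form inputs, then render them as a
    Terraform .tfvars.json payload for the installer's IaC scripts.
    Field names here are illustrative, not the client's real schema."""
    # Reject malformed CIDR ranges before any infrastructure is touched.
    for key in ("vnet_cidr", "cluster_subnet_cidr"):
        ipaddress.ip_network(form[key])  # raises ValueError if invalid
    return json.dumps({
        "vnet_cidr": form["vnet_cidr"],
        "cluster_subnet_cidr": form["cluster_subnet_cidr"],
        "region": form.get("region", "us-east1"),
    }, indent=2)

print(render_tfvars({
    "vnet_cidr": "10.20.0.0/16",
    "cluster_subnet_cidr": "10.20.4.0/22",
}))
```

Failing fast on bad inputs at the form stage is part of what keeps the process low touch: the customer gets an immediate error instead of a half-built environment.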
Not to hit the nail on the head again, but we're bringing the DevOps process locally, running just as you would run a pipeline, but controlling it all from your local Windows computer. This remote DevOps process is mirrored across all the cloud environments to run the infrastructure as code scripts securely in the various clouds, meeting the customer's needs and allowing the IT group to take their customizations, plug them into what's essentially a Google form, and then walk away with a secure Kubernetes deployment that is running the data platform. Logging and monitoring are already in place, and all the operations are connected back through Anthos to GCP, which then allows the operations to be managed and handled externally from our user Albert and from his IT group. Everything is securely deployed but managed by our client. All right, so what tools and considerations need to be in place in order to allow this data platform to run in all these various environments? Next slide. Tool selection was critical for the success of this cross-cloud and cross-environment deployment. Choosing the right tools and similar processes allowed us to support solutions between the clouds or, at the very least, mirror the steps. Here's a quick outline of some of the tools that we selected. We are using Stackdriver, or the Google Cloud operations suite, to build out the logging and monitoring in our various environments; this plugged in well with Anthos and the sort of monitoring and logging that it already provides. However, take an environment such as a bare metal environment where the customer is choosing to deploy on a single server and have everything within their own virtual network; they already have fairly strict considerations and requirements that their IT group has set forth. So a customer like this may not want their logs to be sent externally, for security reasons, just to meet their IT group's requirements.
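That per-environment tooling decision boils down to a simple selection rule, which could be sketched as follows. The function and its input are illustrative; the tool names assume an in-cluster Prometheus-plus-Loki stack for the logs-stay-local case:

```python
def logging_stack(logs_must_stay_local):
    """Mirror the logging decision described above: ship to the Google
    Cloud operations suite (Stackdriver) by default, but fall back to
    an in-cluster stack (assumed here to be Prometheus for metrics and
    Loki for logs) when the IT group forbids sending logs off-network."""
    if logs_must_stay_local:
        return ["prometheus", "loki"]
    return ["google-cloud-operations"]

print(logging_stack(False))  # the default, Anthos-integrated path
print(logging_stack(True))   # the strict bare metal / single-server path
```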
So that's where Prometheus and Loki may come into the mix, enabling the customers to have a deployment that is acceptable to their IT group's requirements. As alluded to earlier, Terraform was our primary infrastructure as code tool for configuring the various cloud environments, and again, all the cloud environments were run through that same similar process that mirrored each other. As we were looking to build out the cross-cloud environments and allow the configurations to be customizable to meet the IT group specifications, testing served as a critical pillar to rapidly validate the deployment. So later on, Terratest was added in order to validate the cloud environments prior to the execution of the installer. That ended up being super handy. All right, next slide. All right, so bringing this back to Albert again: Albert and his IT group want everything securely deployed locally, but they don't want to be in charge of the operations. Albert has other things to be worrying about; he wants to be running his machine learning processes and using the application, but he doesn't want to worry about the day-to-day of managing a Kubernetes cluster. So in order to support customers like Albert, we built out a managed support model where all the clusters link back to the same GCP project. This allows a single team to manage the various deployments with a single-pane-of-glass viewpoint, allowing the logging, monitoring and alerting configurations to continue to be extended across all the clouds, and providing a consistent management experience for all the customers regardless of the cloud or environment of their choosing. Okay, so I'm finishing this section up by talking a little bit about the bare metal use case and some of the decision points that we came across in order to build out this environment for our data platform.
So Anthos on bare metal provides the benefits of the cloud, bringing them into an on-premise environment, but it doesn't fully remove the challenges that need to be addressed when building out a bare metal solution. Here are some of the decisions and trade-offs that we went through in order to support this data platform locally. Networking is one of the first decisions you need to work out when gathering your requirements for bare metal environments: looking to understand whether you are going to be running a single cluster, or whether you're going to be managing more than one cluster on different servers that will need to be speaking to each other. In our specific use case, the customers are going to be running within a single server, and the platform can be fairly isolated within itself, so we did not need to build out a bridge network. Instead, we were able to use the configuration of a standalone cluster on the server and have everything encapsulated within itself and exposed on a load balancer. The next thing that we found helpful to build out and gather requirements on for our cluster was the resource requirements: understanding what the persistent volume claim needs are, and how this plays into the sizes of the nodes that you're going to be provisioning, was crucial in order to properly partition everything on the server. Our data platform has an additional consideration in that a lot of raw disk storage is needed in order to run the platform. We also needed to take into consideration the additional space needed to run NFS volumes and make sure that everything is securely and properly partitioned, so that the data platform can run without issues, and continue to run without issues, as more and more data gets gathered and analyzed.
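The storage sizing exercise just described is back-of-the-envelope arithmetic, which could be sketched like this. Every number below (NFS overhead, growth factor, system reserve, PVC sizes) is an illustrative assumption, not the client's real requirement:

```python
def required_disk_gib(pvc_requests_gib, nfs_overhead_gib=50.0,
                      growth_factor=1.5, system_reserve_gib=100.0):
    """Rough single-server disk sizing: sum the platform's persistent
    volume claims, pad for future data growth, then add NFS volume
    overhead and an OS/system reserve. All defaults are illustrative."""
    data = sum(pvc_requests_gib) * growth_factor
    return data + nfs_overhead_gib + system_reserve_gib

# Three illustrative PVCs of 200, 100 and 50 GiB -> 675 GiB to partition
print(required_disk_gib([200, 100, 50]))
```

Doing this sizing up front matters here because, as mentioned earlier, some bare metal customers receive a pre-loaded machine and can't easily add disks after the fact.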
So that leads into the final point, which is crucial for your application teams to consider when building out this environment: how are we making sure that the single server is not a single point of failure, and that the disaster recovery tools and systems are in place? So yes, everything's running on a single server, but it's also key to be backing up externally from this server, so that if something goes wrong, it's not one domino taking down everything. So yes, everything's running on one server, but we're also making sure that everything running in this data platform for that specific customer is not exclusively dependent on that server. And so, yeah, that's how we looked at the bare metal needs for customers like Albert, taking their IT groups' considerations into our solution. Now I'm going to toss it off to Micah to talk about lessons learned. Can you guys hear me okay? Good. Thank you, Hillary. Yeah, so, you know, we learned a lot on this project. We also had a blast; it ultimately was a lot of fun learning about the product and the technology behind it. Really all of us, I think, learned specifics about the technology; we were solving for something that we hadn't seen done anywhere else in the industry quite like it. So, number one, I'll just start with this: in general, if we talk about tenets, what a tenet is, right, is a principle that really helps us, come decision time, commit one way or another. When we come to a fork in the road and we're torn whether we should go right or left, we come back to our tenets to remember what's most important to us.
Doing so is especially important when designing a system from scratch and on a short timeline, or really in any agile, fast-paced environment, so it was crucial to have these tenets for considering trade-offs, as that really is an everyday occurrence when you're starting from scratch. So first of all, number one: the very first question that you want to ask is, what is most important to our customer? We can call this our ultimate tenet of tenets, right, what Jeff Bezos and crew have popularized as customer obsession. As I mentioned before, for consultants our client is our customer, but ultimately this really comes down to the end user of the product and what is most important to them. Keeping this in mind throughout the project was very helpful to anchor all parties, tech, business, product, the whole team, on what was most important. Second is that we were intentional about having a bias for action. On our project specifically, we had a fairly junior team without a ton of experience directly within the team, and so it was all the more important that we tested our theories, that we asked for advice from others outside of the team, but also that we were intentionally confident. So, junior or not, I think these tenets are worth sharing because they're applicable not just to us but hopefully to a lot of you, across different projects. It may at times be easy to second-guess yourself, which isn't a bad thing in itself, but if you've already tested the theory and/or asked for advice, then remember that you're here for a reason, and go forward with that. You'll never be 100% sure about a decision or a technology trade-off, you know; there are pluses and minuses, and so at the end of the day we have to make a call, go for it, and be confident with it.
And knowing that, it's also okay to fail, or to realize later that you want to pivot and make a different decision. Number three was that we remained faithful to our definition of done. This was something that really served us well. Considering least privilege each time we rolled out a new change to the infrastructure, not that we would have it 100% tied up, but at least not leaving it to be a snowball at the end that needed to be addressed, I think saved us a lot of time. In addition to that, similarly, test-driven development, writing those tests early on and up front; and the same goes for automation and low touch. Adding all of these to our definition of done from the outset was really helpful. Before we called any user story completely finished, we made sure that we had done these things, which also made sure we had allocated for them when it came time for sprint planning. I will say that we later, and probably up front, should have added documentation to this definition of done, or, I don't know, I feel like even if it was there it was always hard to actually get done. So that ended up being a bit of a sprint at the end, which is something that we want to get better at in the future. So what you're looking at there on the top is that big companies, we recognize, have their perks, and I guess one of the decisions that continued to serve us down the line throughout the project was deciding up front to use Google's Anthos product for our central control plane. As I said, it really served us because we always had support. There were continual product iterations occurring to Anthos; I mean, this was, you know, a big product for them, and it's only a couple of years in now for them.
And ultimately, right, it's like an amalgamation of a number of other services that they had already built for Kubernetes support. But the continual support given by the team solved not just the needs that we were coming up with; they even looked ahead at some unforeseen problems which we hadn't considered. When we did bring them up to the Google team, they were already considering them as well, such as deploying to a customer in China, where laws and regulations were vastly different than deploying here in the US. So this was just some of what we learned and wanted to highlight as the lessons that we'll take forward to other projects as well. It doesn't mean that it has to be done this way, but, you know, it ultimately was helpful. So that is it. We won't ask for questions because it's not live, so all we can really say at this point is thank you, and we appreciate everyone's time here and pretend participation. So thanks to all from Accenture, from Hillary, John and myself, and have a good rest of your conference. Bye. Thank you.