Hi! Good morning, good afternoon, good evening, everybody, wherever you may be around the world. Thanks for joining in for my talk today. I'm going to be talking a little bit about how to manage data consistency among microservices with Debezium. The agenda for this talk is going to look a little bit like the following: talk a little bit about what microservices are, how microservices architectures have really fundamentally changed how we design and architect our applications, what the challenges are around how we currently manage data around our services while we've been moving to this new paradigm of microservices, as well as some of the solutions that we've seen in the industry today and some trends moving forward. Then I'll propose a new way of thinking, using change data capture with Debezium, to solution for some of that. And if time permits, hopefully I'll be able to share a quick demo with Debezium and show you how it looks and how it works. And then finally, hopefully I'll leave some time at the end to answer any questions that you may have. There should be a text box on your screen where you can enter any questions, so while I'm going through this presentation, please feel free to post any questions you may have there, and at the end of the presentation hopefully we'll have time to go through and answer some of those.

So without further ado, before I continue with the rest of my presentation, just a little bit about myself, a quick introduction. My name is Justin Chow. I am a software slash DevOps engineer working at Optum, which is a healthcare IT company under UnitedHealth Group. This is my second time attending the Open Source Summit. My first time was last year in San Diego, and I was just blown away by the open source community and everybody and all the interactions that I had there, so I just had to come back for a second time. And I was fortunate enough to be invited back this year as a speaker, so I'm very excited and honored to be here speaking with you all. I did graduate from the University of Texas at Austin, so I was very much looking forward to going back to my alma mater and hitting up my favorite breakfast taco joints. But unfortunately, it doesn't look like that's going to be able to happen, which is probably a good thing, seeing as Texas is quickly becoming the new epicenter in the United States for the coronavirus. So I'm glad we're all still able to meet together virtually. I think the Linux Foundation team and the platform team have done a tremendous and amazing job at completely restructuring the way this conference is being done onto a virtual platform. If you all haven't yet gone over on Slack and given them a quick thanks, please be sure to do that, because they have worked tirelessly to ensure that this conference goes off without a hitch. Real quickly, I have my LinkedIn down there; if any of y'all want to connect with me, feel free to connect with me on LinkedIn.

So, continuing on, a little bit about microservices. I'm sure most, if not all, of us have heard about microservices, but just to make sure we're all on the same page for the rest of the presentation, I thought I'd do a quick, brief introduction on what they are and why they're so big in the industry today. So, five pros for why we're moving to microservices.
Number one, you get improved fault isolation. What that means is that we've broken down these large monolithic applications into small, compartmentalized, separated services. So if any one of those microservices fails, for example, we have some level of fault isolation where we don't lose the entire application — the entire system doesn't go down. And we can do so through the use of service meshes, through integrated circuit breakers across these distributed services, things of that nature.

Number two, we're able to eliminate vendor and technology lock-in across our services. And what are some of the advantages that gives us? Well, now that you have these separated, isolated services that can be developed independently from each other, you can choose whatever technology stack you want to use for your different services. So say you have a Java service where you're trying to leverage the Spring Boot framework for a particular capability, versus another set of solutions where you may want to use Scala or Python or whatever other languages or technology stacks you may want. You can do that, because you're able to separate these services out, develop them independently, and deploy them independently with this kind of architecture.

Thirdly, this promotes ease of understanding. What I mean by that is when you've got these services that are separated from each other, you get smaller code bases, and smaller code bases are easier to understand if you're trying to onboard new developers to your team. And along the same lines, with smaller code bases and less scope within those code bases, you get decreased dependencies and risks, which lead to smaller, faster, and more iterative deployments. That's the idea behind CI/CD and DevOps processes, and so those go hand in hand together very nicely.

And finally, the fifth point I'd like to make is scalability. Because you've got all these services that are separated from each other, you're able to more easily scale out the services that you most need at the appropriate times, as opposed to trying to scale out the entire application as a whole. And this can lead to tremendous cost savings, because now, if you've got monitoring and observability in place, you can identify where the bottlenecks are in your microservices architecture and selectively scale out those bottlenecks and those services at the right time, rather than having to scale out the entire system as a whole.

So those are five pros for why we use microservices. Let's talk a little bit about how we've come to design these microservices architectures. How do you know which services to separate from another service and develop independently? Well, largely in the industry today, we've seen the idea of domain-driven design. This is, I think, quite an old line of thinking, which might actually predate microservices architecture, but it's still very relevant today in how we design our applications. And what domain-driven design basically means is that we're modeling our domains and our services around business use cases — using independent problem areas, or, as you'll often hear, these bounded contexts around business use cases, to isolate and decide which services to separate from each other.
So to make that into a little bit more of a concrete example, what you would be doing is decomposing these large monolithic systems into separate, distributed services that are built around these independent, bounded-context business domain capabilities. So say you have a business capability under domain A, for example, that you need to solution for — you would develop a specific service, we'll call it microservice A, for that. Likewise, if you had another independent, bounded-context business use case, we'll call it domain B, you would design a set of services, or microservice B, for that particular domain, and likewise for C, and so on and so forth. And this way of thinking has really fundamentally changed how development teams form, because now, with the move to agile and scrum teams and things of that nature, we're forming these teams in a way where they can be independent of each other and each team autonomously owns a business domain capability. That has really sped up the way we do application development, because now each individual team is responsible for understanding a particular bounded context and independent problem area of the business, which leads to better understanding of the business requirements, which then leads to better solutioning, which then leads to faster development time, and you get the idea from there. So that has really fundamentally changed application development.

Next slide. But what about the data? Because while we've been moving to the idea of using microservices and developing these solutions for independent problems for our businesses with autonomous teams, the data ends up looking a little bit like this, where we've got all these different microservices across different domains, but they're using a shared database. There are a couple of reasons for how we've come to end up where we are right now. So traditionally, some arguments for using a single database include: well, it can be difficult to engineer data backup and restore procedures if you've got multiple databases, so having a centralized location for your data, having a shared database, just optimizes for that problem. Number two, maybe you've got a team of DBAs who may be constrained resource-wise and can only maintain a single database, and therefore you're constrained to using a shared database architecture across all the services that you're trying to manage. A third reason might be data replication: it's easier to manage data replication and the high availability of your data if you've got everything in a single shared database rather than multiple different databases.

But what are the challenges that come with using this shared database kind of architecture? If you're a seasoned developer, some of these might resonate very much with you. Development-time coupling, to give you an example. What I mean by development-time coupling is: let's say a developer is working on a particular service, service A, and some of the changes needed for a feature the developer is working on require schema changes in the database. Well, now he or she has to coordinate with the developers in domains B and C — so microservices B and C — on those schema changes.
And so that can cause a lot of dependency, and it slows down development time, because now you have to meet up with those developers and make sure that any schema changes you're making don't affect them — or, if they do affect other services, you have to make sure you coordinate changes with those other services before you release your service into production. And a lot of times in large enterprise corporations, if you're working in these large-scale agile release train environments, for example, we introduce scrum ceremonies, like a scrum of scrums kind of meeting, where scrum masters from each team meet together once a week or so to discuss these dependencies. That has been a solution we've seen in the industry, but the problem still remains that we've got dependency and risk that slows down development time.

Number two, we get runtime coupling. What I mean by that is: let's say, for example, service A has a long-running transaction that holds a lock on the database, where that prevents transactions from other services from being able to, say, access that particular database at runtime. And so you get those dependencies and risks as well.

And finally, the idea of vendor or technology lock-in. Similar to what I alluded to before with microservices, if you're using a single database design, one single database might not satisfy the data storage and access requirements of other services or future services that you're trying to solution for. To give you an example, service A might benefit very greatly from a relational database management system like MySQL, say, to serve its data and provide a particular solution for that business capability. But what if you're trying to develop a recommendation engine, and you're trying to develop a microservice B to serve that capability, and you might want to use a graph database for that recommendation engine? You can't with a shared database design, because if you're already using a MySQL database for your shared database, you're stuck with MySQL. Likewise if you're trying to use NoSQL for microservice C, or even if you're trying to build out — I'll give you another example — search capabilities. A lot of times the popular tool of choice for building out search APIs and capabilities is something like Elasticsearch or something along those lines, or Lucene. And if you've got a shared database design, well, again, you're stuck with the initial database that you chose.

So what are some ways that we've seen developers cope with some of these constraints? Let me click to the next slide here. Well, one idea is to use service interfaces. This is nothing new — we've seen the idea of using service interfaces and service-oriented approaches for a very, very long time now. And the idea is to encapsulate the data access in a service interface. What this enables you to do is it provides opportunities for reusability with other service modules that you may be building, which is great. But the challenges with using service interfaces are, one, service interfaces inherently hide the data, and sometimes you don't want that. You want the freedom to slice and dice your data like any other dataset. For example, if you were trying to use data for some sort of machine learning or AI, data scientists like to be able to access that raw data so they can manipulate it and massage it as they need to.
Number two, scalability. Service interfaces aren't the most scalable, because what ends up happening is, as you grow out your service interface and expose an increasing set of functions, sometimes it starts to look like its own homegrown database, potentially becoming a monolithic service in and of itself, and it becomes very difficult to maintain and manage and develop upon. And data volume amplifies this service boundary problem even more, because the more shared data is hidden inside that service boundary, the more complex the interface will become and the harder it will become to join datasets across your different services. So can we do better? Are there other solutions that we've thought of?

Another idea is: well, why don't we apply the idea of having separate domains for our services to the data as well? So have these so-called micro databases, if you will, and then have an architecture where each domain holds its own data and each service is responsible for its own data. But the challenge here is that, more often than not, services aren't so isolated and independent from each other that you can have this kind of architecture without needing data from other domains. To put it in other words, more often than not, a service in domain B will need data from another domain in order to fulfill the solution it needs to provide for that capability. So what happens is you start doing these ETL processes to extract and move whole datasets across your domains and across your services, to be able to reuse and share data across domains as you need it. And what ends up happening is that different services make different interpretations of the data, which means that they keep that data around and that data is altered and fixed locally, and pretty soon that data doesn't represent the source dataset much at all. What we end up with at the end of the day is divergent data across our services, and divergent data is very difficult to fix in retrospect. Nobody likes bad data. So can we do better?

If we take a step back and summarize a little bit of what we've talked about so far and what the requirements are that we're trying to solution for here, they are: number one, we want some decentralized approach to how we access and manage the data, much like how we're managing our services today. Number two, we still want some degree of centralization, so that we're able to maintain that golden record and know what the source of truth is if we're trying to share data across our services. And number three, we want the ability to maintain data consistency across our distributed systems if we're sharing data across our services. We don't want to be reliant on large ETL batch processes that only happen every 24 hours or once a week, waiting on those pipelines to complete and finish before we're able to ensure we're using the right and most up-to-date data for our different services.

So I was reading a blog post earlier in the year — I don't remember exactly how long ago it was — a post by Zhamak Dehghani, who at the time was a principal technology consultant at ThoughtWorks, and she had proposed the idea of: okay, let's apply the idea of domain-driven design and bounded contexts to the way we understand data ownership. So have those micro databases that sit under each business domain capability.
But at the same time, make the data for each of those domains available through a shared journal — a centralized journal or a log, if you will — so that other services can access that shared data. And so what this means is now you've got this decentralized approach where each service is able to manage its data locally and optimize that data locally for its own use. So domain A can choose whatever sort of database it wants to use, domain B can choose whatever database it wants to use, and so on. But you've also got this centralized log or journal that your different services can subscribe to, or consume from, for any data that other services might be trying to make available. More often than not, we see this kind of journal, this distributed log, implemented using Kafka as the streaming platform, and so what ends up happening is you might have your services in domain A making their data available through streams to the log, through a streaming platform like Kafka. And what this provides, then, is a way for us to scale out the underlying way we access the data, because Kafka is a pretty solid platform that's been tested many times over the years in production. We know that it's scalable — we can consume from it and produce to it with hundreds and thousands of different services. And it's also retentive and replayable, because you've not only got this platform that you can subscribe to for data changes, but those data changes can be kept in Kafka and can be replayed from any point in time if you're trying to, say, create new services and you want those new services to be able to access previous data.

And so what this introduces is the idea of stream processing, which looks a little bit like the next slide, like this. So what does this actually look like? Does this mean that each service needs to publish data changes not only to its own database but also to this centralized journal or log? Is this what it looks like, where you have this idea of dual writes, where you have to not only write to your own database but also write to this journal, this centralized log? Well, if any of y'all have worked with database designs before, you know that dual writes are definitely not ideal, because what ends up happening is, if your first transaction to your database succeeds but your second transaction to the second data source — in this case that distributed log, that streaming platform — fails, well, now you're back to the issue of having data inconsistency across your services, which is what we're trying to solution for in the first place. So we don't want to use dual writes; they're not ideal. Can we do better? Can we solution for this?

And this is where Debezium fits in. Debezium is a set of connectors that can be deployed via the Kafka Connect API, and where it sits is it helps interface between the database and Kafka, or the distributed log. What this eliminates is that dual write problem, because now your services only have to concern themselves with writing those data changes to their own database, and then Debezium picks up on those data changes from those databases and propagates those changes down to that shared journal, to that distributed log, if you will — in this case, Kafka. And Debezium does this with very low delay. It also captures all data changes — that includes any creates or inserts, any updates, any deletes — and it requires no changes to your underlying data model. So, great solution.
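Just to make the "subscribe to the journal" idea a little more concrete: tailing one of those shared topics can be as simple as pointing the stock console consumer that ships with Kafka at it. The topic name below follows Debezium's default server.database.table naming convention and is purely an illustrative example, not something from the slides:

  bin/kafka-console-consumer.sh \
    --bootstrap-server localhost:9092 \
    --topic dbserver1.inventory.customers \
    --from-beginning

The --from-beginning flag is the "replayable" property in action: a brand-new service can start from offset zero and rebuild its local view of the data from whatever history the topic retains.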
Let's take a little bit of a deeper dive into its architecture, though. How does this actually work? How does Debezium listen to and access those changes, and how does it know when those changes come in from the database? Well, the way Debezium works is it listens to the log files — the transaction logs that get written to whenever there's a transaction against that database. So in the example of MySQL, it'll listen to the binlog; I'll show you a little bit of what that looks like in the demo later. And then what happens is Debezium takes those changes and creates a change event that it propagates to a Kafka topic. So now this change has been sent over to Kafka, and it lives on a Kafka topic in a Kafka broker, where it can then be consumed by consuming microservices or even other data sources as well. Some of the connectors that Debezium currently has work with MongoDB, MySQL, PostgreSQL, and SQL Server, and there are three that are currently in an incubating status, I believe: Oracle, Db2, and Cassandra. So most, if not all, of the popular databases that we see everybody using today, Debezium supports.

So, next slide — there we go. This is another architecture diagram that I just pulled off of the Debezium documentation; it goes into a little bit more detail on what Debezium actually does. Let's say you've got service A and service B. Service A is using MySQL, service B is using Postgres, and you want to be able to capture any of those data changes so that you can propagate those changes, consume them in other services to maintain consistency, or even replicate that data for any other purpose. So what happens is you deploy Debezium as a separate component, and then you have it listen to the transaction logs from your database. As those changes come in, it will create a change event that gets sent over to a Kafka topic, and by default Debezium will create a separate topic within Kafka per database table. You can of course change that with the multitude of configuration options you can provide Debezium with, but by default, that's what it does. And then from there you can either consume it with a microservice that you may have yourself, like shown in the previous diagram, or — if you're familiar with Kafka — Kafka also provides a wealth of additional Kafka connectors, so you can use those connectors as well to stream that data to downstream data sources. That may be Elasticsearch, maybe another MySQL database that another service is using, it could be a data warehouse — whatever you want.

So what does this change event actually look like? To provide you with another example — I again just took this off of the Debezium documentation — let's say we had an update statement that was made to a MySQL database. On the left-hand side of the slide, let's say you had that update statement: we're updating a customer, where we want to update the first name of that customer. What then ends up happening is Debezium will capture that, and it will create, again, that change event that gets propagated down into a Kafka topic. That change event looks like what you see on the right-hand side here, where it will give you the schema — so not only does it capture any data changes, it also captures any schema changes as well. But then, if you look in the payload section here, we've got a couple of interesting points. First, it gives us the before picture of what the data looked like before that change happened. It then gives us the after picture, showing what the data looks like after that update command has been made. Thirdly, it gives us additional metadata regarding where that event happened — did it happen in the MySQL database, did it happen in the Postgres database. And fourthly, you can see the op field — op stands for operation, I believe — where the u stands for update, so this is an update operation, and then the timestamp at which Debezium captured that change event.
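To give a rough feel for the shape of that event, here's an abridged version of the kind of message value that lands on the topic — the row values are illustrative, borrowed from Debezium's documented customers example, and the large schema block is omitted for brevity:

  {
    "payload": {
      "before": { "id": 1004, "first_name": "Anne",       "last_name": "Kretchmar", "email": "annek@noanswer.org" },
      "after":  { "id": 1004, "first_name": "Anne Marie", "last_name": "Kretchmar", "email": "annek@noanswer.org" },
      "source": { "connector": "mysql", "name": "dbserver1", "db": "inventory", "table": "customers",
                  "file": "mysql-bin.000003", "pos": 484 },
      "op": "u",
      "ts_ms": 1595000000000
    }
  }

So before and after are the row images, source tells you which connector, database, table, and binlog position the event came from, op is the operation type, and ts_ms is when Debezium processed it.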
So it gives you a wealth of information that you can now use in any of your downstream or consuming services that are subscribing to these change events, to propagate those data changes and update the data they need locally for those services as well.

So, how are we doing on time? Yeah, I think we've got time for a quick demo. Let me go ahead and share my screen. OK, so how does this all actually work in real life? I'm going to hope the demo works — live demos have never worked out for me — but what we're going to do here is this: I've got a Kubernetes cluster running on Azure, so it's just Azure Kubernetes Service, and if you look at the left-hand window here, you'll see I just ran a command to get pods. Right now I'm just running a single Kafka broker — a single node — and an associated ZooKeeper, so a very simple, single-broker Kafka cluster. What I'm going to do now is create a MySQL database. So let me navigate to my deployment manifest. This is what it looks like — pretty simple; if any of y'all have worked with Kubernetes manifests before, this should look relatively familiar to you. I'm just pulling a very generic, basic MySQL image from Docker Hub, and I'm going to go ahead and apply this manifest so that it creates a MySQL database. You should see on my terminal window here that MySQL pod coming up, and it is now running. Take a look at the logs, see it bootstrap itself, and it is now ready for connections. Excellent. So if we exec into this pod real quick and log into MySQL, you can see that it's a very basic, vanilla, out-of-the-box MySQL instance, only just deployed, with no data in it at all.

So what I'm going to do now is populate it with some data. I've got a couple of simple SQL statements here that will create a few tables and an inventory database that we can use for the rest of our demo, so I'm going to go ahead and copy and paste those. Now if I do a show databases — there we go, we've got an inventory database. If we do a show tables — there we go, these are the tables in our inventory database. And if we do a select from customers — there we go, those are the rows in the customers table in our MySQL database. Cool, so now we've got some test data to work with.

What we're going to do now is deploy that Debezium connector so that we're able to start capturing any data changes and propagating them to our Kafka broker. So I've got a manifest file here for Debezium — there we go. Debezium has a couple of images available on Docker Hub, so right here you can see I'm just pulling an image from Docker Hub for the Debezium connector. Actually, you know what, before I go ahead and create the connector, let's take a look in Kafka and see what topics we have. So go ahead and use the Kafka console tooling to list out the topics — and it's a fresh Kafka installation, right, so there's nothing there, no topics yet that have been created. So what I'm going to do now is go ahead and apply the manifest file to create Debezium.
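That manifest isn't really legible on a recording, so here's roughly what a minimal version of it could look like — the image tag, service name, and topic names below are placeholders I'm assuming for illustration, not the exact values from the demo:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: debezium-connect
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: debezium-connect
    template:
      metadata:
        labels:
          app: debezium-connect
      spec:
        containers:
          - name: connect
            image: debezium/connect:1.2      # Kafka Connect with the Debezium connector plugins included
            ports:
              - containerPort: 8083          # Kafka Connect REST API
            env:
              - name: BOOTSTRAP_SERVERS      # the single-broker Kafka cluster from the demo
                value: "my-kafka:9092"
              - name: GROUP_ID
                value: "1"
              - name: CONFIG_STORAGE_TOPIC   # Connect's internal bookkeeping topics
                value: "connect_configs"
              - name: OFFSET_STORAGE_TOPIC
                value: "connect_offsets"
              - name: STATUS_STORAGE_TOPIC
                value: "connect_statuses"

Those three storage topics are exactly the ones you'll see created in Kafka as soon as the pod starts, which is what shows up in the logs in a moment.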
And again, you should hopefully see that pod come up — there we go, Debezium is now running. If we take a look at the full logs, we notice a couple of interesting things. What Debezium does when it first starts up is create a number of Kafka topics to store its metadata — the offsets, as well as any other storage that it needs. Let me see if I can show you in the logs where that actually happens. So if you look at the logs here, what it does is it creates these topics against that Kafka broker that I currently have it pointing to. And so now, if we take a look at Kafka and list out the Kafka topics, you'll see that those topics have been created: the config topic, the offsets topic, and the statuses topic.

So we have Debezium running. What we need to do now is bootstrap it with a connector. So come back over here to where I have Postman — and actually, what I'll do is minimize my terminal window so you can see the logs running side by side here; those are the Debezium logs on the right-hand side of the screen. The way you bootstrap a connector for Debezium is through an API that gets exposed. If we just do a GET against the root URL, we get the version of the Debezium connector that we're running. We can also get the connectors that we currently have registered with Debezium — and we don't have anything right now, because we only just deployed Debezium. What we want to do here is go ahead and create a connector: an inventory connector of type MySQL — it's a MySQL database, so we want to use the MySQL connector — and we've got a couple of configurations here for being able to access the MySQL database. For simplicity, for the purposes of this demo, we're just using the root user; of course, in production you would have a separate service account that you would give the required permissions to access the database. And we're also telling it to listen against the inventory database, which we just created, for any database changes.

So what we're going to do here is execute this POST request with that JSON body — and you can see the logs running on the right-hand side of the screen. What just happened? Well, to summarize, when you register that connector, there are a couple of steps that Debezium will go through — you can see here, I just highlighted step 0, 1, 2, 3, 4, 5 and so forth. If I go ahead and maximize my terminal window again so it's easier to see, and just go through these steps: it connects to the database, and it finds all the available tables for that database. So it found — if you remember the show tables query that we ran earlier — the tables that we had created in MySQL; Debezium picks up on those tables, so addresses, customers, orders, products, geom, products on hand, and it will start listening to them. At the same time, it will go ahead and create the associated Kafka topics, one per data table. So if you take a look over here in our Kafka window and list the topics for Kafka, you can see we've now got new topics that have been created — again, like I said, by default Debezium will create a Kafka topic per database table, so you can see we've got one mapping of a Kafka topic per database table: addresses, customers, geom, orders, and so forth.
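The JSON body of that POST is hard to read on a recording, so for reference, a registration request for the MySQL connector generally looks along these lines — the hostnames, password, and server name here are placeholders, and depending on the Debezium version the database filter property is spelled database.include.list or the older database.whitelist:

  POST http://debezium-connect:8083/connectors
  {
    "name": "inventory-connector",
    "config": {
      "connector.class": "io.debezium.connector.mysql.MySqlConnector",
      "database.hostname": "mysql",
      "database.port": "3306",
      "database.user": "root",
      "database.password": "<root password>",
      "database.server.id": "184054",
      "database.server.name": "dbserver1",
      "database.include.list": "inventory",
      "database.history.kafka.bootstrap.servers": "my-kafka:9092",
      "database.history.kafka.topic": "schema-changes.inventory"
    }
  }

The database.server.name value becomes the prefix of the per-table topic names, which is why those topics typically come out looking like dbserver1.inventory.customers, dbserver1.inventory.orders, and so on.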
So now that we've got Debezium running and we've got MySQL running, let's go ahead and execute a change against the database. Let's quit those logs and exec back into MySQL. OK, so we've got those tables in our inventory database. What we'll do is run an insert command — actually, let's see here, I'm going to exit out of that first, and what I'm going to do is run a Kafka consumer so we can see those changes coming in in real time. So I'm just using the Kafka console consumer here on the left-hand side of the window, listening against the inventory customers topic, because what we're going to do now, on the right-hand side, is execute an insert statement to insert a new customer. So do a select from customers — those are the current customers — and we'll go ahead and insert a new row into that table, and you can see, really quickly, in relatively real time, that Debezium has captured that change. If we do another select statement, we've got a new customer here called Kenneth Anderson, and Debezium has given us that JSON change event — like the one I showed previously in my slides — for what that change was, right? Likewise, if we decided to do a delete command, for example, and we did a delete from customers where the last name equals Anderson — there we go, there is another change event that Debezium propagates over to Kafka. So let me go ahead and end the demo there and stop my screen share.
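If you want to replay that last part of the demo yourself later, the statements were essentially the following — the customers table and its columns here are the ones from Debezium's example inventory database, so treat the exact names and values as illustrative rather than a transcript of what was typed:

  USE inventory;
  SELECT * FROM customers;

  INSERT INTO customers (first_name, last_name, email)
  VALUES ('Kenneth', 'Anderson', 'kanderson@example.com');

  DELETE FROM customers
  WHERE last_name = 'Anderson';

Each of those writes shows up on the customers topic as its own change event — the insert with op "c" and the new row in the after image, and the delete with op "d" and a null after image.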
OK, so in conclusion: we talked a little bit about the advantages of microservices, but at the same time talked about the challenges with how we've currently ended up developing our applications — more often than not, we're still constrained to using shared databases — and how that has really affected the way we're able to bring agility and speed to our development processes, as well as how we're able to solution for new business capabilities. We talked a little bit about the idea of a paradigm shift toward domain-driven data ownership across all of our services, for each business domain, and that the way to solution for that is through a centralized streaming platform like Kafka that provides this shared journal or log — which is, number one, scalable, because it can be distributed in the case of Kafka, and also retentive and replayable for new services — and brings the ability to develop our services and data more autonomously. And finally, we looked at Debezium and how we're able to leverage this change data capture technology to capture any change events that happen across our distributed set of services and propagate those changes to any other services that need them.

So I'll go ahead and end there and take a look at whether we have any questions. I don't see any questions coming in yet, but if anybody has any questions, please feel free to go ahead and start sending them in. How are we doing on time? Five minutes, I think? I think we have five more minutes. We've got one question: will the slides be available? Yes — I will be making the slides available on Sched as soon as this presentation ends.

One other thing I forgot to note: if you were to go down this architectural route — if you're using change data capture and streaming your changes to a streaming platform like Kafka — this also opens up a whole host of additional opportunities with, what's the term, stateful stream processing; my mind went blank for a moment there. I believe there was a session yesterday about learning how to do event stream processing with Pac-Man by Ricardo Ferreira — I was actually in that talk, very interesting talk — and it provides a whole host of opportunities for how you can enrich your data with stateful stream processing. So if you want to learn more, I'd encourage you to listen to that recording later as well.

OK, I did not scroll down to the questions — OK, let's see here. The next question is: if a service is reading the changes from the DB log, from topics that are published by another service, and it's trying to update its own DB during such a situation, how does Debezium guarantee that these do not get published to topics that can be... let me see if I can read this question again... I'm not quite sure I understand the scenario of this question. OK, so I think what the question is asking is what happens in the event where you might have cyclical dependencies or updates that happen against the database. I don't think that should be an issue, because the way Debezium works is it captures changes from the transaction logs. So in an ideal scenario, Debezium will capture these changes at least once, but in the event where Debezium fails, for example, and needs to restart itself, it will look back to the snapshot that it initially creates, and in those situations you may run into issues where you get more-than-once updates that Debezium sends out, right? And so in those kinds of situations, I think it's important that you also think about how your application handles those changes, and I would recommend engineering your applications in such a way that they handle them in an idempotent manner — so, for example, using PUTs instead of POSTs — so that if there ever is an event where you get more than one update for a particular row, you don't get duplicate data created, for example, right? So being able to manage those in an idempotent manner is one way of handling that.

Another question: for a high DB transaction volume service, what is the latency factor introduced by Debezium? OK, so I don't have exact numbers, because I haven't been able to run a proper performance test against Debezium, but what I've seen in the past is that Debezium is very quick at picking up changes, as you saw in the demo. But like I was talking about before, when you first bootstrap Debezium with the connector, it'll create that snapshot view of the initial database. What that means is, if you've got a database — maybe a legacy database — that already has a lot of data in it, and you first register a connector against that database, it's going to take some time for Debezium to go through and snapshot all the data that's currently in that database. And that, in my experience, has been something to be aware of, because sometimes, if you've got large amounts of data, that snapshot can take a while before it's finally up to date and caught up with the current state of the data.
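To put a rough shape on that idempotency point: if a consuming service mirrors the customer data into its own table keyed by the source row's primary key, then processing the same change event twice just rewrites the same row instead of creating a duplicate. A minimal sketch in MySQL terms — the customers_replica table and its columns are hypothetical, just for illustration:

  INSERT INTO customers_replica (id, first_name, last_name, email)
  VALUES (1005, 'Kenneth', 'Anderson', 'kanderson@example.com')
  ON DUPLICATE KEY UPDATE
    first_name = VALUES(first_name),
    last_name  = VALUES(last_name),
    email      = VALUES(email);

Applying this once or five times leaves the replica in the same state, which is exactly the property you want when change events are delivered at least once.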
OK, I think we're just about out of time, but if any of you have any other additional questions, I will be on Slack, so please feel free to go ahead and hit me up there. So with that, I'll go ahead and conclude this talk. Again, feel free to hit me up on Slack if you have any questions, or if you just want to talk about cloud or DevOps processes in general — I'm more than happy to talk with you guys on Slack. So thanks for joining, guys!