Hi, good afternoon everyone. I guess most of us are settled, so we'll proceed. My name is Rahul, and this is my colleague Aman. We work with Cisco on their service provider solutions. There are two more people who contributed to this, but unfortunately they're not here today. So we have a solution which does a lot of things in the back end with switches, pushes configuration, and brings services up. But what we're going to talk about today is the whole set of back-end services that we built. They run on any cloud, and OpenStack is one of the supported ones. We went through a lot of iterations and learned a lot of things. We wanted to design our back-end services in a cloud-native way, and over time we did a lot of things wrong. We started with virtual machines, where we used to deploy our microservices, and then we moved to a container model. So there were a lot of learnings in the process, and we'll be discussing some of the anti-patterns and design patterns that came out of it. I'm sure most of you will have come across a lot of them; the ones we could relate to our own work are the ones we'll focus on. To start with, I'm sure we have a heterogeneous crowd where some people will be new to cloud native, so just a couple of introductory slides; those of you who have been working with this, please bear with us. So first of all, what is a cloud-native application, and why are we doing it? Why do we hear about it so much? Why is the market so attracted to it? Cloud native is basically a software design and architecture pattern where applications are designed and built to run in a way that reaps the maximum benefit of what the cloud provides. I've tried to list a few of those properties below: being distributed, scalable, multi-tenant, and platform independent.
So if your design inherits these properties, I would personally call that a cloud-native application. It's different from a cloud-enabled or traditional monolithic application. How is it different? The internal components of the system scale well and are inherently distributed. It is lightweight: with virtual machines, containers, and other advanced deployment options, the deployment itself stays light, and with DevOps and continuous-integration-based delivery, whatever you write reaches production quickly. Monolithic versus cloud native: so many things change. Why are we actually moving towards cloud native? Because we can achieve scale, resilience, upgrades, and so many other things seamlessly. Let's take the example of a monolithic application under load. You have an application with many components that are very tightly coupled: a monolith. Under regular load, everything looks fine. As soon as you increase the load, one of your components gets stressed. Now the only way to scale up is to replicate the entire stack and probably put it behind a load balancer, which we know is not the most efficient way of doing it, because it's just that one component that is stressed, not all of them. Why should we have to replicate the entire stack? But that's what you would do with a monolithic application under load: replicate, distribute the load, and things come back to normal. In the cloud-native approach, instead, each of these blocks you see would be a microservice. Some of you may have the question of whether you actually need microservices to be cloud native; we'll come back to that, but it is the well-known approach.
So these cubes here are your microservices, the components of your application. As soon as you increase the load and stress the system, some component, maybe your authorization component, gets beaten up really badly; we've all seen that happen. So you replicate just that one, put it behind a load balancer connected to the messaging system, and things are back in order. Going back to what I said about resiliency and scale: suppose tomorrow you want to change the implementation of just one microservice. You can do it very easily here, whereas in a tightly coupled system like a monolith it's very difficult. Then again, monolithic applications are a wide category and it depends on how well you designed them; but here you have a structured approach, the pieces are small, and changing things becomes very simple. Every technology that gets this much traction has a business impact behind it; that's why everyone is chasing it. So let's see what business impact cloud-native applications give you. They give you all the cloud advantages we just discussed. Apart from that, it's very flexible: most cloud-native applications are designed so you can migrate your load across any of the cloud providers. No company likes to be tied to specific hardware or a specific platform, because at the end of the day the goal is to run the software, and you don't want it tightly coupled to the underlying hardware, platform, or IaaS, any of it. Cloud native fits that model very well, and it's collaborative and agile, with DevOps and CI as part of it, as we discussed.
So the time between writing the code and it reaching production after testing is short. The business likes that; they don't have to wait a full release cycle before something reaches the customer. It's based on established guidelines: as I said, there are many available today, like the well-known twelve-factor app. If you follow them, you get a homogeneous kind of design that is easy to pick up, so it takes people a shorter bootstrap time to get onto this kind of system. Reimplementing, replacing, and upgrading, as we discussed, become very simple. It's resource-optimized and resilient: most microservices are moving towards containers because of the obvious advantages, which give you high-density virtualization and can easily be clustered to provide recovery in case of failures. Okay, so getting into the technical stuff a bit. If you asked me to design something cloud native, this would be a basic, primary cloud-native architecture. We would have a couple of microservices holding the business logic. You need log aggregation, because these are distributed applications running on many nodes and many instances, so you need an aggregator where you can actually debug things when they go wrong. You need an API gateway for routing your requests internally, and health and monitoring, because another challenge that comes with a distributed system is that it's very difficult to watch so many components at once; you need a system that can tell you their health and perhaps even take care of small situations on its own. There's a discovery service, which we'll dwell on more as we go forward. And backing resources: by backing resources I mean anything that gets attached over the network, like a database, say Cassandra, and so on.
And then, interestingly, you see a REST API on the top and a messaging queue at the bottom. Why do we have both? These are the two systems we use extensively for communication. REST is natively synchronous, so any kind of synchronous request, and usually all the public-facing APIs, would be REST. Anywhere else, mostly your microservices talking internally, or anywhere you can work asynchronously, you would use the message queue. Going forward, let's revisit a point from earlier: do we really need microservices to be cloud native? No, not really. You can design a monolith to be cloud native as well, but it is a big challenge. As long as it gives you all the advantages, as long as it scales well, as long as you can change things, as long as it is portable, it is cloud native. But there are some guidelines which make your life easy. It depends on you: if you want to go to market quickly, there are established guidelines you can follow; otherwise you can sit with the monolith and try to design it better. For the discussion today, we're laying out a few things that we felt helped us go cloud native fast. The first is DevOps. What is DevOps? DevOps is the collaboration between development and operations, and it's not just tools; it's process and culture as well. Some of the tools you would use are Ansible, Puppet, and Chef. Then there's CI; CI makes sure the code you write reaches production in a very short time and is tested well. Third is containers.
We could always use virtual machines or any other infrastructure service as well, but with the evolution of Docker and the toolkit that comes along with it, and alternatives like rkt, containers have become much better for application developers to build on. With their low footprint, they serve the same purpose virtual machines were serving, only better, so we see a definite advantage in using containers. The fourth thing is your core logic, which you would build as microservices. The other three we just discussed are part of process or infrastructure, where you choose one of the many available options; the microservices are the area where you actually innovate, the part you design yourself. So this is where we'll take a deep dive: Aman will go through a couple of anti-patterns and design patterns which we saw work well for us in our product. Then again, there are hundreds of design patterns; these are just some that really helped us do our job well. Thanks, Rahul. So, the first thing is the twelve-factor app. To design a microservice architecture, there is a set of guidelines, best practices we can follow, and the twelve factors are listed on the 12factor.net website. I'm not going to discuss all of them, but let's look at a few. The first is the codebase. The twelve-factor app says a version control system, like Git or Subversion, should be used for the application. Traditionally, monolithic applications also used version control, but the problem there was that each and every component had its own separate codebase.
But with the twelve-factor app, the entire application has a single codebase, and there can be multiple deployments: the same codebase can be used to deploy a development environment, a staging environment, or a production environment. That is the idea behind the codebase factor. Next, build, release, run. The build stage bundles the code into an executable; the release stage combines it with the current deployment config; and the run stage actually goes ahead and runs it in a given environment, which, as I said, can be production, staging, development, and so on, because all the environments are kept almost the same. The third factor I'll discuss, a very important one, is dev/prod parity. What used to happen in monolithic applications is that there were differences between the development environment and the actual production environment; developers would use a lightweight database, say, while production could not use that. With the twelve-factor guidelines, you keep the development environment the same as the production environment. You can use the DevOps approach, where there is collaboration between development and operations, and the difference between development and production should be as small as possible. You can read about the rest; any questions we'll discuss after the talk, because we don't have that much time. Next, we'll discuss some design patterns for microservices. These are not all the design patterns; there are many more on the web, but we'll cover the ones we actually encountered while working on our cloud-native application. We found some anti-patterns as well as some design patterns, and I'll start with them. The first one we identified is the fragmentation pattern: fragment as you scale, and the advantages that careful fragmentation gives.
So what happens? Whenever we talk about a cloud-native application, we immediately start designing a microservice-based architecture. But not every component is worthy of being an independent microservice. Each one takes resources, and it's not that easy to manage: you have so many moving pieces, you have to manage each and every one, and more microservices means more stress on your network. Traditionally, when you start, you think: I've got so many business units and business functions, so let me put each of them in a separate microservice, plus logging and this and that. By the time you start, you have 20 or 25 microservices, and as you scale you find that this is a concern and you need to split further; it's basically a sprawl, and after that, management just becomes difficult. So the best way is to start with a monolithic approach. I'm not saying to write a monolithic application, but the approach should be monolithic, where you identify which services are actually worth extracting as separate microservices. Let's see an example. There are four components in our application. Going forward, we identify that component two is the one which gets stressed, so we take component two out of the application and scale it independently. This component two becomes the first microservice in your application. That is the fragmentation pattern. Next is the resource adapter pattern. Most of the time, we found that a service had a public endpoint. What happens in that case? Anyone outside the application can directly access the service, which is a major security concern. So here comes the resource adapter pattern, where you do not expose the public endpoint of the service directly to the client.
Instead, you have an adapter service sitting between the request and the service. The public endpoint moves to the adapter service, and the actual business-logic service gets an internal endpoint. The benefit is that the adapter service can validate incoming requests: only legitimate requests are forwarded to the internal service, and the rest are denied. It also does some sanity checking, so requests that should never reach the business logic are handled by the adapter service itself. Next is what we call the anti-constants pattern. Think of how we all started learning to code as kids: we began by hard-coding everything in a file, and then we moved on to variables. Similarly, in the software development process, in the old days we used constants files that were part of the codebase itself; these files held all the hard-coded IPs, ports, and so on, and other files referred to the constants file for that information. Then we moved to the config-file approach, where everything was written into something like a conf file. But with this approach too, now that we live in a widely distributed, widely scaled environment, the problem is that we cannot handle everything with a config file, because there are so many changes everywhere and the system is so dynamic. In a cloud-native setup, say you are auto-scaling and bringing up services on demand: how do you go and edit the config files each time? This was a problem when we moved to the cloud-native approach, so we had to get away from that and put as much as possible into environment variables. So the solution to this problem couples two things. First is the use of environment variables.
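To make the environment-variable idea concrete, here is a minimal Python sketch. The variable names (`SERVICE_B_HOST`, `SERVICE_B_PORT`) are hypothetical, and this is our illustration rather than code from the product:

```python
import os

def service_b_address():
    """Resolve service B's location at runtime from the environment.

    SERVICE_B_HOST / SERVICE_B_PORT are hypothetical variable names;
    the point is that nothing is hard-coded into the codebase.
    """
    host = os.environ.get("SERVICE_B_HOST", "localhost")
    port = int(os.environ.get("SERVICE_B_PORT", "8080"))
    return host, port

# In a real deployment the orchestrator or container runtime injects
# these variables; here we set them by hand just to show the lookup.
os.environ["SERVICE_B_HOST"] = "10.0.0.7"
os.environ["SERVICE_B_PORT"] = "9000"
host, port = service_b_address()
```

The same image then runs unchanged in development, staging, and production; only the injected variables differ.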
So most things would be read as runtime environment variables. Let's see an example. Here I have two services, service A and service B. Say service A wants to talk to service B, but it doesn't know how to reach it. One possibility is an environment variable where service B's IP and port are exposed. The other is a central discovery service, something like Consul: a distributed lookup service which holds the IP, port, and other metadata for each service. The address of the Consul cluster is exposed as an environment variable, and service A queries the cluster to learn service B's location. The cluster replies with the IP, port, and whatever other metadata is needed, and then service A talks to service B directly. This is the cloud-native approach to communication between services. Next is the circuit breaker pattern. Most of us have used some kind of retry pattern: in a client-server model you send a request, get no response, and retry after a certain timeout. But think about the scenario where every retry after every timeout still gets no response. There is a real problem; how do we handle it? The circuit breaker pattern says: stop trying if the request is failing continuously. Let's see how it works. It also uses the central discovery system. Say service A is sending a request to service B but no response comes back. It does some retries, resending the request after a certain timeout, but still nothing comes back. So service A notifies the central discovery service that B is down. Now if any other service, say C, wants to talk to service B, it will first query the central discovery service to see whether B is up.
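Roughly, in Python, the bookkeeping for this discovery-backed breaker looks like the sketch below. A plain in-memory dict stands in for the Consul-style discovery service, and all the names are illustrative, not from our product:

```python
# A plain dict stands in for the central discovery service here; in a
# real deployment this state would live in something like a Consul
# cluster shared by all services.
registry = {"service-b": {"address": ("10.0.0.7", 9000), "healthy": True}}

MAX_RETRIES = 3

def call_with_breaker(name, send):
    """Retry a few times; on repeated failure, mark the service down
    in the registry so every other caller backs off immediately."""
    entry = registry[name]
    if not entry["healthy"]:
        raise RuntimeError(name + " is marked down, not calling")
    for _ in range(MAX_RETRIES):
        try:
            return send(entry["address"])
        except ConnectionError:
            pass  # timeout / no response: retry
    entry["healthy"] = False  # trip the breaker for everyone
    raise RuntimeError(name + " kept failing, marked down")

def flaky(address):
    # Stand-in for a request that never gets a response back.
    raise ConnectionError("no response from " + str(address))

# Service A exhausts its retries and trips the breaker...
try:
    call_with_breaker("service-b", flaky)
except RuntimeError:
    pass
# ...so any other caller now finds B marked down before sending anything.
```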
It will get the reply that B is down: wait. That is the pattern. Now, to recover from this situation you have multiple options: you can scale service B, replace it, whatever; that depends on you. But the pattern says that in such a scenario your request should not simply fail; if one component is not working, the rest of the application should keep working seamlessly. So basically you go to plan B: instead of showing "service unavailable," maybe a previous version of the service still works, so you bring that up and let it serve requests for some time. There are many approaches to the firefighting, but that comes afterwards. So this is like an advanced retry pattern. Yeah, right. Thanks. The next thing is what I'd call the log correlation and aggregation pattern. What used to happen before microservices? My app was deployed on a single machine, so managing the logs was relatively easy. Now, with microservices, everything is distributed: a client request comes to the first microservice, and over its lifecycle the request passes through several different microservices. Similarly, there can be multiple service requests coming from the client side, and each and every microservice generates its own logs. Now if we want to debug a particular service request, how do we do that? It's very difficult, because there are so many microservices and so many logs coming out of them. So what this pattern says is: tag each incoming service request with a correlation ID, and throughout the lifecycle of that request, whichever microservices it passes through, the same correlation ID is carried along, and the logs generated by those microservices carry the same correlation ID too.
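A minimal Python sketch of that correlation idea, where the service names and field names are purely illustrative:

```python
import logging
import uuid

# Every service logs through a formatter that stamps the correlation
# ID travelling with the request.
logging.basicConfig(format="%(correlation_id)s %(name)s: %(message)s")

def handle(request, service_name):
    """One hop in the request's lifecycle: log under the request's
    correlation ID, then hand the same request (same ID) onward."""
    log = logging.LoggerAdapter(
        logging.getLogger(service_name),
        {"correlation_id": request["correlation_id"]},
    )
    log.warning("processing %s", request["payload"])
    return request

# Tag the request once, at the edge...
request = {"correlation_id": str(uuid.uuid4()), "payload": "order-42"}
# ...and every microservice it passes through logs under that same ID,
# so the whole journey can be stitched back together later.
handle(handle(request, "auth"), "billing")
```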
It can be coupled with a log aggregation pattern as well, where a log aggregator picks up logs from the different microservices and projects them into a single repository. So those were five patterns addressing anti-patterns that we commonly see go unhandled. Now we'll move to some regular patterns. The first is related to design and scalability: the leader election pattern. Multiple instances of the same microservice should always elect a leader. What happens is that multiple instances share the same resource, so there is a need for someone to coordinate the resource sharing, and that is where leader election comes in. Out of, say, three instances, one is selected as the leader; the election can be based on the lowest instance ID, the lowest process ID, or one of several other algorithms. One important thing: all the instances must keep polling the leader, because if the leader ever goes down, the whole system goes down with it. So either the non-leader instances keep polling the leader, or there can be a designated sub-leader as well. Next is the queue-based messaging pattern, which is related to availability. It says: instead of sending requests directly to a service, use messaging queues. What happens is a client sends requests to a server, a microservice that handles them. The request rate is low for some time, and then it suddenly spikes. The microservice can certainly scale, but that particular spike is still not handled properly. So we put a message queue in between: even if the input to the queue spikes, the output of the queue stays flat. The flat rate may be high or low, but it is always flat.
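Here is a rough Python sketch of that load-leveling behavior, using an in-process queue as a stand-in for a real message broker such as RabbitMQ:

```python
import queue
import threading

requests = queue.Queue()  # the buffer between clients and the service
processed = []

def worker():
    # The consuming microservice drains the queue at its own steady
    # rate; spikes on the producing side never reach it directly.
    while True:
        item = requests.get()
        if item is None:      # shutdown signal for this sketch
            break
        processed.append(item)
        requests.task_done()

threading.Thread(target=worker, daemon=True).start()

# A sudden spike of 100 client requests lands in the queue at once...
for i in range(100):
    requests.put(i)

requests.join()               # ...and is worked off at a flat rate
requests.put(None)            # stop the worker
```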
One problem with this approach is that it cannot be used where you need the response as soon as possible, because it's asynchronous: the request sits in a queue, and if there are other requests ahead of it, they get served first. That is the one drawback, but in most places it works fine. The next is the query and update segregation pattern, related to data management. It says that the models for read and write operations should always be kept separate. What happens if we have the same schema, the same model, for both read and write operations? The same data transfer object is queried from the service on both paths. This sometimes causes mismatches: you perform a write operation, adding some columns, but the read side isn't able to read the added columns. It can also cause contention, where the resource is locked for writing while you're trying to read it. To solve these problems we use this pattern, where the read and write models are different, and consequently the read DTO and the write DTO are different too. The read model should always be a mirror of the write model, because essentially the data is the same; then, depending on the request load, each side can be scaled independently, with multiple database instances for reads and writes, even though the models differ. Okay, so this is the final part of our presentation; I'll let Rahul speak about this. So, once again, thanks, Aman. He took you through all the design patterns we identified as important during our development, and I hope they help you as well at some point. Now, another important topic is how to split microservices, because this is a very common situation when you're developing with microservices.
You might have a monolithic application that you want to migrate to a cloud-native scenario, among other use cases. These are the three basic use cases I could immediately come up with: transitioning a legacy monolithic application to cloud native; splitting a microservice because it is needed structurally or functionally; and splitting a microservice that has components which might become bottlenecks at scale. So how do we identify where to actually start? How do we split? We all know the database is one of the big concerns, but let's start with this: do not decompose the entire application at once. This is one of the most common mistakes we make. We take the application, say "these are the different pieces of logic," disintegrate it into many parts, and try to stitch them up as quickly as possible. That doesn't work very well. Instead it should always be like chipping: you keep chipping away at the monolith, and at the end of the day you have segregated it into many microservices. Another approach is to start by building each new feature as a microservice: when a new feature comes in, implement it as a microservice that talks to the legacy monolith. Then identify the areas that will need changes in the immediate time frame and pick them up for splitting, and likewise identify areas which, when changed, do not impact other areas. These areas are called seams in a monolith: parts you can change without affecting the rest of the application. That is how we identify the split points, and there are certain approaches we take while splitting. The database is the most tightly coupled part of most of the monoliths we've seen.
So we really need to start by looking at the schema, and when we do, the foreign key constraints are the major deterrents. How do you actually go ahead and take away a foreign key constraint? You expose an API so the new microservice can fetch those values once you split it out. Yes, some problems come along with this, like data inconsistency: say one of your tables has a value missing. These are things you have to handle as you go, because they are very case-specific. Then there's splitting components that share common data: say two of your components access the same data in a database, maybe some static data. You should split that out so that a dedicated microservice just serves that data. The simplest case is when columns in a table and logic in the code can be mapped and separated directly. There may be some replication as part of all this, but then again, it's for the greater good. And last but not least, an important part is that you have to handle all this with a proper transaction-and-rollback model. In a monolith, if some part errored out, you knew it immediately, because everything was contained inside one application. But in a distributed cloud-native application with many microservices, say your request is served by three microservices and then hits an error in the fourth: you need to make sure the sanity of the system remains after you roll back. This is an important aspect when you migrate from a traditional monolith to a microservice-based cloud-native application. So, yeah, that concludes most of what we wanted to cover. We're open for questions and answers for as long as we have time.
I guess we should still have some time. Go ahead. Sorry, you started already. My name is Michael McHugh and I work for Red Hat. You were talking about injecting information into your applications using configuration files or environment variables. A lot of the problems we run into involve injecting sensitive information into these applications, like credentials to access cloud resources, where you don't want to use an environment variable or a configuration file. What do you do to solve those problems? For us, while we were developing, you can use something like Vault or a similar secrets store to get a token. I might not have the best answer right now, but we can definitely connect on this; that's a valid point. It's easy in development, but for a production environment it's a real concern. I think we do use Vault for that, but I don't know whether that fully serves this purpose or not. Thanks. Hi, I had a question about compatibility of your microservices. How do you manage it if business requirements are moving in a direction that requires a change across microservices to implement a particular feature? Is this something where you have to do artificial things to make sure you're always staying compatible, or is there a straightforward way to manage those kinds of changes across multiple services? I think most of this is handled as part of CI as well: you make changes to the various microservices and then integrate to make sure things roll out well. You can also do it in a phased manner, where you change one of the microservices while keeping API compatibility, then deploy the next one and slowly move over. If I understood your question correctly, that's what I would do. Thank you. Thanks for the presentation. I have two questions; one is around microservices.
How do you maintain the relationships across microservices from a business perspective? And that ties into the monitoring part of my other question: how do you monitor the system's availability from a business perspective? Thanks. So we have written a component that goes and queries the health of each microservice, and we have a small agent, some agent code, inside each microservice which gives it all the relevant data. Then again, if your needs are very general, you can use existing tools. I have one point to add on that: it happens with two approaches. One, it can happen internally: as Rahul said, there is a small agent file inside each microservice which provides the health information, and the health monitoring microservice talks to that agent to learn the health of that particular microservice. The next thing it does is export this health data from inside to outside, and outside there are tools which can be used for the broader health monitoring. So, as we said, we give you a collective health view: as I mentioned at the start, we have a set of switches that we monitor, and this is the back end. We collect data, and that part is very specific to the deployment itself. In our case, we exported all the data out of the monitoring microservice and combined it on an external machine which takes the data from the internal microservices; from the switches we take the health, and we present a collective health view only. Thanks. Maybe a follow-up on that: you talked about separating out a microservice that reads and writes the data.
Is it still valid to have two microservices write the same data that's stored by another microservice, or is that kind of a violation? Sorry, can you repeat the question? You have a set of data that two different microservices would operate on, including writing it. Is that a violation of microservice design, or is that okay? I don't think that's a violation of microservice design. I think that works fairly well, and it is one of the approaches we have to take when we're splitting the monolith. Don't you have the issue of... There is a namespace kind of concept, so each microservice has its own namespace: even if a couple of microservices are sharing the same resource, it is isolated, and each microservice has its own context on it. Otherwise you would have a dependency of one interacting with the other. Right, and that was one of the points that really helped us. When you design your microservices from the ground up and start with many microservices, that is eventually going to happen. It is one of the approaches, but the kind of data each microservice sends out is totally under your control. You have to keep in mind that it's a distributed system and it will generate a lot of network traffic, so while writing the code itself, you have to make sure you write it in a very optimized fashion. I guess we are past the hour; if there are more questions, we can take them offline. So thanks, everyone, for your presence here. Really appreciate it. Thanks, everyone.