So my name is Leong. I'm a senior software cloud architect at Intel, with more than 15 years' experience in application development and cloud-based infrastructure. In the past I've had the chance to work on different types of applications, including peer-to-peer applications, online streaming media applications, distributed architectures, and academic research as well. I completed my PhD in 2013; my PhD research was on multi-cloud orchestration. I'm also actively involved in various OpenStack working groups, including the Enterprise Working Group, the Product Working Group, and the Application Ecosystem Working Group. In the past I've worked very closely with the Foundation and the community members, and we have published two e-books about what OpenStack is and how to implement OpenStack in an enterprise context. And this coming Friday we are actually having another book sprint, where we'll develop a third e-book focusing on application development for OpenStack. All right. So today's agenda: I'll talk about the business drivers pushing us toward microservices and API-centric design. I'll briefly discuss the architectures. I'll also talk about which OpenStack services and APIs can be useful when it comes to microservices application deployment. And I'll use an example to illustrate the differences between the architecture styles. Before we jump into that, I just want to get a quick idea: has anyone here done microservices application development before? And how long have you been doing that? More than a year? Or less than a year? More than a year? All right, good. So this talk is mainly aimed at the beginner level, so we'll cover a high-level overview. 
And also, I would like to ask: is anyone here looking at migrating an existing app into a microservices architecture and integrating with existing enterprise systems? Why do we need to move to microservices? I think everyone knows that we are now living in a very rapidly changing and competitive environment. Someone told me that the only thing that doesn't change is change itself. So we need to provide an environment for our application developers so that they can innovate faster, and we have to grab market opportunities whenever new ones become available. At the same time, we also have to integrate with our existing systems and databases so that we can expose the data securely to the lines of business, our partners, or our customers. And the key thing is that we need to deliver new services faster and more reliably. I just want to share one experience from when I worked for a startup company many years ago. My boss was a good boss, and a lot of the time he had good ideas. He would always ask me, can we implement this feature? Can we implement this idea? And I would look at it and say, yeah, why not? We can do it. And I would always ask, when do we need that? And he always told me: yesterday. So we are living with very tight deadlines today, and we really have to give our developers an environment where they can deliver software faster and more reliably. So let me talk about architectures now. I believe most of us are familiar with the monolithic architecture; we've been doing this for the past decade or more than 15 years. In this model, the application is generally designed as a three-tier architecture with a monolithic design, so every function and every module is bundled together into one application. This model gives us a very simple way to develop software; it's simple to test and easy to deploy as well. 
But one of the key challenges in this architecture is that as the application grows, it becomes harder to extend or enhance a particular module or feature, because modules can be extensively dependent on each other and the code becomes difficult to refactor, especially when you go beyond a million lines of code in a monolithic architecture. It also requires a long-term commitment to a particular technology stack: if you use Java to build this architecture, you'll probably have to stick with Java for 10 years, depending on your application lifecycle. It can also become very difficult to debug once it grows beyond a certain size; even a bug in one specific module that leads to a memory leak can bring down the whole cluster, or your whole system. And when we want to scale this architecture, we generally take the whole thing, the whole WAR file if you are in the Java world, and duplicate it across multiple instances. So that's what we have been doing with monolithic architectures. Then, later on, service-oriented architecture came along. I think the idea of service-oriented architecture is very promising, but the key thing is that it became complex to implement in the enterprise world. In SOA, you have multiple delivery channels to support; in the middle you have a lot of core services providing the business functionality; and there are a lot of data sources that you need to provision to your clients as well. As I mentioned just now, SOA became complex over the years. In SOA we tried to expose software as XML web services, using a lot of the so-called WS-* standards. If you come from the SOA world you're probably familiar with all those WS-* terms: WS-Addressing, WS-Policy, WS-Security, all those different kinds of standards. 
And it became more and more complex, sometimes even proprietary, in the SOA world. It didn't really help us deliver software faster or more reliably; in fact, in the worst cases it even made delivery slower or more error-prone. So today we come to this concept of microservices. Some people view microservices as very similar to SOA, and I think that's a fair statement. To me, microservices basically make SOA easier, without the burden of all those WS-* specifications or the enterprise service bus. One thing about microservices is that it's not just a technical change; it also involves changes in your organizational structure. In the past, especially in the enterprise context, if you designed a monolithic three-tier architecture, you probably had a design team responsible only for the front end, an application team that developed only the application logic, and a database team working only on the database layer. But when you come to microservices, your team is basically responsible for the whole thing, and every microservice is developed, managed, and deployed independently. We use a lot of HTTP or REST-based APIs, and the key concept in REST is the resource, which typically represents a business object such as a customer or a product. We use the CRUD model, create, read, update, and delete, together with the standard HTTP verbs: for example, you use HTTP POST to create a new resource, PUT to update a resource, GET to read a resource, and DELETE to remove a resource. So those are some of the concepts in microservices. 
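As a rough illustration of that verb-to-CRUD mapping, here is a minimal sketch in Python. The in-memory `customers` dict stands in for a real service's database, and the `handle` dispatcher is purely illustrative, not any particular framework's API:

```python
# Toy dispatcher mapping HTTP verbs onto CRUD operations for a "customers"
# resource. All names here are illustrative, not a real framework.

customers = {}
next_id = 1

def handle(method, path, body=None):
    """Dispatch an HTTP verb against /customers the way a REST microservice would."""
    global next_id
    parts = path.strip("/").split("/")
    if method == "POST":                      # create a new resource
        cid, next_id = next_id, next_id + 1
        customers[cid] = body
        return 201, {"id": cid, **body}
    cid = int(parts[1])                       # e.g. /customers/1
    if method == "GET":                       # read an existing resource
        return (200, customers[cid]) if cid in customers else (404, None)
    if method == "PUT":                       # update (replace) the resource
        customers[cid] = body
        return 200, body
    if method == "DELETE":                    # remove the resource
        customers.pop(cid, None)
        return 204, None
    return 405, None
```

For example, `handle("POST", "/customers", {"name": "Ada"})` returns a `201` status with the newly assigned ID, and a later `GET` on `/customers/1` reads the same resource back.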
When you come to microservices patterns, every service tends to have its own database schema. For example, here you can see S1 uses DB1, S2 uses DB2, and S3 uses DB3. Every service has its own database schema, and we try to decentralize the data store in a microservices design. This lowers the impact whenever there is a schema change for a particular service. This model is very different from the enterprise data-modeling perspective: in microservices, it may look like we are duplicating some data across services, but the benefit is that we get a shared-nothing architecture. If you share your database across many services and there's a change in the database schema, it will have a big impact on all the other services as well. So my recommendation for microservices design is to give each microservice its own individual database schema. In terms of deployment, every service can be deployed in a single VM; of course, you can also deploy multiple services on a single VM, but my recommendation is one service per VM. That makes it easier to deploy and scale. In the microservices world you're probably also familiar with service discovery, because every service, service A, service B, and service C, needs to talk to the others, so we need a service discovery model. We can use client-side service discovery or server-side service discovery, and if you're familiar with the Netflix open-source projects, they provide a lot of open-source tooling that helps you design microservices, including service discovery tooling. For communication between services, in the microservices world we generally see messaging with a message queue, or HTTP and REST APIs. 
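To make the client-side discovery idea concrete, here is a toy sketch: services register their instances with a registry, and the client itself picks an instance (simple round-robin here). In a real deployment this role is played by dedicated tooling such as Netflix's Eureka; the class below is only an illustration of the pattern:

```python
import itertools

# Toy client-side service discovery: instances register themselves, and the
# client does its own load balancing by rotating through known addresses.

class ServiceRegistry:
    def __init__(self):
        self._instances = {}   # service name -> list of "host:port" strings
        self._cursors = {}     # service name -> round-robin iterator

    def register(self, name, address):
        self._instances.setdefault(name, []).append(address)
        # Rebuild the rotation so the new instance joins the pool.
        self._cursors[name] = itertools.cycle(self._instances[name])

    def lookup(self, name):
        """Client-side load balancing: return the next instance in rotation."""
        return next(self._cursors[name])

registry = ServiceRegistry()
registry.register("transcode", "10.0.0.5:8080")
registry.register("transcode", "10.0.0.6:8080")
```

Each `lookup("transcode")` call then alternates between the two registered instances, which is exactly the decision a client-side discovery library makes on every outbound call.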
Microservices give us a lot of benefits, but they also come with some challenges. Yes, microservices give us a higher level of modularity: every individual module can be developed independently and communicate over a standard interface. Every microservice is isolated by itself, so you get better separation of concerns, and every service has a well-defined boundary for its functionality as well as a well-defined API contract. As I mentioned earlier, we can deploy microservices independently, and once we have the ability to deploy services independently, it lessens the dependencies between your development teams: if you keep each team small, that team can focus on its own service, develop more quickly, and make the CI/CD process easier. From an engineering standpoint it also makes it easier to innovate, and sometimes it allows us to align a service to a specific hardware profile, depending on what kind of service you are building. For example, CPU-intensive, memory-intensive, or I/O-intensive services can each be built and deployed independently on a specific hardware profile. On the other hand, microservices do have some challenges, especially when you are new to microservices design and to distributed architectures. The way we handle transactions is totally different in a distributed architecture, and distributed architectures bring a lot of challenges in terms of network latency and network failures. Testing and debugging can be more challenging as well. And if you are doing logging, make sure that you have a correlation ID between service calls, because we are calling microservices everywhere. 
Imagine you have hundreds of microservices, every service calling each other. If you want to debug that as a whole system, make sure every request has a correlation ID so that you can trace the request from service to service. Another thing about microservices design is that you need to think about API compatibility: when you have different versions of a microservice around, make sure clients can still use the older API when you deploy the new version of the service. As I mentioned, in a distributed architecture, doing transactions is very different. If you want to do a transaction in a distributed environment, you can move the transaction logic into the client side, or you can use some distributed locking service. But I would like to encourage you to think about a different design, which is what we call eventually consistent, based on the CAP theorem. Are you familiar with the CAP theorem? Right? Okay, so I don't have to spend too much time on it. In a distributed architecture we can only pick two of the three guarantees, right? So if there's a network partition, we can only choose either consistency or availability. When it comes to microservices design, we tend to move toward eventual consistency: we want to make sure that both sides, both clusters, stay available. If you have two clusters at two sites, for example, you want to achieve the highest availability possible in the distributed architecture. If there's a partition, we still want both sides to be able to update and read the data, but while the partition lasts, you cannot guarantee that both sides are consistent at the same time. Once you recover from the failure, though, they eventually become consistent. 
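The correlation-ID idea above can be sketched in a few lines: the gateway mints one ID per request, every downstream call forwards it in the request headers, and every log line includes it so a single request can be traced across services. The header name and service names below are just illustrative conventions:

```python
import uuid

# Sketch of correlation-ID propagation: one ID per request, carried in the
# headers through every hop, stamped onto every log line.

LOG = []

def log(service, correlation_id, message):
    LOG.append(f"[{correlation_id}] {service}: {message}")

def playback_service(headers):
    # A downstream service logs with the ID it received, never a new one.
    log("playback", headers["X-Correlation-ID"], "fetching media")

def gateway(headers=None):
    headers = dict(headers or {})
    # Mint a correlation ID only if the caller didn't already send one.
    headers.setdefault("X-Correlation-ID", str(uuid.uuid4()))
    log("gateway", headers["X-Correlation-ID"], "routing request")
    playback_service(headers)   # the ID travels with the request headers
    return headers["X-Correlation-ID"]
```

Grepping the aggregated logs for one ID then reconstructs the full path of that request through the system.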
And that's why we call this design the eventually consistent model. In microservices, or any design on a distributed architecture, some people may find it challenging to move to an eventually consistent model, depending on how you structure the data. Next, I want to talk about OpenStack services and APIs. Immutable infrastructure: have you heard about the concept of immutable infrastructure? I think when we come to cloud-based or cloud-native application design, we have to look at immutable infrastructure. Immutable, by definition, means unchanging over time, or unable to change. If you think about Java programming, a string is immutable, right? A string in Java is immutable, which means that once it is created, its value cannot be changed; if you want to change it, a new string is created and the reference is updated. We have to think the same way about cloud-based infrastructure, and OpenStack does allow us to create immutable infrastructure. So let's go through this. If you are not using immutable infrastructure, then every time there's a package update, a config change, or an application update, you basically have to update every individual VM, and you have to maintain all these VMs and make sure they always stay in the state you want them to be in. We don't do that with microservices. In a microservices design, we want to make the infrastructure immutable: when you first have version one of a service, call it v1, you deploy it on a VM. When you have a new version, you create another VM, deploy version two on it, then bring down the previous version and update the link to point to the second version. By using the OpenStack cloud, those APIs allow us to create immutable infrastructure. 
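That v1-to-v2 flow can be modeled in a few lines. This is a toy simulation of the pattern, not real OpenStack API calls: `Cloud` stands in for the compute API, and `endpoint` stands in for the floating IP or load-balancer target that gets repointed:

```python
# Toy model of the immutable-infrastructure flow: never patch a running VM;
# boot a fresh one with the new version, repoint the link, retire the old VM.

class Cloud:
    def __init__(self):
        self.vms = {}            # vm_id -> image it was booted from
        self.endpoint = None     # stands in for a floating IP / LB target
        self._next = 0

    def boot(self, image):
        self._next += 1
        self.vms[self._next] = image
        return self._next

    def delete(self, vm_id):
        del self.vms[vm_id]

def deploy(cloud, image):
    old = cloud.endpoint
    new_vm = cloud.boot(image)   # 1. create a brand-new VM from the new image
    cloud.endpoint = new_vm      # 2. switch traffic to it
    if old is not None:
        cloud.delete(old)        # 3. retire the previous version, untouched
    return new_vm
```

Note that the running v1 VM is never modified; it only ever gets replaced, which is what makes the infrastructure immutable.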
This allows us to simplify operations and to do continuous deployment with fewer failures. Every time we create a new instance, we test it before we move it into production, which gives us better confidence that our infrastructure has been tested. That brings me to the next topic: infrastructure as code. We have to treat our infrastructure the same as code, so any change to the infrastructure is the same as a change in the code. When you change the code, you bump up a new version; in a microservices design it's the same thing. Everything should be modeled and defined as code, and any change to the infrastructure itself should trigger a new deployment or a new version. Another thing I want to talk about is API-driven infrastructure. Have you used the OpenStack SDK or API in your application design before? Yes? Okay. In the past, when you wanted to deploy an application, your sysadmin or operator would provision the VM before you could deploy any software on it. If your application required some storage, your sysadmin also had to pre-provision that storage before your application could use it, and if your application required more compute resources, your sysadmin would provision that VM before you could do anything with it. That's what we did in the past. But today, from an OpenStack or cloud-based infrastructure perspective, we have API-driven infrastructure in your application code: if you require some storage, you can use the API to provision object storage dynamically from your application code, and you can also use the API to create a VM within the application code itself and then do something on that VM dynamically. In the whole process, no sysadmin needs to be involved. 
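As one small illustration of infrastructure as code, here is a minimal Heat Orchestration Template (HOT) sketch describing a single worker VM. The image, flavor, and network names are placeholders for whatever your cloud provides; the point is that this file lives in version control, and every change to it produces a new, reviewable infrastructure version:

```yaml
heat_template_version: 2016-04-08

description: One transcode-worker VM, fully described in versioned text.

resources:
  transcode_worker:
    type: OS::Nova::Server
    properties:
      name: transcode-worker
      image: ubuntu-16.04        # placeholder image name
      flavor: m1.medium          # placeholder flavor
      networks:
        - network: private-net   # placeholder network
```

Deploying is then `openstack stack create` against this file, and redeploying after a change replaces the resources rather than mutating them in place.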
That makes it very flexible and easy to scale your infrastructure depending on your use case. Someone might point out that for the first VM, you probably still need a sysadmin to provision it before you can do anything. But I would like you to think about that: it might not stay true. It is possible that we can deploy even without that first manual step; I'll leave that as a question for you to think about. Next, I want to go through an example: an online video platform doing video transcoding. In this example, the requirements are basically that users upload media files, and once they do, the system has to transcode each video file into multiple formats for various devices and display formats. The users are spread across different geographical locations, and of course budget is a constraint; nobody has an unlimited budget. So given these requirements, how can we build this application? If we do it the monolithic way, this is usually how we do it, right? You have the web front end, you have the application here, and you have a database and probably some shared storage for the media files. You'll need an upload function, a transcoding function, and generally a playback function, and all of these modules are developed as one monolithic architecture. When you want to scale this kind of deployment, you basically scale the whole thing together, using the same shared database and shared file storage. So how can we refactor this architecture the microservices way? Let's try to make it more interactive. Remember, we have three functions: one is upload, the second is transcode, and the third is playback. 
So if you were to migrate the previous monolithic architecture to microservices, how would you do it? The first thing is probably the upload function, right? You can take your upload feature out and design it as a microservice. You'll probably have to define some sort of API contract: the user goes from the upload UI to the API gateway, you define your upload service with a REST API, and that upload service uploads the content into object storage. OpenStack Swift provides object storage and supports the HTTP protocol, so you can even allow users to upload files into the object storage directly, if you set up the authentication and policies correctly. That's the very first piece you can carve out of your monolithic architecture as your first microservice. I remember about two years ago I presented a demo about how we re-architected a WordPress application to push all the static web content into object storage, so all the static content was served directly from the Swift object storage, without a web server, because Swift object storage supports the HTTP protocol. That is one approach to think about when refactoring your existing applications into microservices. Moving on, we have done the upload service; the second thing is transcode, exactly. You probably want to run a transcoding job, so you define your transcode service, and that transcode service calls some transcode workers. A transcode worker is basically the component responsible for the transcoding task, for example using FFmpeg, if you're familiar with the transcoding world. Once the transcoding task is completed, it uploads the transcoded file into the destination object storage over the HTTP protocol, again using Swift object storage. 
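The shape of that upload service can be sketched like this. The `InMemoryObjectStore` is a stand-in for Swift (which in practice you would reach over HTTP, for example via `python-swiftclient`); the container and method names here are illustrative:

```python
# Sketch of the upload microservice: it receives the media file and writes it
# straight into an object store through a Swift-like put/get interface.

class InMemoryObjectStore:
    """Stand-in for OpenStack Swift, keyed by container and object name."""
    def __init__(self):
        self._containers = {}

    def put_object(self, container, name, data):
        self._containers.setdefault(container, {})[name] = data

    def get_object(self, container, name):
        return self._containers[container][name]

class UploadService:
    def __init__(self, store):
        self.store = store

    def upload(self, filename, data):
        # Store the raw source media; the transcode service reads it from here.
        self.store.put_object("source-media", filename, data)
        return {"status": "stored", "object": f"source-media/{filename}"}
```

The important design point is that the upload service owns no file system of its own: the object store is the only shared artifact, which is what lets the transcode workers scale independently of the upload path.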
Let's say your application grows bigger and demand becomes higher; you'll probably want to grow this transcode component without affecting the rest of the services. One thing you can do is set up a resource manager to monitor the transcode workers. For example, if one request requires transcoding a media file into multiple formats, say you have one source media file but want to transcode it into 10 destination files, you generally have to spin up 10 transcode workers just for that particular task. So we can define a resource manager here to do the auto-scaling for the transcode workers. In OpenStack you could use Heat auto-scaling, but one reason I'm not proposing that is that Heat auto-scaling, to me, is a more passive way of auto-scaling: it will only scale out and create additional instances when the CPU reaches a certain threshold, right? If you instead have a separate resource manager monitoring the transcode load, you can scale in a more active way. One interesting part of this example is that transcoding is very CPU-intensive, so it is not ideal to set up auto-scaling based on a Heat template triggered by CPU load: a single transcoding task can consume 80 or 100% of the CPU, and that alone doesn't mean it makes sense to scale out. If you only have one transcoding task and it hits 100% CPU, which is totally fine according to your application logic, it makes no sense to spin up another instance, because there's no other transcoding task waiting. However, if you use this model, you can do even more. 
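The "active" scaling decision described above boils down to sizing the worker pool from the transcode queue depth rather than from CPU load. Here is a minimal sketch of that decision function; the parameter names and the one-job-per-worker default are assumptions for illustration:

```python
import math

# Active scaling: how many transcode workers do we want, given how many jobs
# are queued? CPU load is deliberately ignored, since one transcode job
# legitimately pins a CPU at 100%.

def desired_workers(queue_depth, jobs_per_worker=1, min_workers=1, max_workers=20):
    """Size the pool from queue depth, clamped to a sane range."""
    needed = math.ceil(queue_depth / jobs_per_worker)
    return max(min_workers, min(max_workers, needed))
```

A resource manager would call this on each monitoring tick and then boot or delete worker VMs through the cloud API to match the returned number.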
You can also add a prediction algorithm: by understanding your past transcoding workload, you can predict that certain transcoding tasks will come in within a certain period, and pre-scale your transcoders before the load arrives. This is what I call a more proactive scaling mechanism. The last piece is playback. For playback, you also define your API; you have a playback UI talking to the API gateway, and a playback service that retrieves the transcoded media, the destination file, and plays it back to the users. The microservices way allows us to customize the UI for different services: in an online video platform, for example, you probably have only content creators working with the upload UI, but your playback may be serving millions of users, so you can scale your playback UI and create a different experience without affecting your upload service. So that's how we can refactor a media-transcoding application from a monolithic architecture into a microservices architecture. I think we can apply the same logic to other applications as well, depending on how you want to use it. But if you look at this new architecture, what do you see? Remember what I talked about, the challenges compared to the monolithic architecture? Doing all of this lets us scale better and add new services more easily; however, it is more complex, especially when it comes to deployment. You have to deploy multiple different services, and every service needs to talk to the others. That's why we have the API gateway, and if you're familiar with the Netflix open-source projects, Netflix gives you a lot of open-source software you can use for the API gateway or the service discovery model. So we have talked about the different architectures, the microservices patterns, and the challenges in microservices. 
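The proactive pre-scaling idea mentioned above can be as simple as a moving average over recent job arrivals. This toy version predicts the next interval's worker count from the last few intervals; the window size and headroom factor are made-up tuning knobs, not values from the talk:

```python
# Proactive scaling sketch: predict the next interval's load from recent
# history so workers can be pre-booted before the jobs actually arrive.

def predicted_workers(recent_job_counts, window=3, headroom=1.5):
    """Average the last `window` intervals and add headroom for bursts."""
    recent = recent_job_counts[-window:]
    average = sum(recent) / len(recent)
    return max(1, round(average * headroom))
```

A real predictor would likely account for time-of-day seasonality in the workload history, but even this simple form captures the difference from threshold-based scaling: the decision is made before the CPU ever gets busy.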
We talked about OpenStack cloud-based infrastructure, and how we can use OpenStack cloud-based infrastructure, or API-driven infrastructure, to support microservices design. And that's what we covered today. If you have any questions or feedback, please send me an email, or send me a tweet; I'll try to check Twitter these few days. If you are interested, you can go to the cloud app launch at the Marketplace, and you can find me there. I'll also be at the Intel booth at 4 o'clock. Remember the e-book I mentioned just now? There's a book signing event at 4 o'clock; you can find me over there. I think I'm almost out of time, so if you have a question, just let me know. Thank you, everyone. And please remember to rate this session.