Thank you for doing this, and thank you for having me — it's great to be with an open source community, and great to have an audience, wherever you are, whether or not you're in the room. So let's get started. We're going to talk about microservices — not about how you should build microservices with open source technology in a few minutes, but about the concepts behind microservices. Why do they matter? Why not just build microservices as-is? Because doing that brings a lot of complexity into a business application. Now, if you introduce a microservices framework, can you make microservices easier for the business to consume? Well, that's what Istio is supposed to do, right? So we'll talk about the different implementations of Istio across a competitive landscape — Red Hat OpenShift Service Mesh sits in that landscape too. We'll talk about the topology: when you deploy Istio, you deploy two planes. One is the data plane, the other is the control plane. Why do we do that? Why do we want two planes? What is different in Istio, and importantly in its control plane, because it is a distinctive design? Finally, the best way to deploy microservices — that means dealing with traffic both north-south and east-west. What am I talking about? Look at a map of Singapore: north-south traffic is the Central Expressway, and east-west traffic is the Pan-Island Expressway. So what is different about a microservices topology? That's what we'll cover.
A microservice has stakeholders, and it has consumption models to be governed. Usually a microservice should not be reachable in just any way: you should have the right credentials before you start making calls to my microservice. So this is about connectivity. It's a very interesting idea — connective tissue, a well-defined connectivity between microservices in a mesh. That connective tissue should establish the right connections before you start communicating, and it's very important that every call is a known, accounted-for call; somebody should be able to tell which calls have been made. Microservice connectivity is what holds everything together when you try to do all of this inside an application. It also enhances the capabilities of the underlying container platform. That's right: in Red Hat, what we do with Istio is we built a product called OpenShift Service Mesh. Underlying it is our productised, containerised environment, Kubernetes — productised, and therefore called OpenShift Container Platform. So the four usual use cases, typically the adoption models for Istio consumption, would be connect, control, security and observability. (Why does that keep coming up? Alright, let's try this one. There you go, much better.) Those are the four usual concerns whenever you deploy a microservice, whenever you deploy a web app. So let's go back to the traditional application server.
Back then, when there were no microservices, there was no concept of a microservice mesh. You would just run your app on the application server. The world was simple, right? Well, it wasn't — it was just as complex, but we baked everything into one complex stack. We call this the application stack, and the danger here is that you end up with an overly complex monolithic app. Eventually it came to the point where the microservice industry evolved into open-standards-based frameworks — the likes of which you may have heard of: Vert.x, Quarkus, Spring Boot. And now we might even be talking about a mesh, because there are way too many services out there in your operating environment that you want to safeguard access to, while at the same time having some visibility into the traffic, the data packets. Which is why, in an operating environment full of microservices, it makes sense to just wrap a mesh around it, right? That's probably what your microservice-to-microservice interaction looks like today. We call this traffic east-west — the Pan-Island Expressway, for folks who live in Singapore. When microservices interact with other microservices, that happens a lot. This is versus north-south; I'll have a diagram later on where it's incoming traffic through an ingress proxy — I'm using terms familiar to the Kubernetes world. As you know, Kubernetes is one of the hottest open source movements right now, and yes, OpenShift Service Mesh is built on top of that. We also have a tool, which I'll introduce shortly, on the API management front that safeguards and secures north-south traffic access. Now, remember the chart that was flipped towards you 45 seconds ago, the application-server full stack? It rests on fallacies — the classic fallacies of distributed computing, right? You cannot rely on the network to do the heavy lifting of forwarding messages through all the microservices. You cannot take it for granted that the backend network topology never changes.
You cannot even assume that the transport cost of your UDP or TCP packets is zero. At the same time, you cannot assume that the network is homogeneous — that one subnet is the same as another. The underlying infrastructure will change, which is why there must be some layer of traffic management on top of the underlying TCP/IP infrastructure. That infrastructure has been in place for decades now and we take it for granted, but it's always changing underneath, and it does not have an SLA tied to it — nothing that lets you safeguard access to the microservices your stakeholders demand, nothing to assure that the service level agreements for those microservices will be met day after day, hour after hour. You can't just delegate that to the internet; you need a service mesh layer. So what did Red Hat think of? Number one: what is the hottest service mesh technology out there? Istio — istio.io, go to the website right after this. At the same time, what is the hottest container platform out there? Kubernetes. Take the two, meet in the middle, and you have OpenShift Service Mesh: basically a containerized service mesh on top of a container platform. So how do you deal with complexity? We containerize your microservices — which should be the way, because if you're a DevOps fan you're always rolling out new versions of microservices onto a container platform in containerized form. And because Istio, the service mesh technology, is itself containerized, it makes perfect sense to share one container platform. Of course, this brings to mind the hybrid cloud model. Do all microservices need to be containerized? Can we run on physical? Can we run on a different cloud? The answer is definitely yes, thanks to the concept of a microservices framework: microservice A doesn't care what microservice B is running on.
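Those fallacies — "the network is reliable", "transport cost is zero" — are easy to make concrete. Here is a minimal, hypothetical Python sketch (the `FlakyService` class and `call_with_retries` helper are illustrative stand-ins, not part of Istio) of the smallest thing a traffic-management layer adds on top of raw TCP/IP: bounded retries with backoff against a service whose network path occasionally fails.

```python
import time

def call_with_retries(call, retries=3, backoff_s=0.01):
    """Retry a flaky remote call a bounded number of times.

    The network is NOT reliable (fallacy #1), so a naive one-shot
    call occasionally fails; bounded retries with exponential backoff
    are the bare minimum a traffic-management layer provides.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return call()
        except ConnectionError as exc:
            last_error = exc
            time.sleep(backoff_s * (2 ** attempt))  # back off before retrying
    raise last_error

class FlakyService:
    """Stand-in for a microservice behind an unreliable network:
    it fails a fixed number of times, then succeeds."""
    def __init__(self, failures_before_success=2):
        self.calls = 0
        self.failures = failures_before_success

    def handle(self):
        self.calls += 1
        if self.calls <= self.failures:
            raise ConnectionError("packet lost")
        return "200 OK"

svc = FlakyService()
result = call_with_retries(svc.handle)  # succeeds on the third attempt
```

In a real mesh this policy lives in the sidecar proxy, configured centrally, so no application needs to hand-roll it.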
That's why there is a framework that allows them to connect. Do they even care what service mesh is governing them? No — they're oblivious to it. Which is why Istio is helpful: it is an evolving project, with new contributions weekly — I dare not say daily, but it seems to be daily — and that makes Istio stronger and more capable as time goes by, with all the features I'm going to talk about in greater detail. For instance, configuration. One of the key things Istio does is help you configure multiple services the way your stakeholders want those services to be consumed. Say, for instance, you have Spring Cloud Config Server — we're not shy about talking about technologies apart from Red Hat's, because we don't live on an island; it's a huge ecosystem, and Spring Cloud Config Server is popular for configuring multiple Spring Boot microservices. You could use that. At the same time, whenever you deploy a microservice project, you could be using Netflix Eureka and Ribbon. Netflix OSS was one of the first out in the market with the concept of a service mesh and its various patterns; right now there are many, many players in this industry. Service discovery is what Eureka and Ribbon do very, very well: back to DevOps again, you deploy apps on a very repetitive basis — who knows, every few hours a new app is born and deployed into the operating environment — and it's discovered using Eureka and Ribbon. Then there's Zuul, which you may have heard of if you've been a Netflix OSS adopter. Once you've seen the complexity of all the routing permutations between microservices, you'll need a Zuul server to help you make sense of it. Circuit breaking comes into the picture here too; it plays a big part in this.
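The Eureka-and-Ribbon-style pattern just described — a registry of live instances plus client-side round-robin load balancing — can be sketched in a few lines. This is an illustrative toy (the `ServiceRegistry` class, the `payments` name and the addresses are made up), not the actual Netflix OSS API:

```python
class ServiceRegistry:
    """Toy Eureka-style registry: services register their instances,
    and clients look them up by logical name instead of hard-coding
    hosts. next_instance() adds Ribbon-style round-robin selection."""
    def __init__(self):
        self._instances = {}  # logical name -> list of addresses
        self._cursors = {}    # logical name -> round-robin position

    def register(self, name, address):
        self._instances.setdefault(name, []).append(address)

    def next_instance(self, name):
        # Client-side round-robin over the known instances.
        instances = self._instances[name]
        cursor = self._cursors.get(name, 0)
        self._cursors[name] = (cursor + 1) % len(instances)
        return instances[cursor]

registry = ServiceRegistry()
registry.register("payments", "10.0.0.1:8080")
registry.register("payments", "10.0.0.2:8080")

# Three lookups rotate across the two registered instances.
picked = [registry.next_instance("payments") for _ in range(3)]
```

The point is that callers depend on the logical name `payments`, so instances can come and go every few hours without any caller changing.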
Whenever an SLA says a particular microservice is down, you need to start failing over — circuit breaking kicks in and plays its part here. Another reason to circuit-break is security: you know there has been an attack — a DDoS attack, a malware attack — on a particular microservice. Time to circuit-break. What circuit breaking means is that it cuts the transmission channel to the particular service that is impacted. Zipkin — who's been using Zipkin? Tracing, which Zipkin popularised — or rather the need for tracing — has given birth to a community called OpenTracing, and we have technologies in Red Hat that do precisely that. We use an open source community project called Jaeger — one half of Jägermeister, always a good adult beverage; don't mind me saying that, haven't had one for a while. Jaeger has been baked into quite a lot of open source products, including our own. In OpenShift Service Mesh it is the go-to technology that lets you look at the traffic data flowing into any part of the operating environment where a microservice is currently governed by OpenShift Service Mesh. What is the traffic like? That's what Jaeger can tell you — and another technology in this space, obviously, is Zipkin, which I mentioned briefly. As you know, microservices are very complex, which is why we think it's good to standardize as much as you can: standardize on the microservice frameworks, and standardize on the operating format — containers, containers, containers; I'm not the first person to say the same word three times in repetition — and that is how you standardize the format of an application as it's being developed, as it's being tested, and as it's finally deployed.
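The circuit-breaking behaviour described above — cut the channel to an impacted service instead of hammering it — is a small state machine. Here is a minimal, hypothetical sketch (the `CircuitBreaker` class and threshold are illustrative; real implementations like Hystrix or Envoy's outlier detection add half-open probing and time windows):

```python
class CircuitBreaker:
    """Minimal circuit breaker: after `threshold` consecutive failures
    the circuit opens, and further calls are rejected immediately —
    cutting the transmission channel to the impacted service."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.open = False

    def call(self, fn):
        if self.open:
            # Fail fast: do not even attempt the downstream call.
            raise RuntimeError("circuit open: call rejected")
        try:
            result = fn()
        except ConnectionError:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True  # trip the breaker
            raise
        self.failures = 0  # a success resets the failure count
        return result

breaker = CircuitBreaker(threshold=2)

def failing_service():
    raise ConnectionError("service down")

# Two real failures trip the breaker; the third call is short-circuited
# without touching the (possibly attacked or overloaded) service.
rejected_fast = False
for _ in range(2):
    try:
        breaker.call(failing_service)
    except ConnectionError:
        pass
try:
    breaker.call(failing_service)
except RuntimeError:
    rejected_fast = True
```

In a mesh, this logic sits in the sidecar proxy, so the protected service and its callers need no code changes.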
Very importantly as well: the platform, the container platform. When you have containers, you need a platform to govern them, to host them, to make them highly available. What we do in Red Hat OpenShift is provide a whole lot of developer tooling, not just runtime tooling — we're talking about CI/CD pipelines and config management, which I mentioned earlier. And very importantly, Istio enjoys the benefits of OpenShift: again, containerized delivery, and not just the containerization technologies but the security technologies, the config management technologies, and the automation technology. You can automate the way microservices are deployed, and you can automate the way Istio is reconfigured — and you might have to reconfigure every time a new stakeholder needs a new operating environment, or the scope of work changes. Alright, so the population of microservices just went up 25% and you need to secure them, and this is how you'll secure them from this day on, differently from before. Again: Jaeger tracing, circuit breaking, routing, service config. Instead of assembling all these different open source technologies — Spring Cloud Config Server, all the Netflix OSS goodies: Zuul, Ribbon, Hystrix (that's for circuit breaking), and yes, definitely Zipkin as well — we now have the equivalent all in one product, so to speak, thanks to Istio and thanks to Jaeger, all baked into OpenShift Service Mesh. Lots of vendors are providing this, and here's a brief view of the landscape; note that Istio has a huge contributor in the form of Red Hat — that's right, Red Hat is a big contributor to the Istio project. So what is Istio really all about? Command and control — command and control using a control plane over a bottom layer. So think about two tiers; there's probably going to be at least one more architecture diagram that shows
these two tiers. The upper tier, the control plane, controls: it gives you the ability to issue commands, issue new configuration changes, and thereby control the second tier. What's the second tier? The data plane. At the top right, under Istio, we have the Envoy project — you've probably heard of Envoy. Just like what an emissary or ambassador is supposed to do — that's the term "envoy" — it's an ambassador-like proxy to an existing microservice. Once Istio is installed, every single microservice has a proxy that listens alongside it. How else would you get all the good, juicy tracing data for Jaeger, right? So you need a proxy that applies the configuration change requests that the Istio command-and-control team — the administrators at the control plane layer — issue to the data plane: the Envoy proxies next to the existing microservices. Of course, there are options in the industry; I'm not saying this is specific to Red Hat OpenShift. You could switch out, for instance, the data plane. We at Red Hat recommend Envoy, but certain customer implementations — who knows, the industry is huge, it's evolving, it's so exciting — might want to swap Envoy for one of the fastest web servers out there in the market, NGINX. Even here in Red Hat we use NGINX to build some of our technology. Remember the north-south traffic? What do you think manages that? We believe an API management platform should do the trick — it's the right candidate to manage that — and you know what, 3scale API Management, which is from Red Hat, is built with NGINX underneath. So this is the architecture — you can get it from istio.io — and you can see what I just described from the ambassadorial point of view: the proxy sits next to the services, and the whole idea is that it propagates any configuration changes the control plane team might be issuing, and there might be many, always being issued to the underlying layer. I know, I
flipped it — the Istio team likes to flip it: they put the control plane underneath where you'd expect the control plane up on top. That's what I said a few minutes ago — the control plane sits on top of the data plane — but the diagram is flipped a little to emphasise that we care about the microservice first. That's right: microservice first, that's what the Istio community cares about. It ain't just about the control plane; it's flipped upside down because the health of the microservice comes first. So whenever a configuration management change request goes through, it passes, number one, through the different parts of the control plane. Here we have some cool-sounding names — what do they sound like? Naval terms, sailing enthusiasts' terms: Pilot, Galley and Mixer — and the change eventually makes its way to the data layer. Alright, so the data layer, the data plane: it is a collection of service proxies, it is meant for you to implement policies, and it communicates directly one-to-one — one Envoy entity to one microservice. Envoy is the default service proxy, and as always, it can be swapped out. But why Envoy? Written in C++, it is one of the highest-performing proxies out there, and its support for HTTP/2 and gRPC makes it very suitable — it has very fast communication with the service endpoints. Now let's talk about the control plane, all those cool-sounding naval or sailing terms. It is the single point of administration: whenever an administrator wants to effect changes, they don't go straight to any other plane — definitely not the data plane, only the control plane — and the service proxies pick up all the change requests set by the control plane and get updated. Very importantly, it combines all the isolated, stateless sidecar proxies into a single service mesh — that's the secret of Istio. Very importantly as well, we keep talking about the traffic data that is collected by the control plane on behalf of
individual services; when it's aggregated, it's known as telemetry data. Telemetry data, okay. So, the individual control plane entities. Pilot — what does it do? It configures the underlying data plane: it converts high-level routing rules and controls the traffic behaviour, and it has the machinery to propagate any configuration change request down to the data layer. Citadel — as the term might indicate, it's a bastion of sorts, meant to guard against intrusion. It is the security entity in Istio, and it enforces policy based on microservice identity rather than, you know, network identities. What's a typical network identity? An IP address, a hostname. No — we're talking about service identities: there's a set of security policies associated with a service, and Istio enforces that policy. Galley sounds like part of a ship; it is the repository and ingestion engine for all your configuration policies, and something needs to insulate — in some sense, buffer — the other Istio components like Citadel and Pilot from the underlying platform. Guess what that platform is: OCP, OpenShift Container Platform. That's Galley's job. How about Mixer? Sounds like a drink again, right — it's a Friday afternoon and I keep dropping hints about beverages. Well, Mixer is a very important bridge between the data plane and the control plane. It's an abstraction entity — I won't call it a layer; it belongs in the control plane. The idea is that your various backends — and Envoy, which right now we standardize on — are deliberately devoid of precondition checking, for example quota checks, and of the means to retrieve and aggregate telemetry data and prepare it for a reporting framework to consume — the likes of maybe Kiali (have you heard of Kiali? K-i-a-l-i) or a Kibana
dashboard. We like Kiali, and we like Kibana too, but Kibana is probably more at the OpenShift Container Platform layer, while Kiali is more on the service mesh side — we can talk about that offline if dashboarding really interests you. The important thing is that there must be some form of precondition checking, quota checks, and data preparation, and that's what Mixer does: Mixer is really the abstraction interface. Mixer integration is also available for 3scale API Management, which I mentioned is the go-to solution for safeguarding north-south — egress and ingress — traffic into an operating environment. Why? Because you need some sort of abstraction layer, an adapter, going into the data plane. You can see here at the bottom what I wrote: the use of adapters — there's at least one from the Mixer community — as a form of integration bridge. And here you can see why Mixer plays a huge role, with all that rich API I talked about: logging, quotas, backend checks, authorization — yes, authorization is what Mixer does as well. It serves as a bridge from the control plane to the data plane. A few version numbers to toss out: OpenShift Service Mesh currently supports Istio upstream 1.1.x as we speak, and it is supported on OpenShift Container Platform 4. Very importantly, you also have multi-tenancy — a highlight of OpenShift Service Mesh, and of course of Istio implementations on top of Kubernetes. I'm not saying no one else in the market is trying to do the same — to get Istio running on top of a container platform — but why, for goodness' sake, would they want to go through all that hard work when Red Hat has done it? Multi-tenancy is at the top of the list of priorities. Imagine you have multi-tenancy in both the control plane and the data plane. You could be, for instance, a services provider — I'm not saying a telecommunications service provider, but maybe they should be
considering this — and you allow, say, your corporate clientele to get on board a containerized Istio environment, which could be OpenShift Service Mesh, and then each of them, as a tenant, plays a role in safeguarding access, effecting configuration changes, and putting policies in place on behalf of their own clientele, their own customers. What do we have? The makings of an ecosystem — for the telecommunications industry, maybe financial services — an ecosystem where anyone with great ideas who wants to get onto, say, a microservices framework on top of a Kubernetes platform can start deploying really, really quickly. Bear with me through some of the Kubernetes-related terminology; we're moving really quickly — almost at the end. North-south and east-west: what's so cool about API management playing a big role in north-south traffic management? Think about egress and ingress, and you probably have different architectures in mind — the most popular: on the left-hand side there's traffic coming in, and on the right-hand side there's traffic going out. That's typical north-south. Now, the question is: who are we dealing with? A humble microservice — a single technical operation, a business operation baked into a piece of code, like Spring Boot. Now suppose you have a million microservices, not just one. What are we dealing with? It sounds like a whole lot of operational interfacing: every single microservice, even though it's a single business operation, has an exposed operational interface. Does that sound like an API? That typically is an API — the best form of communication with that sort of service, a microservice, is through an API. Now you have a million of them, so you have a million entries in your Swagger document. You've heard of Swagger documents, right? It's one of the most popular formats for describing
services, a la APIs. So that was a million definitions on the interface side. How about runtime? At runtime, depending on the popularity of these services, you might be looking at two, three, four times that amount of traffic going into the service layer. Chances are this is getting very complex — time to introduce an API management layer. Red Hat has a recommendation for that: it's called 3scale API Management, and it concerns north-south ingress and egress traffic. How about east-west — the Pan-Island Expressway — between each microservice? There are a million of them, to stay with my analogy; imagine the number of permutations between all these microservices. Microservices do make microservice calls — let me rephrase that: microservices do make calls to unknown microservices. So you need to be concerned about intra-operating-environment traffic — that's east-west: intra, not just inter, not just egress and ingress. We have the capabilities for that: we believe a mix of the service mesh and the API management platform will do the trick, both of which provide observability, security, and resiliency as well. Oh, did I not mention chaos testing? At Netflix there's a tool they call Chaos Monkey — what a name, that's really cool, right? We believe there are other tools out there that do chaos testing too. I'll keep the presentation short, but it's a great topic we could talk about at length. Chaos testing against your targeted microservice backend should be a very important phase of your microservice development life cycle — there, I said it. SDLC-wise, testing is important: test, test, test, and not just any test — shake it all up and see if any of those inter-service or intra-service connections break. Because if they do, that's good. The next question is: if they break, do they survive the failure? Maybe it's not good enough to just say it's broken; you must
have a means of surviving a failure. The service mesh is supposed to kick in with a pattern called the circuit breaker — a la Netflix OSS, right? Hystrix. You must have a means of kicking in the equivalent of a circuit breaker, which means there's a failover target for the broken inter-service connectivity channel. Did it kick in in time? Did it affect any of the SLAs put in place on behalf of the stakeholders? Speaking of stakeholders: they are not born equal. It depends on what they want out of the engagement experience. Do they want higher SLAs? Lower? Are they okay with middle-of-the-road? So there come to mind tiers of customers and users — remember the telco example I gave? Great example, even greater implementation: get them all into different tiers — gold, platinum, silver, bronze. It's something 3scale, the API management product I just talked about, is familiar with, and it's also something the service mesh world is beginning to get the hang of — in other words, getting different tiering in place. If you want the highest availability for your million-microservice operating environment, the question is not only how much — I know money is going to be the next topic — how much are you willing to pay, but also how do you want to do it? How would you get that 99.9% availability SLA? Certain configuration changes you'd make through the mesh I just talked about, using Pilot and Mixer. How about security — what level of security do you want? So bear in mind: different tiers solve the issue of having different SLAs. Put that in place. So that's it — any questions? Otherwise, feel free to contact me through any of these channels. That's it, I actually have nothing else, and this is actually my last presentation for Red Hat this month. So thank you, everybody.