All right, hello! Hello. And this is how we test the microphone, right? Yeah, let's go. Okay, great to have everyone here, so let's jump right in. Quick check that you're in the right place, in the right room: this is ServiceMeshCon, and we're recording. Let's go. So, I recently chatted with Danica on the podcast about her pet project of growing houseplants. I have a lot of houseplants, and I came up with what I think is an interesting way to explain microservices using houseplants. Go for it. So, when you start building your microservices, everything starts as one nice little thing. You put a lot of attention into it, it's growing, it's the first one, and it's going well, and you're doing okay. So you think: I need to add one more microservice, or one more houseplant, right? How many times has that happened to you? Like forty times? This is where it goes. We start adding more microservices. They're still relatively small and tidy, and everything is good. Over time you want to add more functionality to make them more resilient, so they grow naturally. And over time that functionality becomes so powerful that you lose track of where things are going; the place becomes very dark. Many microservices communicate: some of them communicate well, some of them don't. At some point you start noticing that some of the microservices are dying, and you don't know what is going on there. Maybe there's too much infrastructure; maybe the microservices became too big, right? Yeah, that looks bad. And over time, this is how I look every time I try to explain this, or how some of my customers look when they try to explain their microservice architecture: like Charlie Day from that It's Always Sunny in Philadelphia episode. How many of you have seen It's Always Sunny in Philadelphia?
I don't know... yeah, cut the jokes. So you'll understand half of the jokes from this talk; hopefully YouTube will appreciate them. Anyway, I would like to introduce my co-speaker, Danica Fine. She is a senior developer advocate at Confluent, and I am Viktor Gamov, developer advocate at Kong, and today we're going to be talking about all things Kafka. Why are we the right people to talk about this? Well, I love Kafka. I don't know about you. Yeah, I wrote a book about Kafka, so I think I'm also a good candidate to talk about this stuff, right? Yeah, I think we're qualified. Absolutely. Can we get the mic a little louder for Danica and make sure it works? I'm much quieter than Viktor. All right, we're good. I think we're good. So yeah, I'm Danica. Viktor might look like Charlie when he talks about these things, or might not want to look like Charlie; I kind of embrace it, I really enjoy that aspect of it. But we're talking about microservices here, so let's get a little serious. You're probably all here because you understand that architecting your system with microservices makes sense, as opposed to a monolithic architecture. It makes sense to break down your application into smaller pieces that are more manageable. You're more flexible; you're giving the individual teams and business units control over the applications they're building, over only the stuff they care about, only the data they really care about. But by breaking it up into smaller pieces, you face the inevitable problem of how these services actually communicate with one another. I have a few ideas, and there are plenty of protocols we can apply here. Okay, well, we'll get to it.
So when you start thinking about integrating microservices, you're going to encounter this problem, and you'll probably get frustrated: there's not really one good way to integrate them, right? In reality there are a lot of options for you; there are just a lot of bad ones. There are a ton of options available, some better than others, some more intuitive than others, so let's jump into those. First of all: the file system. How many of you have ever integrated two services by pushing a file into a common space, FTP-ing or SCP-ing a file? It's good, it's okay. We're all going to have to talk later. We've been there; let's call it enterprise software. This is how we built things for years, and we shouldn't be doing it. What about databases? If you've built a system that communicates or pushes information through databases, it's okay. This is a good place to share about it; there's no shame in this room, so we can talk about it later. More people... still more people. I mean, databases make sense, right? If you're moving from a monolithic architecture to microservices, in the monolith you definitely had at least one database, probably more like twelve hanging around. So when you're breaking down that monolith and looking to integrate your microservices, you probably still have a database sitting around, and you definitely have at least one engineer who knows how to use it. So why not use the database to communicate? The reason not to: a database is great for storing the state of an individual service, but for integration, when the services start coupling to each other through it, you very quickly end up in a situation where you need a committee of approvals for changes to the database.
If you're changing the schemas, you need to notify everyone else that you're changing the schema of this database, and very quickly it's not microservices anymore. You end up with a monolithic mess; in reality, you're back where you started, back with a database that sprawls, with all your business units using the same thing. So maybe we shouldn't use a database. What about RPC? It can be REST, it can be any type of remote procedure call. I don't know if you've ever heard of RMI — anyone building microservices that communicate through RMI? It's cool, right? Yeah, or gRPC, as the kids call it these days. RPC makes sense, right? This is how we communicate: request, response. It's kind of like back in the day — I pick up the phone and call you. Yeah, but what are you going to do if I'm not available to answer the phone call? Honestly, I'm persistent; I'm going to call you ten more times. So you'd actually build a service that, in case of failure, just keeps hitting the refresh button and hopes things recover. I would rather leave you a voicemail with the information you need in order to answer, and you reply whenever you're ready. Yeah, voicemail seems a little antiquated, but we'll get to that, I think. So if we're using remote calls, we're going to end up with something like this. We have our microservices, but now, instead of the monolithic case where everything was packaged neatly in one application — it was good, it was cozy — we're making network calls, reaching out into the ether, hoping someone will respond, and if they don't, we're leaving 25 voicemails trying to figure it out. So now we have all these requests in flight, and it's kind of a mess.
It's tangled again; it's tightly coupled; it's hard to keep track of all these things. Another problem here would be cascading failures: if one service fails and you continue hammering another service, nothing good will happen. So a question for you, my dear developer friends: how would you investigate a problem in the system when there are failures? What's your typical go-to? Who said logs? Oh my god, you deserve a prize right away — well, I don't have any. All right. So you would look at the logs, and this makes sense: we're all developers here, we know how to debug applications, tracing and logs. It's a good way to see into systems. But okay, hear me out: what if we just let our microservices communicate using a log? What if — and this is no secret — what if we used Kafka? Okay, we're not here to argue whether Kafka could be a good choice for this; we both love Kafka, obviously. When you're using Kafka as that communication layer between your microservices, you get something like this, and it's more loosely coupled. This is good: now all the microservices send information to Kafka and communicate through it. For those of you... how many of you are familiar with Kafka? Okay, good. For the few people who didn't raise their hands: Kafka is a distributed event streaming system, and that's a very concise but heavy definition to unpack. Very quickly: it's a streaming system, so real-time — we're communicating information quickly and efficiently across the system — and it's an event streaming system. For those who aren't familiar with events:
they are immutable. We're communicating immutable facts across our system quickly and efficiently, and that is a perfect way to communicate between microservices. Similar to the voicemail we talked about: once you leave a voicemail, you cannot change it. Or, like when you're in a conversation with your significant other and you say something you weren't supposed to say, the only thing you can do is send another event and hope the receiving system processes it and changes its state, right? That's how you can explain immutability to other people: once you say something, you can't change it. Now communication in our systems becomes more untangled, and by using Kafka as a log capturing all the facts in the system, we can do multiple interesting things. For example, the voicemail you left me might not only be received by me: someone else can also listen to that event and take some action on it. Same thing with Kafka: a message can be consumed by multiple consumers, and each of those consumers may react differently. In this case we're building asynchronous communication. By the way, this is one of the patterns you can find on developer.confluent.io, which covers how to build event-driven systems and how to apply Kafka to these things. And if you have an urge to tweet about the slides, our Twitter handles are conveniently placed at the bottom, so please, do not hesitate. We've already started talking about Kafka, but let's get into some of the technicalities. The way I like to explain it: Kafka is a communication layer that is data-aware, so you're able to build communication that is data-aware.
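The append-only-log idea described above can be sketched in a few lines. This is a toy illustration of the concept behind a Kafka topic — immutable events, independent consumer offsets — not Kafka's actual implementation or client API, and all the names here are made up for the example.

```python
# Toy sketch of the idea behind a Kafka topic: an append-only log of immutable
# events, read independently by multiple consumers.

class Log:
    """Append-only sequence of events; nothing is ever updated or deleted."""
    def __init__(self):
        self._events = []

    def append(self, event):
        self._events.append(event)
        return len(self._events) - 1   # the new event's offset

    def __len__(self):
        return len(self._events)

    def read(self, offset):
        return self._events[offset]


class Consumer:
    """Tracks its own offset, so many consumers can read the same log
    at their own pace without affecting each other."""
    def __init__(self, log):
        self.log = log
        self.offset = 0

    def poll(self):
        batch = [self.log.read(i) for i in range(self.offset, len(self.log))]
        self.offset = len(self.log)
        return batch


log = Log()
log.append({"type": "order_placed", "order_id": 42})
log.append({"type": "order_shipped", "order_id": 42})  # a new fact, not an update

billing = Consumer(log)     # e.g. a billing service
shipping = Consumer(log)    # e.g. a shipping service

print(billing.poll())       # each consumer independently sees the full history
print(shipping.poll())
```

The point of the sketch: the "voicemail" is never edited, and adding a second reader costs nothing — each consumer just remembers how far it has read.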
You're sharing facts about the system that other systems can understand: you share the order details, and other systems, based on schemas and the information available for the message, can understand them. So communication happens not only at the packet level but also at the data level, and if you're interested in a particular piece of data, you'll be able to receive it. Now let's talk about Kafka as a system you have to deploy. How many of you are running Kafka on Kubernetes these days? Okay, not many. How many of you are running Kafka on Kubernetes in production? Nice — pretty much the same people who run Kafka in production. Are you enjoying it? I knew it, and that's why some automation exists to let you run this successfully: there's an open-source operator from Strimzi, which is also a CNCF project, there are proprietary operators from Confluent, and some others. The idea is that bringing a system designed in pre-cloud-native times into the cloud-native world requires certain skills, and those skills are usually tied to human SREs who know how to run a system like Kafka and also know how to run stateful systems on Kubernetes. This slide is a very simple sketch of Kafka's architecture. What I like to say in this particular case: we're running one distributed system, Kafka, which depends on another distributed system, ZooKeeper, on top of yet another distributed system, Kubernetes. So, you know, many things can go wrong, right? Yeah, pretty meta. Now, about ZooKeeper and Kafka:
it's good that many people are familiar with these concepts, so I don't need to go very deep, but for the people watching on YouTube: for a Kafka cluster to work, it depends on a ZooKeeper cluster, and ZooKeeper needs to establish a quorum so that the metadata stored in ZooKeeper is available for the Kafka brokers to serve. Same thing for Kafka: the brokers need to form a quorum, and in order to do that they need what we call stable network identity. Each broker — each member of the cluster — needs a particular name, so pods from a Deployment with random names will not fly here. We need something else, so we use StatefulSets, which provide that stable network identity. Another thing we need to deal with is stable disk identity: when your pod comes up, it needs to attach the same disk, because Kafka is a stateful system — it's about storing your data, and the data is distributed and partitioned. The good news is that good people in the Kafka community have worked tirelessly to simplify this and make Kafka more cloud native, with KIP-500. A KIP — Kafka Improvement Proposal — is how the Kafka community communicates changes; you can Google it and find the full description of how this works. But essentially, we no longer need ZooKeeper, and we'll be running Kafka in ZooKeeper-less mode. Major win. Yeah. So what else is important — what else do you need to know in order to run Kafka? As a client, as a producer or consumer, I need to be aware of the nodes in the cluster, right? So how does your solution handle that?
With Kafka, when your application — a producer, a consumer, or a Kafka Streams application — needs to communicate, it needs to know at least one URL, the bootstrap server, in order to connect and learn some metadata about the cluster. Remember, I mentioned that Kafka distributes data: all this data is spread across the Kafka brokers using a mechanism that can very briefly be described as consistent hashing, so based on the key of the data, the producer or consumer decides where to produce to and where to consume from. The bootstrap server gives you information about all the brokers; I don't need to put all the brokers into the bootstrap server list, just one, but that one broker needs to provide me with information about the other brokers. Okay, cool, I can deal with that. That sounds good. All right, what about security? That's probably going to come up, because a monolith is built in a very specific way: it's super secure on the outside, and once information passes in — once we've proven it's not malicious — it can do whatever it wants. At the microservice level, we're still probably within the same network. So should we just trust that it's okay because we're in the same network? Probably not; zero trust is maybe a little better. So we should take the mechanism we used at the monolith level and shrink it down to the microservice level: trust nothing until it's validated. But the individual teams — we're breaking up these microservices, and now teams own the different business units and the applications specific to them — are they responsible for writing all the boilerplate code to make sure security happens as it should?
Yeah, another point to bring up: when Kafka was designed, there was no cloud-native approach to solving these problems, so the system needed to come with batteries included. Kafka naturally has everything you need to build a secure connection: there are ACLs, and there are different ways to connect securely. However, this comes with a price. Oh, yeah. Are you familiar with the concept of zero-copy? I've heard of it, yeah. Kafka is pretty efficient, and part of that is the way it moves data directly between the disk and the network socket without copying it through user space. But as soon as you add encryption, a little latency adds up. It's like the reason people hate Java: they've heard about the garbage collector and assume GC is bad when you're relying on SLAs and something is standing in front of your system. Even though, over the years, encryption in Kafka and in Java has become more and more efficient, some things you cannot change: when data arrives over an encrypted socket, it needs to be decrypted, and only after that can it be flushed to the file, whereas with plaintext, the data from the socket can be flushed to the file immediately. So this is something we can work on — something we can actually solve with the solution I'll propose in a few seconds. All right, and I think it's time, right? We're 15 minutes into the presentation and we haven't said the words "service mesh" at ServiceMeshCon. All right, we're good. There's still time to talk about the service mesh.
Interestingly enough, the service mesh — at least the current generation — provides a very nice way for developers to think about infrastructure, specifically focusing on the value and the business logic rather than on the infrastructure. Many developers know how to write code in their particular language and their particular framework that does encryption and all that type of jazz, right? But what if you didn't need to? What if the infrastructure gave you, as you said, the zero-trust promises already, and handled all this automation? Kafka SREs have years of experience building this type of solution, but these solutions need to be properly automated. It's like one of the characters from a movie I really like, The Dark Knight, once said: if you're good at something, never do it for free. That's why Kafka SREs are expensive, and the tools that do this automation for you are expensive too, right?
Tweet at me if you get the reference of who said that. So we're going to talk about a very small use case — small not because I can't do bigger, but small just to explain it, because there's a lot going on here. We have a producer, some system that captures orders from the web and publishes them into Kafka. We have a consumer that needs to read this. Those two things need to figure out how to find the Kafka brokers, and Kafka will be connected to some service. We could throw things like Kafka Connect into the mix, but once we figure out this smaller use case, that's where we go next. This is where we switch places and I'll show some of this in practice. You'll need to fill the awkward silence for a few seconds while I switch to a different display mode. I will fill the awkward silence: not everyone hates Java, right? We're all good. Okay, I'm just checking. That was not that long, and it's not that awkward. Okay, so, the idea of bringing this together is tied to a couple of things. I'm running a small Kafka cluster with three nodes, and one of these nodes takes the responsibility of being the leader — and we're not running ZooKeeper, so it's nice and cool. We have a consumer and a producer. How did this come to be? This is where we look at the Kafka configuration. I deployed this with a simple StatefulSet; there's nothing really specific here. You can use a custom image, an image that comes from a vendor, or some open-source image — it really doesn't matter, so you can see there's no other magic. The StatefulSet that's responsible here has a nasty script built into the YAML. I personally hate this.
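For reference, the non-scripted part of a StatefulSet like the one described looks roughly like this. The cluster name follows the talk's naming, while the image, port, and storage size are placeholders — this is a sketch, not the exact demo manifest.

```yaml
# Sketch of a StatefulSet giving brokers stable names (cluster1-0, cluster1-1, ...).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cluster1
spec:
  serviceName: cluster1              # headless Service owning the pods' DNS entries
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: some-kafka-image:latest   # placeholder: any open-source or vendor image
        ports:
        - containerPort: 9092
  volumeClaimTemplates:              # each broker re-attaches the same disk on restart
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

The two properties the talk calls out map directly onto this: stable network identity comes from the ordinal pod names under the headless Service, and stable disk identity comes from the `volumeClaimTemplates`.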
That's why I prefer the operator approach, but in order to explain the bits, I need to get a little dirty. There are a couple of things I need to build here, and this is probably the most important one, the thing everyone cares about. In order to connect clients to a Kafka broker, we need to provide so-called advertised listeners. This is what the bootstrap server provides to clients so those clients can connect. Essentially, think of it as a virtual address: it can be a virtual IP, or, if you're exposing this to the outside world through a load balancer, it's going to be the load balancer URL. In this particular case, we have this somewhat cryptic, conventional URL. One of the things we rely on here is the service discovery that comes from the service mesh. We're deploying this in the mesh with Kuma, and the .mesh domain for discovery is provided by Kuma. But a couple of things here are purely conventional — nothing to do with Kuma itself. We want this cluster to be exposed and available through this particular URL, and the way we do that with Kuma is with a thing called a virtual outbound. We have two virtual outbounds. The first one is for the bootstrap, and it's a very simple thing: we want access to our cluster through a simple URL, something like cluster.kafka.mesh, and that's what the virtual outbound provides.
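A pair of Kuma VirtualOutbound policies of that shape looks roughly like the following sketch. The hostnames and policy names follow the talk's conventions; the exact policy schema and the tag used for the per-broker template are assumptions on my part, so check the Kuma VirtualOutbound documentation before using this.

```yaml
# One stable bootstrap hostname for the whole cluster...
type: VirtualOutbound
mesh: default
name: kafka-bootstrap
selectors:
  - match:
      kuma.io/service: "*"
conf:
  host: "cluster.kafka.mesh"       # what clients put in bootstrap.servers
  port: "9092"
---
# ...and one templated hostname per broker, so the advertised listeners
# (cluster1-0.kafka.mesh, cluster1-1.kafka.mesh, ...) resolve inside the mesh.
type: VirtualOutbound
mesh: default
name: kafka-brokers
selectors:
  - match:
      kuma.io/service: "*"
conf:
  host: "{{.broker}}.kafka.mesh"
  port: "9092"
  parameters:
    - name: broker
      tagKey: "statefulset.kubernetes.io/pod-name"   # assumed tag carrying the pod name
```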
Essentially, it creates an addressable service, and I can customize the name on the virtual outbound without touching my Kafka deployment. The second thing: once the initial communication has happened and the bootstrap server has provided the connection, the other brokers need to be reachable. The advertised listeners are the metadata the bootstrap server returns to the Kafka client, and the information for those Kafka brokers will look like cluster1-0, cluster1-1, cluster1-2 — that's essentially the name of each broker — and this information also needs to be resolvable inside the mesh. So for each entry in this list, we return a particular URL. For our client applications, the only thing we need to provide is the bootstrap server; through the virtual outbound, it will return the correct addresses for our system, and we'll have a connection. With the UI that comes with Kuma, I can see that right now I have some data planes deployed in my cluster: the ones related to the Kafka brokers — brokers zero, one, and two — and a few applications, a consumer and a producer. The consumer and producer run in the mesh, so they can benefit from discovery of the nodes. Now, the next thing, which we also touched on a little: what else can the service mesh give us? Immediately, it gives us the ability to get to the logs. Simply by creating a traffic log policy, I can capture information from all the data planes.
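A traffic log policy of that kind is a small resource. This sketch assumes a logging backend named `file` has already been defined on the Mesh object; the policy name is made up for the example, so adjust both to your setup (the Kuma TrafficLog docs have the full schema).

```yaml
# Sketch of a Kuma TrafficLog policy capturing traffic between all data planes.
type: TrafficLog
mesh: default
name: all-traffic
sources:
  - match:
      kuma.io/service: "*"    # log traffic from every service...
destinations:
  - match:
      kuma.io/service: "*"    # ...to every service
conf:
  backend: file               # assumed backend defined on the Mesh object
```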
I can see, for example — let me make this a little bigger — I can go inside the log for the consumer and see what's going on in the consumer. Right now my consumer just reads the data from the topic and spits the result back out to a topic. We can see all this information, and all of it is done simply by enabling the policy. I don't need to put anything special on my brokers; since the brokers are already running inside the mesh, I get these things out of the box. So this is where we switch back — do we have a final slide? Yes, we do have a final slide. So that was pretty impressive, right? Those difficult things we brought up around discoverability and observability, and being able to capture logs properly — these are all very hard things when you're building out a service mesh. But we didn't touch on many things that happen here. Thank you, thank you. We didn't touch on some things that really bother me in general, but specifically in the Kubernetes world: we need to rely on the sidecar, and we still don't have a lifecycle for sidecars. Kafka relies on graceful shutdown — the broker needs to send some information to the controller saying, hey, I'm gracefully shutting down, because of whatever is happening. And in order to do that, I need to do some sort of hacks, like preStop hooks and shell scripts, which is possible to do, and many of us are already familiar with it, but it feels weird. Hopefully this will be resolved at some point, when we have a proper sidecar lifecycle. We also didn't touch much on looking a little deeper inside the traffic.
For example, Envoy has a filter available that can look deeper into the Kafka traffic, so we could, for example, collect information without the clients reporting it themselves — another benefit we can get with the service mesh and the Envoy filter. Maybe even something like upgrading the protocol: sometimes issues arise when you're running different protocol versions for your client and for your Kafka broker, and the broker needs to do extra work to upgrade the protocol. Those types of things are also important. And more importantly, this is why I personally like the operator approach in this case: an operator can programmatically handle some of the things around Kafka. For example, during a rolling restart you don't want to restart the controller at some arbitrary point — you want to restart the controller last — and with an operator you don't waste time building that automation yourself. Anyway, I'd like to invite you to check out some of the resources we prepared for this presentation. It's kind of like our... what is it called? You just pick one; we didn't put any labels, so you'll have to go through all the resources. It'll be great. Yeah, we'd love that. So, with this: this is Danica Fine from Confluent, I'm Viktor Gamov from Kong, and as always, have a nice day. Questions? If anyone has questions, walk to the middle; there is a microphone. Oh, everyone's put on the spot. So many questions, so many questions. Anyone? We'll also take complaints. If not, I'll ask a question: when you showed the demo, was Kafka running in the service mesh with the sidecar? Yes, yes — the Kafka brokers run inside, and the sidecar provides... oh, I didn't show mTLS.
So the brokers communicate through mTLS, and the clients and the server don't even know that they're actually talking to each other through a secure channel. That's pretty huge — and yeah, that was my next question. Mutual TLS is great. Okay, does anyone have any other questions? If not, we're going to wrap up, and we'll start in five minutes with Christian Posta. So just give us a few minutes to set up — and don't leave, please. Thanks, everybody.