My DevNation friends, how are you doing today — or tonight? Wherever you are: it could be like me in Europe, where we're ending the afternoon; it could be the US, where you're starting your day; it could be APAC, where you're entering your evening. Please say hello in the chat — I saw Didi Maddi is already there, and Sahil, that's awesome, thank you so much. This is our second show of 2022, I think, something like that, and I'm really happy because we have a really special guest, like every time. I will bring him on stage in a few minutes — he's there, I can see him waiting. Don't worry, Victor, I can see you. So please let me know in the chat where you come from; we love to know where you come from. And you know what, let's get started, let's bring my guest on stage. I'm so, so happy to have Victor here today — and there he is. Hi Victor!

Hello everyone, hi Sebastien. It's great to be here — it's awesome. It's already my second stream for today: I started early in the morning, and now it's lunchtime here in the US; I'm on the East Coast.

Okay, cool. When you say your second stream, is this your second stream for the community, or for customers?

Yeah, I'm participating in one of the Russian-speaking podcasts, and we just did the recording of an episode where we talk about job interviews — how to do a job interview and all that. So yeah, it was pretty awesome.

Okay, yeah — because for me it's also the end of the day, and to be honest this is my seventh hour in a row of streaming. I streamed for customers before, but this is my first public stream since I started this morning at 10 a.m. Still, I feel the energy from your side.

Oh yeah, you're doing pretty good for seven streams!

Because I love the show, exactly. Okay, so Victor, it's the same question I ask all my guests: Victor, who are you? Take the time you want —
the stage is yours for whatever time you need. Just introduce yourself.

Yes, thank you for this. My name is Viktor Gamov. I work as a principal developer advocate at the cloud connectivity company called Kong. Before that I was doing a lot of work around the Kafka community — I did developer relations at Confluent, so I'm familiar with the things around big data — and before that I was always around open source companies and open source communities, so I've been there for a while. Last year I was nominated for and received Java Champion status, and I'm super excited about this — it's recognition from the Java community for the work I've done there. So yeah, that's basically it. And you know, feel free to ask me any questions.

Yeah, sure. I'm just watching the chat, because I love the chat. We have O'Realy from Toulouse — I know O'Realy really well — we have DG Maddy from Austria, someone from Kansas City whose name I don't know how to pronounce, people from Berlin, Munich — wow — from India. I love it, because on this show we really get people from all around the world, every time, and it's awesome. So Victor, I have a lot of things to ask you, but maybe we can start here: you are a developer advocate, and I know we are slowly exiting this crazy road we entered in March 2020, but I would like to ask how you experienced this crazy period that started two years ago, as a developer advocate — we were used to traveling all around the world, and suddenly we were locked into our houses. I ask this question to all the developer advocates who come on this show; I just want to hear what you think about it. How did you experience that?

Yeah, it is a very good question. Before the pandemic hit, in the developer relations community there was always this floating question: can DevRel be successful when we're not
going to be traveling? And you would hear a lot of polarizing opinions, from "yeah, not much would be different" to "no, we cannot do this — we need to travel and meet our developers wherever they are." What I feel is that the pandemic — and in general, life, our work life — allowed us to be a little bit creative and come up with solutions to make our work work. One of the things it allowed me to do was spend some time learning things I always wanted to do more of, like streaming: understanding camera work, understanding the process of producing video content, understanding the life of YouTubers and how they do their stuff. Because, you know, there are people who have always been on the internet — they don't need to travel, they have millions of views, and their reach is incredible. One of the things we realized — and I think the developer community in general also realized — is that there are audiences you maybe cannot meet in person, but you can enable them through your live streams, you can teach them technologies through online work and virtual conferences, which is great, because we were able to meet people in different places. So I definitely suffered from not traveling, because I love to travel, but my work didn't suffer from it. I was experimenting with the podcasting format, I did some live streams — live streams with coding — I continued to do videos, maybe shorter versions, and I experimented with different formats: streams can be longer, videos can be shorter. I don't do TikTok yet, and I'm not planning to, but I've
seen many colleagues in the industry embrace this short format — short videos, maybe even Q&A content, in the format of TikTok or Instagram Reels. So there were awesome possibilities in how people used those.

Yeah. One thing I really appreciated when we moved online was one new factor: the chat. Usually I go to a conference, I'm on stage, I do my talk, and afterwards there are some questions. But online, while you are speaking, you have a chat, and people chat a lot; they start building a community between themselves, chat after chat. I really love that — it's a new level of community building that I didn't see before, because we couldn't do that face to face. If there's one great thing to get out of this virtual stuff, it is the chat. I don't know if you have an opinion about it, but it's one thing I always like to mention, because it really makes a difference for me.

It's great to have this interaction with the audience. Even if there are only a few people in the chat, it matters — because otherwise all you're doing is talking to a camera, and it feels good that there's actually someone who listens, except, you know, your family members who are also stuck in the house with you. So yeah, it creates this feedback, and at least it creates the understanding that — yes, you know the recording will be published and someone will watch it afterwards, but the interactivity of the chat, the conversation with live people, is always on point. I see some of the regulars from my channel, like Tony TV — it's great to have you here as well.

Oh, Tony TV is also a regular on my channel! That's awesome, to have some crossing of the streams, you know. We will come back to streams later. That's nice
fun. Yeah, it's nice. But maybe you could tell us a bit more about Kong, the company you work for now — because Confluent, I think most people more or less know what it is, and we will come back to Kafka and streams — but maybe you could tell us a bit about Kong and what you do there.

Yeah. With Kong we're trying to enable developers by providing tools so they can build connected applications, and we provide ways for them to do this through open source solutions. One of the first solutions is a quite popular open source project called Kong Gateway, which lets you extend proxy server capabilities by enabling certain pieces of functionality that you would otherwise need to develop yourself or find other ways to solve. Take, say, an application that you need to expose to the outside world: you put Kong Gateway in front of it, and you can enable different features — rate limiting, proxying, all those things — on the fly, so your application can keep serving traffic. A couple of years ago we started looking into ways to improve the developer workflow in the microservices world, so we open sourced a service mesh called Kuma, which is a CNCF project right now, that allows developers to build traffic governance and traffic shaping capabilities into their applications. When you're breaking down a monolithic application, you move from your code calling from one module to another to calling services over the network — so how do you provide resiliency? There are plenty of frameworks for this in the Java world, but believe me when I tell you, there are not many varieties of such tools available in other languages — things like circuit breakers, rate limiting, retries. We have tons of libraries in the Java world, and we can pack those into the application, but someone has to maintain them. So we were thinking: can we move this into the infrastructure layer? That's where a service mesh comes into play. We also like to support developers in their development journey, so we have an open source tool called Insomnia that lets you interact with your APIs — REST, gRPC, GraphQL — submit requests, try different types of requests, automate some things; there's a whole suite for working with the OpenAPI spec. So that's the stuff I do. At Kong we basically build everything that lets you build your applications and control the underlying life of your application: traffic management, observability of all the traffic, all that kind of stuff.

Good to hear. You mentioned service mesh — how does your solution, Kuma if I got it right, compare to the big one, let's say Istio, for instance? Does it work together with it, or is it something completely different?

The interesting thing about the modern service mesh stack — what we see right now, what we like to call the second generation of service mesh — is this: before, developers had to embed an agent as part of their application. What differentiates the second generation from the first is the existence of the so-called sidecar pattern: in front of your application there is a small process that serves the traffic in and out of your application. This sidecar is something many developers already know and use — it's called Envoy, and it's an open source project.
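As an aside, the rate limiting mentioned above as a gateway feature can be sketched as a toy token bucket — a simplified illustration with made-up names, not Kong's actual plugin implementation:

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: the kind of policy a gateway
    applies in front of an upstream service. Illustrative only."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec          # tokens added per second
        self.capacity = burst             # max tokens (burst size)
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                      # a gateway would answer HTTP 429 here

bucket = TokenBucket(rate_per_sec=1, burst=3)
results = [bucket.allow() for _ in range(5)]
print(results)  # first 3 requests pass (the burst), the rest are throttled
```

The point of putting this in a gateway rather than in each application is exactly what Viktor describes: the policy lives in infrastructure, so nobody has to reimplement it per service or per language.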
Many implementations, including modern gateways, use Envoy as their engine — compare it to NGINX, for example. Service meshes like Kuma, Istio, and some others use Envoy as their sidecar. The sidecar in this case is a piece of infrastructure that you don't manage directly; something has to manage it, and in the world of service mesh we call that the control plane. Your Envoy proxies are the data plane, and the control plane is what you as an administrator or developer interact with. The difference between Kuma and Istio is the way the control plane actually works; for the underlying functionality they both use Envoy. Istio has very opinionated ideas about the objects you use to interact with it. Istio started with Kubernetes in mind and was always a Kubernetes-centric thing, even though that is changing right now. Kuma came from a slightly different background: from the understanding that if a service mesh unifies how your applications communicate, it also needs to unify how you like to deploy your applications. So Kuma was designed from the very beginning to be what we call a universal mesh: you can span it across Kubernetes clusters, VMs, bare metal, across multiple regions, across different cloud providers — that was part of the design from the start. It was designed around building different zones that communicate, where you either have one control plane that controls everything, or multiple control planes in a more federated style. Architecturally, they do the same thing. And more importantly, we do support customers who use Istio, because a service mesh usually serves the traffic inside your data center, and to the outside world you expose something like a gateway — so we work very nicely with Istio on the Kong Gateway side of things; external traffic can flow that way. It's all about the details and about what kind of bits you put on the management side. Istio ships some observability tooling out of the box; we prefer to rely on open source tools like Grafana and Prometheus and provide some of the experience there, because we know many operations people don't care about a whole zoo of different tools — they just want multiple data sources aggregated through Prometheus and displayed in Grafana dashboards, for everything they need. Some other things around traffic management — the policies, the definitions of how applications communicate — are slightly different. Like I said, Istio is very opinionated, and — I'm not a huge fan of bashing competitors — but sometimes a policy can be a little confusing and convoluted. I did some videos where I was doing an integration with the gateway — Istio has its own gateway, built on top of Envoy like I said, that you can put in front to serve incoming traffic — and to marry together the gateway object and the service object your application communicates through, you need to follow certain conventions.

Yeah, exactly, Viktor, exactly.

But the good thing is that the work Istio did with this gateway actually paved the way for a standard coming into the Kubernetes world. Essentially, we have Ingress as the standard way to expose traffic from the outside world to our applications, and the work they did around the gateway heavily influenced a new standard called the Gateway API, which will also be part of Kubernetes' future.
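As an aside, the control-plane/data-plane split described here can be modeled with a toy sketch — hypothetical Python classes, not real Envoy or its xDS configuration protocol — where an operator sets a policy once on the control plane and every sidecar applies it to traffic:

```python
class ControlPlane:
    """Toy control plane: the operator-facing side that holds policy."""
    def __init__(self):
        self.policies = {}                     # service name -> policy dict

    def set_policy(self, service, *, retries):
        self.policies[service] = {"retries": retries}

class Sidecar:
    """Toy data-plane proxy: forwards calls and applies the mesh policy,
    so the application itself needs no retry code."""
    def __init__(self, service, control_plane):
        self.service = service
        self.cp = control_plane

    def call(self, handler):
        policy = self.cp.policies.get(self.service, {"retries": 0})
        attempts = policy["retries"] + 1
        for i in range(attempts):
            try:
                return handler()
            except ConnectionError:
                if i == attempts - 1:          # out of retries: propagate
                    raise

cp = ControlPlane()
cp.set_policy("orders", retries=2)             # operator configures once

state = {"calls": 0}
def flaky_orders():
    state["calls"] += 1
    if state["calls"] < 3:                     # fail twice, then succeed
        raise ConnectionError("upstream reset")
    return "200 OK"

proxy = Sidecar("orders", cp)
result = proxy.call(flaky_orders)              # retried transparently
print(result)
```

The design choice this illustrates is the one Viktor makes for Kuma versus Istio: both keep the data plane (Envoy) identical, and differentiate on how the control plane is organized and configured.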
We work with lots of different communities, and we have multiple customers who use one or the other. So yeah, that's what we do.

Awesome. I always love to talk about service mesh, because it's something that we at Red Hat try to push to our customers, and it's quite a big mind shift for people to accept. Sometimes we manage, sometimes we don't, but slowly... And sometimes it doesn't make sense for some customers — they don't need it, maybe they will need it later — but we plant the seed, as I like to say.

Yeah, and I think that's the goal of a developer relations team, because it's not a one-way street. It's something we need to explain: sometimes you don't need this — maybe you have an old monolithic application. But if you want to do certain things much more easily — I don't know, mTLS for example is usually something people struggle with. It's not that mTLS itself is difficult; it's rather the distribution of the certificates, and then the rotation of the certificates, the automation process — something there is never straightforward, different people like to do things in different ways, and doing it manually is an error-prone approach. With the automation tools that come with a service mesh, you can do this. And in that case, rather than putting — how do they say it? — lipstick on a pig, you're solving a bigger problem: decoupling the applications and using the service mesh as a means to get granular control over the traffic between them.

Great. Let's switch gears — service mesh is great, but
let's talk about streams, Kafka, things like that, because I know that's a topic you love. Maybe I could start by asking: what is event-driven architecture for you? Is that a good way to get you started on the topic? You explain what you think about it and we can go from there.

Yes, of course. In the traditional approach, where applications are developed and designed in an imperative language, there's this notion that you have a function that you call, you pass it a set of parameters, and the function returns a set of results. The same approach was carried over to the request-response pattern, where we abstracted away what we're calling: we started using HTTP as a transport, and one service calls another service. But as they say, this creates a deeply coupled application, even though it's distributed — every time you need to call some external service, you have coupling. Obviously, the service mesh we just discussed lets you decouple this somewhat and abstract how services communicate. But there's another approach that says: I don't need to call another service directly. If another service requires some information from me, I would rather publish that information somewhere, in the form of an event — something happened here, I will form a fact about what happened in my system, and I will share that fact with the rest of the system. Whoever is interested can consume it, and they don't need a direct connection to me. That way I can follow my own cycle of deploying my application, I can be down sometimes, and I can improve — add new features faster, and things like that.

But with this approach, a third party comes into play, and this third party needs certain capabilities. First of all, it needs to be persistent: the data should be available at any time, for everyone who wants to consume it. Second, the system needs to be fault tolerant and highly available, because it will be the central place for exchanging information between systems — it needs to be there all the time. That's the whole point: we're trying to be more resilient in communicating with other systems, and we don't want to introduce a direct dependency. And it needs to be fast, because once it potentially becomes the central nervous system of your data, it needs to be fast while serving many different consumers and producers.

This is how we arrive at Kafka. Kafka lets us store all these events in a durable fashion; Kafka has a built-in replication mechanism, so if certain nodes fail, the data is still available on other nodes; and Kafka is very fast. Kafka also has an interesting capability that makes it different from other systems: the way it delivers messages. With messaging systems, we usually have a broker that pushes messages into your application — but then we have another problem: how do we make sure we're ready to consume the message? How do we implement back pressure and say, "hey, we're a little overwhelmed, can you not push so hard"? With the Kafka approach, the consumer is always the one checking: is there something for me? If I'm still processing, I'm not going to check yet — I know it will be there whenever I need it, because, as I said, a durable, highly available, and fast system is storing it for me.

Another problem we're solving is future-proofing. If some other service wants to use the data my service provides, it doesn't necessarily need to call me to get it — the data is already published in Kafka, so it can go fetch it and do whatever it wants with it. The data is already there, and it will not disappear after one service consumes it; only the retention limit comes into play. This drastically changes how we think about system interactions. Sometimes it might even look like over-engineering, but like I said, it's a future-proof design, and a design that allows scalability as you build out the system. When you have only two services communicating, it's easy to maintain them and their communication; but when you have tens of services that need to communicate — like I said, a service mesh can help, but someone needs to manage it. The way I like to put "service mesh versus Kafka": Kafka is more data-driven communication. It operates on a higher plane of the application, where we don't care about individual TCP or HTTP packets; we care about data. We send orders, we send confirmations, we send notifications — business-oriented data — on top of Kafka. And Kafka can actually be one of the elements inside a service mesh: you don't have to configure security on the Kafka side — I know many people struggle with that, because let's face it, configuring Kafka is not the most user-friendly thing — the service mesh can handle that kind of thing for users and handle the communication without explicitly imposing it on Kafka.

Yeah, that's great. I think what often confuses people when they start discovering Kafka is this: they think about the event-driven part, which is great, but they forget one important thing — we are talking about records that are persisted in Kafka. Event-driven, publish-subscribe systems have existed since before I started working in 2004, but in Kafka there's a record, and you can even replay things if needed. I don't know how you feel about it, but I often need to explain that this is very different from a simple publish-and-forget system: with Kafka we are publishing something that happened, it's persisted, and anyone — any component of your system — can use it whenever they need it. Am I wrong, or is that also a really important aspect of Kafka?

Yes, it is a very important aspect, and let me explain why, in my opinion. When we design a simple application and need to figure out what's going on in the system, we usually look at the application log. The application log is the source of truth for our small application: when we read it from the beginning, it lets us understand what state the system was in at the moment some exception was thrown. And this topic of state is very important for Kafka and stream processing. With these systems we usually rely on some database that stores the state, and this is where I like to talk about Kafka as a very special type of database, rather than a messaging system.
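As an aside, the ideas in this exchange — pull-based consumption, independent consumers that don't destroy the data, and replaying the log to rebuild state — can be sketched with a toy in-memory log. This is a hypothetical illustration, not Kafka's API: real Kafka adds partitions, replication, and persistence.

```python
class MiniLog:
    """Toy append-only log with per-consumer-group offsets."""

    def __init__(self):
        self.records = []             # durable, ordered facts
        self.offsets = {}             # consumer group -> next position

    def publish(self, event):
        self.records.append(event)    # appending never blocks consumers

    def poll(self, group, max_records=10):
        # Pull model: each group reads at its own pace from its own offset.
        start = self.offsets.get(group, 0)
        batch = self.records[start:start + max_records]
        self.offsets[group] = start + len(batch)
        return batch

    def seek_to_beginning(self, group):
        self.offsets[group] = 0       # "time travel": replay everything

log = MiniLog()
for event in [("order", 1), ("order", 2), ("cancel", 1)]:
    log.publish(event)

# Two independent consumer groups read the same records:
billing = log.poll("billing")
audit = log.poll("audit")
assert billing == audit               # consumption does not destroy data

# Rebuild state from history, like a table derived from a changelog:
log.seek_to_beginning("billing")
open_orders = set()
for kind, order_id in log.poll("billing"):
    if kind == "order":
        open_orders.add(order_id)
    else:                             # "cancel"
        open_orders.discard(order_id)
print(sorted(open_orders))
```

The last loop is the "reconstruct state by replaying the log" idea Viktor describes next: the events are the source of truth, and any current state is just a fold over them.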
If we take a quick step back: many people came to the Kafka world from the messaging world, so they have these expectations — okay, there's a publisher, there's a producer, there's a consumer, it sounds very much like a messaging system. And I think this — what's the word I'm looking for — I don't want to say confusion, but this understanding that comes from confusion, exists because the developers of Kafka were trying to explain things using the terms of their time. To paraphrase Howard Stark from Iron Man 2 — "I was limited by the technology of my time" — they were limited by the vocabulary of their time. They were trying to build something else, something that captured features from existing systems: messaging systems, but also databases and data warehouses. That's why they took some vocabulary like "topic", "consumer", "producer", but also took things from databases, like partitioning — the data is stored in partitions, which is the unit of work that distributed databases, and databases in general, use to explain how data is structured and stored inside. And a database gives us the ability to — I'm doing air quotes — "replay" as well, because you can go back to the database any time and get the data, and you know it will be there. So why not impose the same expectation on Kafka? But in this case, the pattern for accessing the data is slightly different: we cannot do request-response with Kafka. What we do instead is move our pointer to the very beginning of time, or to whatever point in time — exactly, it's time travel: we take the pointer, move it back in time, and we can replay all the events.

Why can this be useful? You know that when you store your state in your database, the state is preserved when you restart your application. You have this expectation; you don't question your database about it — when you need the state back, you just ask your database, even when you change your application code, maybe introducing new functionality that deals with this data. It's exactly the same thing here, just a different access pattern. With Kafka, you have something that stores the history, and you reconstruct the state from that history by replaying it. It's slightly different from what people are used to, but not by much. The creators of Kafka took ideas that have worked for years in database systems and pushed them out of the abstraction, into people's hands. I keep talking about this log thing because, if you've used a database and spent a little time learning how databases work, the underlying storage for a database is the transactional log — it can be a write-ahead log file where the database collects everything you do — and then the database management system turns that transactional log into something you as a developer can use: tables, views, stored procedures, all those things. The only thing the creators of Kafka did differently is let you choose your own adventure: you as a developer implement your database's "query language" yourself. It can be the standard Java consumers and producers, it can be something a little more complex like Kafka Streams, or something more abstract like ksqlDB. So we've gone full circle, as you can see — I keep talking about history, and we went from "let's replicate how databases store data" to replicating a streaming database on top of this distributed transaction log. History repeats itself.

Yeah, I love it. Hello — I put it in the chat, because you wrote a book. Well, you're a co-author of Kafka in Action, and we know how hard that is.

Yeah — I came prepared!

Awesome. Honestly, I still need to order it, but I will buy it.

I'd rather send you a copy.

Yes, send it to me! And hopefully we'll meet at a conference and you can sign it — I'll be looking for the signed version. I'm just looking at the chat here — Tony, well, he was mentioning time travel even before you did, so he knows you very well, I love it. He was asking about "DB-less" — so, database-less — could you explain that a bit, and how does it compare to serverless? I don't know if it's something you want to get into. Have you played with Knative Eventing and Kafka, for instance? I'm just trying to connect it with serverless.

So, actually, my friend James Ward and I, last year when the world started to open up, were super lucky — hashtag blessed — to meet up with an awesome community in London, and we ran a workshop on how to use Knative for writing event-driven applications. In short, what I love about Knative is that it is a platform, the same way Kubernetes is. Everyone can have their own Kubernetes, regardless of your vendor of choice — it can be OpenShift, it can be a managed solution, it can be something you take from open source and install using different tools like kubeadm, kops, and others. Regardless, you have this — but this only gives you a bare minimum;
gives you kind of like uh your assembly language now we need to have a platform that will be running your applications open shift was doing this like over for years like the um the what was the um what was the right name of the tool that allows me to build like as as a part of the um so i don't need to even build my images always like i just need to point open shift and it will create images for me and like ship it to open source to image as as to why yes yes yes source to image exactly um and idea of increasing developer velocity by like removing all these abstract obstacles in in the front came like very long time ago from from tools like hiroku so you just like pointed to your github repository or just like github repository you do hiroku push and your application all of a sudden is running somewhere this was very compelling and the community wanted to have something very uh very similar to what the source to image does what hiroku did for years in in in a standard incoherent way so that's why they come up with this platform called Keynative that allows you to do things and focus around your application um in a serverless fashion meaning that you don't need to worry about the infrastructure even though uh you personally as a developer don't need to your infrastructure people don't worry about this um one of the things with the Keynative that they come with the obviously serving this is your serverless platform this is where your application will be running but also um community understood importance of building this event-driven application so that's why the Keynative eventing it's the approach that allows to also abstract the way how this application will communicate inside this platform and right now there's a implementation and binding that use Kafka as a transport but this absolutely kind of like a separated from it can be any a transport for for your choice that will you know implement the specification and they use this standard called cloud events that 
allows you to also abstract the way the systems communicate. It reminds me of the good old days, like SOAP over JMS, step by step. Yeah, yeah. But I love the fact that Knative converts any incoming source to a CloudEvent; that's already a great thing. Yeah, and it's great, in my opinion. CloudEvents are great, they are easy to understand, and you can put any type of workload in there, it really doesn't matter: it creates a small, tiny envelope that, depending on the implementation details, can either be translated into headers, like in the Kafka world, or be part of the payload. Because there are two implementations for Kafka, at least for Java: there's one that supports the headers, and there's also an implementation that supports encoding the message into Avro, which is also a very popular format in the Kafka world. But in general, this is where the community is going in terms of unifying the communication between the systems, and your transport of choice can be whatever you want. Yep.

Yeah, that's great. I just want to mention, because it's the second time he asks in the chat and I don't want to ignore him: Asif is asking about your Java Champion journey. So we are switching gears a bit, but maybe you can explain how you became a Java Champion. It's a question we get a lot, because I receive a lot of Java Champions here, so maybe you can share your story. Yeah, of course.

So my journey to becoming a Java Champion started a long time ago, back in Russia, in 2008 I guess. I was listening to podcasts; it was a very influential time. The Java Posse was the podcast I was listening to, and there were a lot of cool people there: Dick Wall, Carl Quinn, bless his heart, and I learned a lot of things listening to that podcast. It was very influential at the time, and I thought, yeah, it would be super cool to go to JavaOne one day and meet some of those people, and it was fantastic. So when I came to the States, I started participating in the local Java user group community. I live in New Jersey, and my local community was around Princeton; there was a Java user group run by one of my friends, who also became a Java Champion for running this community.

So the Champion program, it's not like we're fighting each other and whoever is the strongest or the most buffed gets the championship, hashtag Olympics. It's about championing Java as an ecosystem, as a community; we serve our community, and it's just a recognition that we're doing a great job. The people who run the local meetups, the people who contribute to open source projects around Java, the people who contribute to the JDK and participate in the JCP: those people are Java Champions, because they are promoting Java and talking about Java. So I participated in a lot of meetups, and at some point I even took over running that meetup; we did a few events in Princeton. I started participating in more activities, met a lot of people at JavaOne; I was always in the Java community, working around open source tools. At the time I started working at Hazelcast, which was a Java in-memory data grid, and there was a lot of involvement in JCache, which was a specification that was part of Java. We actively participated in the JCache specification; Hazelcast at the time supported JCache as one of the implementations. So, over time, still working in the
community, I did a lot of work in the Russian-speaking community. Since 2010 I've been running a podcast that is not 100% focused on Java, but there are many people there who are in the Java world, so I spend a lot of time with the Java community in Russia as well, even remotely; we did this remote stuff before it was cool, right? And a lot of the work I did was presenting some of the stuff around Java EE, like the WebSocket technology. I was a huge fan when it was announced as part of Java EE 7, and I did a lot of presentations promoting how to use WebSockets inside Java EE. When Java 8 came into play, I spent some time learning the Nashorn technology, which was the JavaScript engine built into Java 8; it was a great tool, and I think right now it's being replaced with something like Graal. So yeah, there was a lot of work in the community, just helping people learn Java. And obviously Kafka uses a lot of Java; all the examples in the book are in Java. So I just do things, and I was super happy when the community recognized my work and I was accepted as a Java Champion. It was super exciting. Like I said, the journey was long, but it's not about the destination, it's about the journey. Yeah, and being part of this community.

And I think, well, explaining your journey, you also give some tips, some examples, to people who would love to get involved in this. Yeah, let's say you don't get Java Champion for free, okay? It's not because you're cool, because we see you at conferences, it's not that; there's more to it, and I think you explained that pretty clearly. Yeah, just build: build open source projects. We have a lot of Java Champions who build awesome open source projects that serve many developers, and those projects don't necessarily need to be super influential, but at least they need to bring some value to some people.

Yeah. And Asif asked this question and he's really happy; he says that's inspirational, thank you Victor. And this is really what I love about this show: I have guests, we just chat, and they come up with great tips, great inspirational stuff, like you just did right now, and it's awesome.

Let me see, we have 12 minutes left. I don't know if you're used to this show, but usually I spend the last 10 or 12 minutes playing a game with everyone. You can stay on the stream, of course, and we will be playing the game today for five or seven minutes. You probably know the game, it's called Pac-Man, and I have a special version of Pac-Man that I hacked together one year ago; I will share a link in a few moments and everyone can play. The thing is that when you join the game, you create a record on a Kafka topic saying "I joined the game", and every time you eat a ghost, you write a record on a Kafka topic saying "I ate this ghost", and I have some Kafka Streams that do some leaderboard stuff based on the number of ghosts that you ate, okay? I use that a lot to explain Kafka, because it's great: it's Pac-Man, and people love it. And Digimadi knows it: the return of Kafka Streams.

So let me share my screen and I will explain really quickly what I did. I'm using OpenShift, of course, and, you know what, I need... let me just go here to the admin view... I need Kafka, okay? And what better thing to use if you need Kafka on Kubernetes? You probably know it; maybe you are contributing to it? You know, I was on the other side of the fence, so there was
like a proprietary thing; when I worked at Confluent, we built our own operator. But yeah, Strimzi: we used Strimzi in our workshop when we talked about Knative, and I found it was very easy to set up. I personally maintain, or try to help maintain, open source Helm charts for Kafka and the Confluent Platform, so there are community-driven Helm charts, but Strimzi does a lot of cool things; it's not only for development.

Strimzi is almost magic for me, because I install the operator and then, with two clicks, I say, okay, I want to have a Kafka, and I don't have to say anything else; I just say the word "persistent", I don't care about the config, I click two times, and I have a Kafka running, okay? That's why I use it. And then, of course, I have my Pac-Man project. I have two projects: the front end, the game itself, which registers when you join as a player and when you eat a ghost, and the Pac-Man aggregator, which is a Kafka Streams app that just does some aggregation and puts the result on another topic, and I have a leaderboard using SSE, server-sent events, to push that.

I can share the link with everyone, okay? I put it here in the chat and everyone can join the game. It will generate a player name for you; so I will be "a doll", whatever, okay, and I can start playing the game. And remember, you only score points when you eat a ghost, so make sure you eat ghosts, otherwise it won't trigger anything on the Kafka topic, okay? So let me open this. You and everyone, please join, and in a few seconds I will open the leaderboard, we will see if people have joined, and we will have it in real time. Oh, I lost, okay, but I ate one ghost, so I can open the leaderboard. Let me open the leaderboard and see how many people joined. Oh, four joined, okay, so don't be shy, click! Let me see how many people we are on the stream; I saw we were pretty busy, around 40 people. So, four players, and alan1924 is leading with seven points, okay? So people are joining; just come on the stream.

You can see it here: I have the leaderboard, and it's real time. I am basically computing, in real time, every time you eat a ghost; I know which ghost you have eaten and which player you are, and I have a pretty simple Kafka Streams app that combines the player-name and score topics. I do some computation, in fact it's just adding points, but it's enough to make the point about Kafka Streams; I send that to a third stream, and I create a global table to stream it to a front end. But it's Pac-Man, and people love Pac-Man. And I can see Victor really concentrated; so, what is your player name? It says alan-1042. Let me take a look; let me just eat one ghost, and let me see where you are, alan-1042... and game over. Oh yeah, 1042, you're third, you have eight points, so it's still alan1924 who is leading; please let us know in the chat who that is, okay?

Yeah, the names are randomly assigned; that was a choice by us, because, you know, I'm live streaming, anyone could join and pick any nickname they want, and I want to limit the weird stuff that could happen. Yeah, people make interesting choices with their internet identities and the stuff they do on the internet. So let me see, "a-grace-640"? I can't see you here in the leaderboard, let me refresh. But anyway, it's a really fun way of showing Kafka, and to be honest, I haven't published that yet
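The leaderboard aggregation described above can be sketched in plain Python. This is only an illustration of the idea, not the demo's actual code (which is a Kafka Streams application in Java); the event shape and player names are invented:

```python
from collections import defaultdict

# Invented records mimicking the "ghost eaten" Kafka topic:
# one event per ghost, keyed by the generated player name.
events = [
    {"player": "alan1924", "points": 1},
    {"player": "alan1042", "points": 1},
    {"player": "alan1924", "points": 1},
]

def aggregate_scores(records):
    """Fold the event stream into per-player totals, the way the
    Kafka Streams app aggregates the topic into a leaderboard."""
    scores = defaultdict(int)
    for record in records:
        scores[record["player"]] += record["points"]
    # Highest score first, like the leaderboard view.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(aggregate_scores(events))
```

In the real demo the same fold runs continuously: Kafka Streams would express it with a `groupByKey().aggregate(...)` over the topic, and the result is pushed to the front end over server-sent events.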
but I have another version where any move you make with your Pac-Man is recorded as a Kafka record, and when you're done playing, you have the option to see a replay of your game. What I do is just go back: I reset my offset and do a replay, with some timeouts, and I get a replay of my game using Kafka, with a Kafka stream inside; I'm using Quarkus, of course. It's really fun. I still need to polish it up before I can publish anything about it, but that's the idea: everyone can play, and for the winner, we could all watch his game together, just by replaying all the events that he played. So that's pretty cool. Yeah.

And what is Tony saying? "Need to make this channel permanent, Sebi." I'm not sure what you're telling me, but what I'm showing you, I still have to write some docs about it, but you can deploy what I just showed you on OpenShift. Maybe not everyone can get an OpenShift, but what you can get for free, with just a Red Hat developer account, is the Developer Sandbox for OpenShift: 30 days, no credit card asked, okay? And you get a complete OpenShift instance. There are some limitations, of course, but you can use the managed Kafka from Red Hat, you can deploy anything you want, you have Knative there and the operators installed by default, and after 30 days you can just renew it. So if you don't want to play with Helm charts or with minikube on your machine and you want the real thing without paying anything, the Developer Sandbox is something to look at.

Let's take a look in an OpenShift console, whether you can install Kong there. Oh yeah, is it an operator? Yes, it should be in the OperatorHub. Okay, cool. And Kong is also a Red Hat OpenShift certified technology, so yeah. Probably on the sandbox that won't work, but here I've got another cluster, my cluster on Azure, and here I have no limitations. If I go to the operators and search for Kong... oh, look at that, I got the Kong operator; I got two here, probably more or less the same, one from Marketplace. Yeah, Marketplace is probably just the way you acquire a license or whatnot, but that's the one you want. Yep. And I just click install here, and... yeah, the great thing is that on the OperatorHub you find everyone, including amazing stuff, including Kong. You can see I have installed a lot of things today, because I did a lot of talks for customers today... and where is Kong... there it is, Kong is getting installed, okay. And what cool things do we have? We can create a Kong instance, okay. To be honest, I've never played with it, but yeah, we can move this to the next time, the next session; I would be more than happy to hack around together. Yeah, exactly, that's what I love as well.

So, if I may advertise some of my work: I do live streams every Wednesday, well, I'm switching to a two-week cadence; a live stream called Kong Builders on the Kong YouTube channel. So if you're interested in this type of stuff, or your listeners and watchers are interested and want to learn more... I'm looking for the channel right now so I can share it. Kong Builders, yeah, it's youtube.com slash Kong Inc... oh, come on, I think I saw Kong Inc... if you have the link, you can share it with me right now in the private chat. Let's see, for some reason the links are not going through. Oh no, no, in Restream you have a
private chat, okay? You can share it with me there and I will reshare it on the public one. Yeah, there we go. Okay, awesome, I got it here and I put it there, okay, so please subscribe. So every first Tuesday... no, I don't remember what you said... every week you do a live stream? Yes, it is a live stream; we call it Kong Builders because we're building some stuff there. So yeah, I love it: something will definitely not work, and that's how we roll; we're building something and learning something. Yep.

Okay, hey, it's awesome. We are already one minute over time. I think we are all right, because no one is streaming after me, I hope so, because we are using a common channel from OpenShift, but I think no one is streaming after me. Just to be on the safe side, I will close this stream right now. Victor, again, it was really great to have you, and, as you said, I want to play with Kong, with the Kong operator; maybe we can do another stream, or you can invite me on your stream at some point. Yes, that would be fantastic. Hey, thank you so much, and thank you to everyone on the chat; we were quite a lot of people today. Thank you Tony, thank you Digimani, thank you everyone who was there. Have an awesome evening or day, depending on where you are. Please stay safe; we are getting to the bright side of the world right now, but still we have to be cautious. And Victor, hopefully we will meet soon, in real life, at a conference. Yes, of course. And, yeah, I wish you a really, really nice day, and again, thank you so much. Thank you everyone, it was great, and, my name is Viktor Gamov, and, as always, have a nice day. Yeah, thank you, awesome, and let's meet
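The replay feature mentioned a few minutes ago, resetting the consumer offset and re-reading every recorded move, can be sketched like this. It is a minimal stand-in that uses a Python list as the "topic partition", with invented move values, rather than a real Kafka consumer:

```python
# Invented stand-in for a Kafka topic partition holding recorded
# Pac-Man moves; the list index plays the role of the offset.
moves = ["UP", "UP", "LEFT", "DOWN", "RIGHT"]

def replay(log, from_offset=0):
    """Re-consume the log from an earlier offset, like seeking a
    Kafka consumer back to offset 0 to replay the whole game."""
    return [(offset, log[offset]) for offset in range(from_offset, len(log))]

# Replaying from the beginning yields every move, in order, with offsets.
print(replay(moves))
```

In the real demo a consumer would seek back to the game's first offset and re-read the partition, pacing the events with a timeout so the replay plays back at game speed.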