You know what, I think we'll go ahead and kick this off. We have 14 people in here already. Welcome, everybody. If you're in the right spot, you're with the application development with serverless and containerization track, and we're doing an Ask the Experts panel today. Of the experts here today, you see one on the screen and one is currently trying to get in right now, so we're just going to kick this off. Bilgin Ibryam, why don't you go ahead and take a couple minutes and give a little background on who you are and what you do.

Yeah, hello everyone. My name is Bilgin Ibryam. I work for Red Hat, currently as a product manager, but before that I was a consultant and architect for many years, working with customers and customer projects, doing primarily what we call middleware, or integration. I am also a committer on some Apache projects such as Apache Camel, and I have books around Apache Camel and Kubernetes. In the last couple of years we moved most of our middleware to Kubernetes and OpenShift, so I started using Kubernetes as well, mostly from a user point of view: I'm not so much interested in how you install and manage it, but in how you use Kubernetes and create applications for it. That's me, briefly.

Okay, and the one you saw flying in like Superman is Roel Hodzelmans. Roel, you get a couple minutes to tell everybody who you are.

Hi everyone, hopefully you can hear me. Give me a quick sign; yeah, okay, cool, that helps. My name is Roel Hodzelmans. I'm a principal solution architect in the Netherlands. I used to cover basically the whole region from a middleware perspective, which in Red Hat terms means OpenShift and everything on top, and in the last year I've been mainly focused on FSI customers, advising them on all kinds of things.
Eric and I had a couple of Summit sessions on pitfalls of the open hybrid multi-cloud, as well as microservices in general. I have the utmost respect for everybody here and I hope to hear all kinds of awesome questions. And please, everybody, stay safe.

All right, thanks, Roel. Again, my name is Eric Schabell. I'm a portfolio architect director here at Red Hat. I've been with Red Hat about 11 years: three years in the field doing what Roel does as a solution architect, and quite a bit of time in the middleware BU, the business unit where we make all those products you like. For the last almost three years I've been working at the portfolio level, taking a much broader look. Today my role is just to moderate, so hopefully I won't have to answer too many of the questions. You see a chat window there on the right; be careful that you're selecting the track chat, not the event chat. I'll try to keep an eye on both, but if you stay in the track chat we can gather your questions if you have any.

To kick it off: from what I heard from Bilgin, I'm curious what your take is on modern application development. Look at things like cloud native, where everybody's headed, and the containers part of this track; we just saw Daniel Oh talking about some of the Quarkus stuff and all the really speedy ways to do things. Really fun and fantastic, but when you get out in the real world, it's about tying together all the complexity you already have in place. People have been talking for quite some time, certainly in the Camel space, about microservices and things like that, but we have monoliths, we have microservices, we have containers. So give me a little tour to kick us off: what is your vision as you roll into this space, having your monoliths, dabbling in microservices as a company, and then where do we go from there?
Well, okay, that's a big question, but let's look at what's been happening in the last decade. We had monolithic applications, where we followed primarily service-oriented architecture, with probably something like an ESB as the implementation, and the main problem was that it turned into a bottleneck, both from a technology point of view and from a people point of view, because you had services leaking logic into the middleware, so it became hard to scale and update them. Then microservices gave us a set of principles that helped us split monoliths into independent components. But the early generation of microservices didn't have the right tooling, so you ended up creating a microservice that had to do its own configuration management, its own service lookup, and lots of other things; your microservice had to do too much. The way Kubernetes, cloud-native development, and all the other projects around Kubernetes changed that is that they give us the right abstractions to manage multiple microservices, and they take the infrastructure responsibilities away from your application. Now you can focus more on writing the business logic in your microservice and get the other capabilities from the platform: Kubernetes will do your health checks, configuration management, service discovery, and things like that, and if you need more capabilities you can add them as sidecars, et cetera, so you don't have to mess up your application. I think the microservices ideas and Kubernetes reinforce each other and help each other's adoption and popularity. I'll leave it there for now and see what Roel says.

Yeah, I was waiting for my chance, and Eric knows me quite well. The one thing I think most of the organizations I speak to need to think about first is actually not the technology. Like Bilgin already touched upon, one of the things we did wrong with SOA is that our organizations weren't ready for it. What I see people struggle with most is not the technology; it's understanding that the people and process that go with the technology are a big deal. I also see too many people think, okay, if I just insert the tool, all my problems are solved. If you look at Kubernetes in general and OpenShift in particular, self-service is key, right, with all the wait queues and everything; yes, the tool can help with that, but what it cannot do, if you don't have an established baseline, is give you a number for how much it helps. So if you don't do your due diligence and understand where you're coming from, you might think you're doing great, while in practice it's a little more difficult than that. So I'm completely with Bilgin: technology will help, technology will certainly give you tools we never had before, but that still doesn't excuse you from looking at your people and your process first and establishing a good baseline, so you understand what gains you're actually making.

Yeah, that's absolutely a good point; I agree 100% with that, and we've talked quite a bit about that one. We have a question in the chat, so I'm going to drop it in; it's not put in completely the right terms, and we already talked a little about taking these technologies and where you're going with this stuff, but: as a regular developer, can I, or should I, use containers even if my company doesn't have them in production? I'll add something to this at the end, too. You guys can kick this one off; who wants to go first?

Well, it's not clear here why Langdum is reluctant to use containers, so maybe he can clarify that, but containers and the related technologies such as Kubernetes have been around for over five years now, and there are lots of companies using them in production, so I don't see any concern from that side. But using containers is not just about Docker. You need buy-in from quite a lot of people, especially from the ops side; you will need buy-in from a big part of your company and organization. It's not like using a library, which is just a developer decision; it impacts lots of teams, so you will definitely need support from them. I see there is some follow-up clarification: "but I don't make a decision about what my company uses in production, the admins do." Yeah, that's the point: you will definitely need buy-in from the admins to use containers.

Well, yeah, it's a good point. There are two aspects to the story, right? One is: how do I, as a lowly developer, change a complete organization? That's an aspect we can help with. It's partly about having the right arguments, explaining why containers are better, and overcoming reluctance. So why should I, as a regular developer, use containers? Because I want to share with my peers. That would already be a gain. Sure, it wouldn't be a gain in production, but perhaps by establishing credibility they'll listen better to the arguments you're making. And there's the handover problem; I mean, Bilgin, Eric, and I have all had the "it works on my machine" calls, right, and all the buzzword bingo you can associate with that. Who's going to pay for fixing it when it doesn't work on the other machine? The other reason a developer should use containers, even if the company doesn't use them in production, is that we used to have the issue of one good desktop where we installed everything, and then one project would leak into another. Even if you just use a Docker container on your developer laptop, or VDI, or whatever you have, you already get the benefit of a self-contained unit. As soon as you nuke it, you still have the binary, so if you want to restart later you still can, and if you don't, it's not leaking anything into other parts of your system. So I'd say it's already a benefit just as a single developer, but if you go the right way, or at least a more sustainable way, I'd highly recommend using that knowledge as leverage to change your organization bottom-up, and obviously we're happy to help.

Right. As you can see, you're talking to two very enterprise-oriented, experienced developers who have been out there a while and seen the bigger setups. I'm going to take this and flip it on its head really quickly, and introduce what I think he's also trying to get at: when you're in a small organization, you're just a simple developer, maybe just getting started, and you don't have all this large infrastructure, you don't have container platforms to play with, what are you doing, what does that look like, and should you be doing it at all? I've seen a couple of really small shops where it's quite fun to see how it's being applied without forcing it all the way into production. I would love to see it go all the way to production, and they understand that it's a long-term project to change an org; we just talked about that. But imagine you have your code in something simple like GitHub and you're generating containers out of it, just to run them locally as a developer experience, whether it goes onto a container platform or not. Not only is the project in a code repository, so I can check it out, mess with it, mess it all up, throw it away, check it out again, and start over; you now have a complete, repeatable platform in a container to run your code on that's always going to be the same. So yes, there are a lot of reasons to be using containers even if they're not going to production; it's just going to speed up your cycle. Roel, got something? Yeah, I was
thinking that we shouldn't forget that maybe containers aren't the answer, right? Maybe serverless is. Depending on the rules and compliance you have to take into account in your particular organization, you could even skip containers and go to functions as a service or something similar, if that helps solve your problem. I'd argue that even a monolith works better in a container, but depending on what you're doing, maybe serverless is the best option for you.

Okay, I'm going to introduce a new topic here. Look at things like integration and data usage: traditionally you have a database or a storage layer somewhere in your organization or your projects that's the single source of truth; you hear that term a lot, single source of truth. But as we move toward these cloud-native, container-based solutions, we're seeing more and more integration along the lines of event-driven architectures, where the single source of truth is being moved up out of the storage layer into the application layer. What are you guys seeing around that? What do you think of it? And the biggest question I get asked around this: when is it a good idea, and when would it not be a good idea, to use things like event-driven architectures? Roel, why don't you go first? No, this one is all yours, Bilgin; this is up your alley.

Again, I didn't hear a concrete question there, more a broad topic, but maybe I'll start by looking at the types of messages and how we get to event-driven. If you look at message types, sometimes your messages can be temporal: you may have something like a stock ticker, or something coming from IoT, so you have messages you can afford to lose; these are short-lived messages. Then you can have messages that must be consumed exactly once, and messages that you want to consume multiple times. So from the message lifecycle point of view, you can have different kinds of needs. Even looking at it from that side, I would say you don't need to go directly to an event-driven architecture; you can still use traditional messaging, or even interactions that are request-response driven, without messaging at all. I'm not someone who thinks you should always go straight to event-driven architecture; you can choose different flavors of messaging. I wanted to start from there, and I think Roel may have something to add before we get to the data side.

Yeah, I was thinking, and that's why I said this topic is written all over you, Bilgin. It's mainly around the issue you have in the microservices world, regardless of whether you go event-driven or request-response: the data duplication issue. Every set of microservices, every value stream, has its own set of data. It helps to remember that the old data stores used to just sync via some syncing mechanism in the database, and an application developer never had to worry about where to write their state. What we see now with the current setup is that this doesn't scale anymore: the single-database source of truth, where everybody uses the common data model, just does not work. What you can do is split up your data, but then you have the problem: how do I keep my data in sync? How do I scale it across latency zones? How do I collect the data from various sources to give a single view? For instance, I'm working in a call center and I have to see everything the customer is consuming; I don't care if it's in lending, insurance, or mortgages, I just want a single overview. That's where we get into the really interesting world where integration plays a huge role. Whether you're using Kafka with MirrorMaker to consume across multiple environments, reading database logs with Debezium and change data capture to make sure everything is eventually consistent, or maybe even choosing to go full-on always-consistent and trading off availability or partition tolerance, I think the key thing is to think about what you're trying to achieve, test yourself to see if you're having, how do you say, tunnel vision (that's the English term; sorry, I was thinking of the Dutch one), and take a step back and ask: is this the right solution for the job? In practice I see a lot of microservices that are still talking to a common data model and are basically half SOA, half microservices. I think data, how you structure it, how you collect it, and how you keep it in sync, will be key for everybody who wants to make a successful play in application development, serverless, containers, and all that jazz, all the hype words. That's one of the points I'd highly recommend people think about.

Okay, that brings up a nice topic. Let's segue from that change, with messages and data moving left, right, and all over the place, and take it into the cloud. Now we're taking our cloud-native thing actually into the cloud, not just our local development environment. I know you and I, Roel, have had lots of talks about how cloud providers charge a lot of money to move data around. Maybe we can get a little input from Bilgin: what are you looking at, and how are you designing these messaging systems when they're hosted in the cloud, where transferring data in and out tends to cost quite a bit of money?

Yeah, that's a good question. Data has multiple dimensions, the multiple V's as they're called, and one of the interesting ones is velocity. When you move to a cloud, one of the hardest things is probably getting your data there and processing it there. If you look at the cloud providers, they all have tools to move your data from on-premise to their cloud; they all have some kind of migration utility from an on-premise database to their database. But from there on, there are not many tools to move it to other cloud providers or back to on-premise. The project I'm working on right now is Debezium; Debezium is an open source change data capture project that can help you move data between different cloud providers, and that's something viewers should have a look at. Another related area I've been involved in is data abstraction and data virtualization, and one of the common needs there is processing data where it lives. Your application may be running in the cloud, but the data will typically come from some on-premise data sources, and you have to combine that with data sources in the cloud; there's a big need for processing data where it is, without moving it around. And there are other considerations, specifically around security: can you move data to a specific region or cloud provider at all? If you move it there, how do you encrypt it? It's not only about encrypting the data in motion, but also encrypting it where it's stored. So there are multiple areas to consider; we could expand into any of these.

Yeah, that's a good point. At Red Hat we're working on quite a few projects here. As Bilgin rightfully mentioned, there's Debezium; one of the things I'm most excited about is that we can, and keep me honest here, Bilgin, it's your project, read the database log and make sure the messages from there get put onto Kafka, and Kafka can replicate that cross-cloud, no problem; we've got MirrorMaker and a couple of other options. Then you can use those events to make sure you have the right data on the other end, regardless of whether it's another cloud, the same cloud in a different availability zone, or on-prem; it doesn't really matter. The fact is, you need something to keep that in sync. Another thing Red Hat is investing in is Skupper, which is also a way to go across multiple clouds, connect them together, and make communication smooth and effortless between them. A third option would be Submariner and its associated projects, where we go layer three and keep separate clusters connected to each other. So there are plenty of ways to think about your strategy and how to leverage the key components of each cloud, like Cosmos DB or something similar, that are real unique selling points. Google has some awesome stuff, Microsoft has some awesome stuff, Amazon obviously has awesome stuff as well; what you need is a system that connects all of these together and uses the unique selling points of each and every cloud. If you think about your data strategy and, like Bilgin said, keep the processing where it's necessary, leverage the tool where you need it, but also leverage tools like Debezium and the others I just mentioned to connect everything when you need to, then you're truly doing the open hybrid multi-cloud, and then you're basically awesome.

And a little tie-in to that: you have special workloads, things like the analytics and the AI and ML stuff where you're churning over a lot of data, that you're probably not going to want to pump up into the cloud. Do you see people doing that in the cloud, or more of it on their on-premise data?

I see a little bit of both. If you talk about training models, that's typically something you need a lot of computing power for, but you don't need it all the time; for those kinds of things I can see an absolutely brilliant use case for the public cloud. When it comes to running the actual models, I see a mixed picture. I still see a lot of public cloud consumption, and why not? It's managed for you, it works really well, and if you have a good relationship with your cloud provider you can probably get a really sweet deal as well. But if you have a sizable install base on-prem and you have the specific hardware to do it, then maybe it makes more sense to do it on-prem. It basically depends on where your data is, and I'll tie back to what Bilgin said: you want to do your processing close to where your data is. If the data is on-prem, then running the model on-prem with proper hardware is probably better, or cheaper, and cheaper is a form of better, than running it in the public cloud. But honestly, I think for training models the public cloud is pretty tough to beat. Bilgin?
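The change-data-capture pipeline Roel and Bilgin described earlier, Debezium reading a database log and feeding Kafka, is typically wired up by registering a connector with Kafka Connect. Here is a minimal sketch of such a registration; the hostnames, credentials, and table names are hypothetical, and exact property names vary between Debezium versions:

```json
{
  "name": "onprem-orders-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "onprem-db.example.com",
    "database.port": "5432",
    "database.user": "replicator",
    "database.password": "<secret>",
    "database.dbname": "orders",
    "database.server.name": "onprem",
    "table.include.list": "public.orders"
  }
}
```

This JSON would be POSTed to the Kafka Connect REST API (by default on port 8083) to start streaming row-level changes into Kafka topics, which MirrorMaker can then replicate to another cloud or availability zone.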
Yeah, I agree with that. As you described, for the training part you need lots of compute on demand; that's the reason. And if you think a bit about traditional databases, they were optimized for both storage and processing, but now there's a move toward storing data on file-based storage: just move it to some cloud service and process it there. The storage is really cheap and you spin up compute only when you need it, so there is definitely a move toward data lakes and file-based storage formats such as Parquet.

Okay, thanks a lot, guys. I'm going to switch gears here, because I have a question that's underlined three times. Let's take it down to: I have these monoliths, this is how my organization has worked, and now we're looking at how to make this thing ready for the cloud and container-native. How are we going to approach this? What would be a good strategy? Would you just dump it in a container, see what happens, and work from there? Everybody would love to rewrite the whole thing, but that's not always possible, right? So if you want to take a safe approach over time, how would you do it? Why don't you start, Roel?

Yeah, I wanted this one. Actually, I'm a little bit torn. On the one hand I would say put it in a container, or even better, use OpenShift Virtualization (KubeVirt, obviously, in the upstream community) to put your monolith onto your container platform, and then start strangling it. But what you see in practice is that this might not be the best solution. First you need to ask: is the monolith actually a bad thing? Because a microservices architecture is basically a monolith with latency added. Do you need independent scalability? If the answer is yes, ask: can I separate my monolith into domains? If the answer is no, you might get a better result by just re-architecting it. If the answer is yes, then you'd be one of my key candidates to put it in a container, or in a KubeVirt virtual machine inside your container platform, and then start strangling the monolith. That basically means carving out functions and putting them into microservices, or maybe functions as a service, depending on the size, what makes the most sense, and whether you want to scale to zero. Once you've done that, you can slowly carve the monolith into a new form. In practice, what I mostly see is: put the stuff into a container, shove it onto the public cloud, call it a new cloud-native architecture, and do a little bit of greenfield. But I'm hoping Bilgin has better experiences than I do.

I'm not sure; I'm also very practical. When I work with customers, the starting point would typically be: if it works, don't touch it; let it run as long as it can. If you're not happy with how a monolithic application works, there's probably quite a lot you can gain even without doing microservices. Try to see where the bottleneck is in your release cycle, why it's taking three months rather than three days; just by doing some scripting, CI/CD, and automation, there's probably quite a lot you can gain while keeping the monolith. That's been described in one of the Red Hat blog posts, "the majestic monolith" or something around that title. If that's not good enough and you want to go to microservices, probably the most common mistake I've seen with customers is being overly optimistic and splitting one big monolithic application into 50 to 100 microservices. That's typically a red flag for me: one monolithic application should probably be split into 5 to 10 services at most, in my view. If you go straight up to 100, you're quickly getting into territory you have no experience with. You've been operating a monolith before, and now you're operating overly fine-grained nano-services; that's typically followed by the realization that you have too many small microservices and maybe you should merge some back, et cetera. So I would say start very small: start with the monolith, split it into a few microservices, and then there's a long journey toward cloud and serverless, but that's a good start.

That's exactly the point. You're trading a single deployment unit, where most customers already have trouble doing lifecycle management on the monolith, and you're adding not just one extra but dozens of extras. Newsflash: it doesn't get that much easier unless you're going for advanced use cases and writing operators or things like that. So starting small is actually quite sensible; I'm completely with you on that one.

Yeah, and I also think it's worth mentioning, because we gloss over it a lot of the time: lifecycle management, and how your organization needs to be ready for it. I laugh quite hard when you say hundreds of microservices; most organizations start struggling with the first five. You don't grasp the concept at the beginning: hey, now that I've built this, I own this; I own the lifecycle, I own the versioning, I own the API. You basically become a little self-contained development unit doing business-to-business with the other development units that own other microservices; inside your organization it's as if you're dealing with a third-party API. If you split things out into hundreds, all of a sudden these little development teams have too much on their hands, and the standard organization wasn't ready for that. You see that a lot.

Okay, I want to capture that, because I actually have a use case, and Debezium was one of the places we positioned it. A customer went to a decentralized model, so all the regions of that particular customer have their own databases and their own set of services, and now they want to bring it back together into a single point of truth, because if one of the regions has an outage, none of the others can take over; they can't access each other's data. But we also can't change the application, because the application is a monolith that was built 10 years ago and nobody dares touch it anymore. So this is where we're positioning Debezium to read the logs of the database (we can't sit between the database and the application, because we don't know what might happen), bring the data back to a centralized point, and then start carving it up the right way. So actually we're not doing microservices; we're bringing it back to a common data model first, to fix the issue that they're not in control of what the regions did with their data. As Bilgin said, if you go all out, all microservices, you could wind up like this particular customer, with all kinds of small things, nobody keeping track of what the others are doing anymore, and all kinds of issues when you want to combine the data back again for, for instance, your single point of truth.

What he said is completely, utterly awesome. I'll just say I maintain a spreadsheet, microservices.fail, and there are hundreds of posts there about microservices failing. From what you're describing, it seems maybe the reverse cycle has started, going back to monoliths; I don't know.

Maybe, to put a label on it, we went over the peak of inflated expectations and now we're in the trough of disillusionment, right?

That's a perfect segue into wrapping things up. We have a little time left, so let's talk about this: you just described the process of going out to remodel, breaking things up, and then, wait a minute, maybe we went too far, let's go back. What about when you encounter a monolith where you say, okay, breaking this up doesn't make a lot of sense, but I can improve it a little, so that instead of release cycles that take four or eight months, I get them down to three months or two months? Am I happy enough with that, the fast-moving monolith, as we like to call it? What are your thoughts? Is that something you want to leave in place, or do you have the realization that someday down the road you don't want to end up with a monolith that nobody knows anything about?

Yeah, but in the end, when you look at the problem, there are only three things that matter. One is revenue: how much does it bring in? The second is how much does it cost, and the third is what's the risk. Obviously, if something increases cost, but at a lower rate than it increases profit, the extra cost is worth it. So it's not always about decreasing cost, decreasing risk, and increasing value; it just depends, and to me the trade-off is what matters. Does it make sense to run that monolith in a container, maybe as multiple instances of the same monolith? It depends. Does it bring value? Revenue might be a bit too money-focused, but do I get a business gain out of it? Do I get a business gain out of decomposing the monolith? I don't know; it depends. Every application has its own profile, its own way of doing things. I would also look at talent retention as a factor, which is something lots of people forget, and it ties back to risk: if you only have outdated monoliths running on outdated legacy engineering, I don't care how much you pay or how awesomely you documented the thing, nobody wants to work on it anymore. So if you make the trade-offs and keep all of these things in view when you plan it, I think a monolith can do awesomely. But I firmly believe that a serverless function, perhaps written with Quarkus, is equally awesome. It just depends; you're going to hit the wall one day, right? We've all been there.

My view on this: I like collecting, let's call them, patterns, what scales in terms of development, what's repeatable, et cetera. And a migration from monolith to microservices is like a snowflake: it's unique at every customer, every application is different, so there are not actually many things you can call best practices and apply again and again. What I'm getting at is that probably the best thing is, if you can make the monolith run fast, don't touch it too much, and look at serverless and things such as Quarkus, which in my view gives new life to Java, for greenfield projects. Innovate with serverless and Quarkus on new things while you still have your monolith running there, and if possible try not to split it into microservices or touch it too much. Developers love working on greenfield stuff anyway.

I think that's a perfect place to wrap it up, and a nice segue into saying goodbye to everybody who joined us. You can reach any of us online; we're fairly easy to find, so feel free to reach out if you have more questions after this or anything you want to touch on. Bilgin can even tell you about his book, where he has lots of Kubernetes patterns to share with you; really interesting stuff. I'd like to thank Bilgin and Roel for joining us and spending an hour here with everybody. See you all next time; take care, everyone, and be safe. Thanks!