Okay, so thank you very much for coming, everyone, especially on a Sunday, the last day, with probably very few talks left. So thank you for being here. I'm Rui Vieira, I'm a software engineer at Red Hat, and today I'm going to talk, along with my colleague Mike McCune, also at Red Hat, about composable microservices for streaming analytics. First we're going to give a quick historical overview of where this concept of microservices comes from, which is not necessarily where most people think. Then we're going to briefly talk about the Kappa architecture and about streams, and then about how we can view these streaming microservices as primitives which you can use to build complex analytical systems. Finally, we're going to show some examples of streaming microservices in action.

To briefly introduce the concept of microservices, in that overloaded sense of the word, we can trace it back historically to Ken Thompson, one of the creators of Unix, and to the Unix philosophy. The Unix philosophy has many interpretations, but here is a popular one, from Peter Salus. It basically states that we should strive to build systems whose components do one thing and do it well, that work well together, and that handle text streams, because that is a universal interface. As you'll see, these principles were originally applied to the Unix system, and they work for binaries running on a Unix system, but we can generalize them to modern architectures as well, as we'll try to show you.

A simple example of this Unix philosophy is pipelines. Pipelines allow us to chain the inputs and outputs of different processes, and in this case that's exactly what we're doing: we're reading an input file using a process called cat, which I'm sure everyone knows, we're sending the output of that process to a text-processing program, then we're sorting it using another program, and finally writing it to disk. Something like `cat input.txt | grep error | sort > output.txt`. One advantage of laying things out this way is that it's very composable. If at any point we want to do some further processing, say remove any duplicate lines, we can just insert a command that does exactly that in the middle of the pipeline: `cat input.txt | grep error | sort | uniq > output.txt`.

A modern architecture that follows, very loosely, this kind of Unix philosophy is the microservice architecture. So what is a microservice in today's context? Ideally a microservice has many things in common with the Unix philosophy. We tend to write programs or processes that do one thing and do it well. We try to write programs or services that work very well together. But this time we're not using text streams: we're generally using well-defined APIs over the network, because that is our universal interface.

So let's look at some of the commonalities between microservices and the Unix philosophy, and at some of the technical differences. One of the differences is that before, we were dealing with programs, processes, binaries on disk, and those were our computational units. In the microservice world we're dealing with some kind of server: something that's living, running by itself, and communicating via some kind of protocol like HTTP or RPC.
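To make that concrete, here is a minimal sketch (not from the talk's slides) of a single-purpose service with a well-defined HTTP API, using only Python's standard library; the task and the port are hypothetical choices for illustration:

```python
# A minimal "do one thing" service: it uppercases whatever you POST to it.
# The task and the port are illustrative, not from the talk.
from http.server import BaseHTTPRequestHandler, HTTPServer

class UppercaseHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)          # read the request payload
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.upper())          # the one thing it does

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), UppercaseHandler).serve_forever()
```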
Another difference is configuration. The way to configure command-line programs is obviously the CLI, the command-line interface: that's how we pass arguments, data, or configuration to the processes. In the microservice world, we usually configure a process at deployment time using JSON or YAML, some kind of configuration file, and also through the API. Since the API is well-defined, it allows us to configure the running microservice at runtime: say you have a REST service, we can specify which data format we want back, or send extra parameters, and so on.

And finally we have communication, which is quite different as well. In the Unix world we communicate via pipes or redirection, and in the microservice world we're actually using the network for communication between the microservices.

So why should we use microservices? How many of you are familiar with microservices and microservice architecture? Okay, so some of you are familiar. For the ones who are not, I'm going to do a very quick overview of the advantages of using this kind of architecture.

We're going to start with the simplest one, which is code simplification. Compared to a monolithic application, microservices allow us to reduce the code surface area. That means our code is going to be specialized in a single task, and hopefully it's going to be simpler, because we can focus on that specific task; we don't have to worry about how it fits into the global overview of the entire application. We have a well-defined API, and that's how we communicate with the world. This also reduces the cognitive load on the developer, because instead of opening a code base with millions of lines, you can focus on a very well-contained piece of code.

It also introduces a separation of concerns, and having that separation of concerns we can do things which are very useful to us as developers, like decoupling and unit testing. If anyone has had to set up continuous integration on a monolithic app, you know it's particularly difficult: it's very easy to overlook some subtlety in the testing scenarios, and there's always something you miss or never planned for. If you have a smaller component of code with a well-defined API, it's much easier to create testing scenarios or continuous integration for it.

It also allows for parallel development, which means that if you have a big team of developers, they can work on the services almost in isolation. If you have a well-defined API, defined a priori, you don't have to wait for another component of your system to be ready: you can mock data, you can simulate calls to the API, and you can make progress on a specific service without being blocked by someone who hasn't finished their part of a massive monolithic application.
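Going back to the configuration point for a second, here is a small sketch of what deployment-time configuration can look like in practice. The variable names are hypothetical, but this is the same mechanism, environment variables injected into the container, that you'll see used in the demo later:

```python
# Deployment-time configuration read from the container's environment.
# Variable names and defaults are illustrative.
import os

KAFKA_BROKERS = os.environ.get("KAFKA_BROKERS", "localhost:9092")
INPUT_TOPIC = os.environ.get("INPUT_TOPIC", "numbers")
OUTPUT_TOPIC = os.environ.get("OUTPUT_TOPIC", "evens")

print(f"consuming {INPUT_TOPIC} from {KAFKA_BROKERS}, producing to {OUTPUT_TOPIC}")
```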
This also allows things like simplified refactoring. For instance, let's say you have a bunch of services in your application and you want to change the internals of one of them: you find a better algorithm for it, or a better framework than the one you were using. That's quite easy to do in isolation. If the API is stable and well-defined, then you can just change whatever is inside the microservice, and if it passes all the tests, it should work fine with the rest of the system. Another thing which allows for this kind of simplified refactoring is the fact that you can have polyglot development. One of the problems of monolithic apps is that you're always going to try to find a silver bullet: a single runtime for the entire system, a framework that does everything. That's very difficult. In this architecture you can actually pick the best tool for the job. Say you want to write a REST server in Python, or write something else in Java or using Node.js, that's fine, you can do that. And if you want to refactor the code, you can rewrite the Python bit in Go, or hopefully the other way around.

So these are big advantages of using microservices, but it's not a silver bullet either. As with any architecture, there are some problems. For instance, you might have orchestration problems. Say you have a multitude of microservices, we're talking about, say, 2,000 microservices, which have to communicate among themselves. That can be a real problem to orchestrate: services might go down, and you need to take care of the plumbing. Versioning might also be a problem. We talked about parallel development, but what happens when someone working on a microservice introduces breaking changes? You have to know that your services are communicating with the right version of the API. Security is also a concern, as in almost any field: you have to make sure that you have authentication set up, that services can only call what they were originally meant to call, that services have the right responsibilities, and of course that communication is encrypted. And finally you have service discovery. In a fully automated application, services by themselves don't know that other services exist, so you have to have some kind of registry or catalog to find the microservices.

Fortunately, there are tools which, in conjunction with good practices, allow you to overcome some of these challenges. Platforms such as OpenShift or Kubernetes allow you to solve some of these problems, like orchestration and versioning up to a certain point, and also discovery, by giving you tools to deal with them, and coupled with good practices this should minimize most of the problems we talked about.

Another misconception is the idea that microservices solve scalability problems. That's not true by itself: changing your monolithic app into microservices is not going to magically solve your scalability problems. But microservices, being usually stateless by nature, are a natural fit for containerization, and on platforms such as OpenShift or Kubernetes that means you have an easy way to scale out your system.
So if you have stateless microservices, you can just increase the number of instances of a microservice and you don't have to worry about it. And even if your microservices are not stateless, if they have to keep state of some kind, or if the problem is not the actual microservice, say you have a microservice that performs heavy computational loads, you can still use a microservice architecture by, for instance, connecting it to a Spark cluster. If you're not familiar with Apache Spark: Spark is a distributed computing framework. Basically what it does is distribute the calculations, or whatever computation you want to do, over several nodes in a cluster. It does the processing in memory, and you can scale easily by adding more nodes to the cluster. So if you have a way of making your microservice talk to the cluster, and you're hitting a computational bottleneck, you can just give more nodes to the Spark cluster, and you're still using a microservice architecture.

A common name for this kind of stream-processing architecture is the Kappa architecture. The Kappa architecture, very broadly, works by disposing of the batch-processing layer of an application; what you do instead is deal with streams, which come from an append-only, immutable log. I guess the first example that comes to mind is Kafka: you use something like Apache Kafka, which turns events into streams, and now our microservices are going to react to those streams in real time. You don't have any kind of batch layer. You can do micro-batches, but you're not storing a huge amount of data, processing it overnight, and getting the results the next day; we're doing things with data in motion.

With the Kappa architecture we notice a pattern. Since our stream layer, Apache Kafka in this case, is immutable, what we're doing is consuming streams, processing them, and writing new streams. We're not changing the original stream in any way. The outputs of these stream processors are new streams, which can in turn be read by other services.

Stream architectures also have some challenges, obviously, and these are just four of the common problems of working with stream-based architectures. Latency is one: if one of your microservices introduces heavy latency in the processing, all the other services consuming from it are going to suffer. Stateful transformations are not trivial, especially if you want to do something complex; if you just want to transform the stream without keeping state, that's going to be easier, but sometimes you need stateful transformations. As always, security is a concern: data in motion should be encrypted, authorization should be in place, and so on. And stream reconciliation is a big problem, because you're always going to have failures at some point, and you're going to have to deal with them.
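The consume, process, produce pattern is simple enough to sketch in a few lines. This assumes the kafka-python client and made-up broker and topic names; the talk doesn't prescribe any particular client library:

```python
# Sketch of the Kappa-style pattern: read a stream, apply one
# transformation, append the results to a *new* stream. The original
# topic is never modified.
from kafka import KafkaConsumer, KafkaProducer

def transform(value: bytes):
    """Placeholder for the service's single task; return None to drop."""
    return value.upper()

consumer = KafkaConsumer("input-topic", bootstrap_servers="broker:9092")
producer = KafkaProducer(bootstrap_servers="broker:9092")

for message in consumer:
    result = transform(message.value)
    if result is not None:
        # a new stream, which other services can read in turn
        producer.send("output-topic", result)
```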
Now I want to introduce what we really came here to talk about, which is how to use microservices to build modular analytics architectures. An interesting thing is that usually, when you go to a microservices talk, you hear about how you can split business concerns: you split business logic, so you have the ordering microservice, the billing microservice, and so on. But here we propose to think of them at an even more finely granular level, as really small primitives of computation. If you look at a microservice as a really small unit of computation, with one really simple, single task, you start finding similarities between this type of stream programming and functional programming.

As we said before, the streams are immutable, so what you're doing is just applying some kind of mapping, a function, to an immutable quantity, and that's a fundamental principle of functional programming. These microservices can also be composed, like typical functional-programming composition: you can add microservices which read from one stream and write to another stream, and those outputs are the inputs of other services, a chain of transformations creating new streams. This is another pattern you find in functional programming. You can even think of referential transparency, as in functional programming: you can take one of these services and replace it by two composed services that overall do the same thing, and you'll have the same functionality without changing anything else in the application. And you have lots of constructs with microservices that give you building blocks and primitives like the ones in programming. You can have microservices that do stream splitting, which would be like conditional branching, writing to one stream or another depending on a condition. You can have microservices that do filtering. And you can have reduce operations: a microservice that consumes from several streams and combines them. Again, that's a stateful transformation, keeping some kind of state, an accumulator, while transforming them into a new stream.

I'll just quickly show a simple example, one of the simplest you can think of, to make this work. This is not a complex application, but it uses these primitives to build an analytic system. As a stream, you have some kind of numerical data coming in; it doesn't matter what it is, just a bunch of numbers. You want to create a service that calculates a mean and a variance of the data that's coming in, in an online way. This is a stateful transformation, but you're just keeping an accumulator, basically; it's like a reduce operation. Then you write those mean and variance values to another stream, which is read by a CUSUM, a cumulative-sum control chart, which is basically a way of detecting abrupt changes, or drift, in a series of values. But we're not trying to detect drift in the original data; we're trying to detect drift in the mean itself.
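To make the example concrete, here is a sketch of both stateful services. The talk doesn't name the exact algorithms, so treat these as assumptions: Welford's method is one standard way to maintain an online mean and variance, and this is a textbook two-sided CUSUM with illustrative slack and threshold values:

```python
class OnlineMeanVariance:
    """Accumulator-style reduce over a stream of numbers (Welford's method)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        variance = self.m2 / (self.n - 1) if self.n > 1 else 0.0
        return self.mean, variance          # the values written to the next stream


class Cusum:
    """Two-sided cumulative-sum chart: flags abrupt drift in a series."""
    def __init__(self, target, slack=0.5, threshold=5.0):
        self.target, self.slack, self.threshold = target, slack, threshold
        self.pos = self.neg = 0.0

    def update(self, x):
        self.pos = max(0.0, self.pos + x - self.target - self.slack)
        self.neg = max(0.0, self.neg + self.target - x - self.slack)
        return self.pos > self.threshold or self.neg > self.threshold
```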
So say we have a stable mean, let's say the number of visitors on our website, or whatever, and we want to know if it changes abruptly. It might be going up, it might be going down, but as long as it's not abrupt, that's fine. And then we write those values again to a further stream.

Here you can see some of the advantages I mentioned, for instance refactoring. Let's say that now you're not interested in calculating the mean, just the variance. You can rewrite that microservice to do something else, and everything else in the system will work as before: you're still detecting a drift, or an abrupt change, on whatever comes from that stream, it's just a different quantity now. You don't have to rewrite anything else in your system. Or let's say you have a predictive model and you want to predict how that variance is going to look, say, in two days, or at any other interval. Then you just consume from the variance stream, perform the prediction with a trained model in a microservice, and you get an output which is written to another stream. And you can even pipe it back into the CUSUM and see whether the predicted value changes abruptly. So you can see how we can compose and create complex analytic systems just with simple primitives.

As I mentioned, some of these services are stateful and one of them is not. The prediction one is not; it's even idempotent, in a way. But the variance and CUSUM services are stateful: you can see them as accumulators working on a sequence of numbers.

So now I'm going to pass over to my colleague Mike, and he's going to show some examples of these microservices in action.

All right, so Rui did a great job of showing us the theoretical side of this, and what I'd like to do now is take some of that theory and show how we're doing it on OpenShift. How many people here are familiar with Kubernetes or OpenShift? A couple of people, okay. So Kubernetes, and by extension OpenShift, because OpenShift is an enterprise packaging of Kubernetes, an open-source enterprise Kubernetes, is a system for orchestrating containers across a cluster of physical nodes.

Okay, so we'll look at this. This is the generalized architecture that we're going to play with. We're going to have two topics on our stream, and we'll be using Kafka to do this. Who here is familiar with Apache Kafka? Okay, a bunch of people. So we have a broker that's handling this message stream, and we have two different named topics. We're going to have a data-generating service at the top that pushes data into our first topic. Then we'll show an example of an application that just reads from that topic, just to take the data; it doesn't necessarily do anything with it. Then we'll look at another application that takes the data, does something with it, and places it onto the second topic, and another application that reads from that, so we can verify what's going on.
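The generator's source isn't walked through in the talk, but a minimal sketch of such a data-generating service, again assuming the kafka-python client and hypothetical broker and topic names, could look like this:

```python
# A data-generating service: push random numbers onto the first topic.
import random
import time

from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="broker:9092")
while True:
    number = random.randint(0, 100)
    producer.send("numbers", str(number).encode("utf-8"))
    time.sleep(0.1)  # roughly the ten-messages-a-second rate of the demo stream
```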
It doesn't necessarily do anything with it And then we'll look at another application that takes the data and does something with it and place it out onto the second And we'll look at another application that reads from that so we can kind of verify what's going on Delay between one a Okay, so but first I'm going to talk about the Kind of the technology that you'll see here So we're using OpenShift as our platform because that's kind of the developer platform that we like to use We're using Kafka as our message broker and Apache Spark as our compute cluster and Spark gives us a very easy way to Generalize operations across a distributed cluster of machines And we're using that in a containerized way so that we can really leverage the power of Kubernetes To give us a scale-out potential kind of across an abstracted hardware layer and then lastly I'm using Python for all the samples I wrote here just because I really like Python and I think it's kind of easy So how many people here are familiar with Python? Okay, now how many people are comfortable enough to look at a little code from Python? Okay, so this will be the first Example that will dive into when it I know if I hit it again, it'll go too far so This is the first application we're going to look at our data stream is going to be a series of random numbers Okay, so this is kind of like the mean example that we was talking about before a filtering application that's going to either Filter out all the even numbers or filter out all the odd numbers and it will play those onto a second topic So I'll show you real quick the filter function that I'm going to use and I'm using There's a wrapper application that I actually have that's doing the spark interaction But at the core of that wrapper application this is the function that I'm going to apply to everything that comes across the stream and because Things that you know what the data that we're dealing with coming off of Kafka is in a string format And we're going to play back on a string format So this function expects some sort of stream value and then it will return either a string value or a none Depending on if we want to play the value onto the second top So what we're what we're looking at here is the console of open shift and each one of these Pods as they're called here is a view into a container But this one called the generator is our random number generator and if you can see it's got a blue ring there That means that container is running right now And if I click into it you can see a little information here And I want to show this to touch on some of the topics that Rory was talking about you can see up here It says what the image is and there's a hash at the end of it and it tells us what build it was and You know where the source is actually this was built directly from a git repo and then deployed into open shift So I never built the image open shift built it for me And you that source line is actually the last commit message that came out of there So you can see how I'm already having a view into what version of the software I'm running so it's very easy for me to monitor that and then this at the bottom is telling me a little bit about What's exposed into this application? You know networking-wise and whatnot Now right now if I look at the logs for this, I won't really see anything because it's not printing out It's maybe has some error logs or whatever. That's all it's printing out So I'm going to start the first application here Then I'm calling the listener now. 
What we're looking at here is the console of OpenShift, and each one of these "pods", as they're called here, is a view into a container. This one called "generator" is our random-number generator, and you can see it's got a blue ring there: that means the container is running right now. If I click into it, you can see a little information here, and I want to show this to touch on some of the topics that Rui was talking about. You can see up here it says what the image is, with a hash at the end of it, and it tells us which build it was and where the source is. This was built directly from a git repo and then deployed into OpenShift, so I never built the image: OpenShift built it for me. And that source line is actually the last commit message that came out of the repo, so you can see how I already have a view into what version of the software I'm running, which makes it very easy to monitor. And this at the bottom is telling me a little bit about what's exposed into this application, networking-wise and whatnot. Now, right now, if I look at the logs for this, I won't really see anything, maybe some error logs or whatever; that's all it's printing out.

So I'm going to start the first application here, which I'm calling the listener. I didn't know how much time we would have for this, so I built this application ahead of time on OpenShift and I've scaled it down, so it's not running. What I'm going to do is scale it up, and as it starts to run, it's pulling the image and starting it. I'm going to go into this pod and look at the logs that are coming out of it. And I'm actually looking at the wrong topic right now, so what I need to do is redeploy this (I was playing with this earlier to try to get it working the way I wanted). The way that we're injecting information into this application is with environment variables inside the container; Kubernetes allows me to modify what's going on inside that container, and I'm using those environment variables to tell it where to read from. So the first thing I'm going to do is tell it to read from a topic in Kafka called "numbers", which is my random-number topic. This is one of the really nice things that I like about OpenShift and Kubernetes: you can see that it's automatically redeploying the application for me, and it's handling the cutover and the networking to make sure that the new one doesn't start until the other one is done.

Okay, so we can see (is that too small, can everyone see that?) it's spitting out numbers here. This is the log output, and these are just the raw messages we're getting from the stream. And so what I'll do now is start up our filtration service. I'm calling it "evens-filter" because it's going to filter out all the even numbers. Now, Rui and I work on a community project called radanalytics, which is focused on helping empower people to write these kinds of machine-learning applications in containerized environments, and what we're going to see is, as this application starts up... it's still pulling the image, it looks like. It's pulling the image because, when I built this, I was running on a seven-node cluster: there are seven virtual machines that I have running in Google's cloud right now, and that's the cluster that's running this (we used it for a lab earlier). So I built this image, and the image was built on one of those nodes, but the Kubernetes scheduler has decided to run it on a different node this time, so the image had to be pulled to that other node. So it's probably running now, and as we see it start up, some of the tooling that we created is going to automatically deploy an Apache Spark cluster for us and bind it to the application. So right now it's spawning the Spark cluster, and those images are being deployed. And if we look at the logs for this, what we'll see is the tooling that we've injected into the container; this is actually happening before your application runs. It's waiting for the Spark master... it's found the Spark master, and now it's waiting for the workers to come alive. The workers are starting to come alive now, and once the workers are alive, it will start running. And now it's starting to process the stream and apply that function to the stream.
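That wrapper is part of the tooling and its code isn't shown in the talk, so purely as an assumption, a hand-rolled equivalent using the Spark Streaming Kafka integration of that era might look roughly like this (broker and topic names are hypothetical; this is not the actual radanalytics tooling):

```python
# Rough sketch of a Spark Streaming wrapper that applies the filter
# function to every message on a topic.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

def filter_evens(value):
    # same function as sketched earlier
    try:
        return value if int(value) % 2 == 0 else None
    except (TypeError, ValueError):
        return None

sc = SparkContext(appName="evens-filter")
ssc = StreamingContext(sc, batchDuration=1)

stream = KafkaUtils.createDirectStream(
    ssc, ["numbers"], {"metadata.broker.list": "broker:9092"})

(stream.map(lambda kv: kv[1])            # keep only the message value
       .map(filter_evens)                # apply the single task
       .filter(lambda v: v is not None)  # drop the filtered-out messages
       .pprint())                        # a real service would produce to the next topic

ssc.start()
ssc.awaitTermination()
```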
So let's go back. And if anybody has questions about what I'm doing, please stop me; it's perfectly fine, we can discuss any of these topics. I'm going to go back to the listener now, and I'm going to put it back on that "evens" topic, so I can see if I'm actually getting the evens information. And we can see, again, it's redeploying for us. It's pretty quick, because this app starts up very fast. This kind of goes back to what we were talking about with "do one thing and do it well": it's very small, so it can start up very quickly. So, hit the logs, and now what you can see is, first of all, we've changed the message. Now I've structured the message that I'm playing onto the second topic; maybe this is how we've defined our API. And you can see it's just pulling out all the even numbers and filtering. So this is pretty simple, and a very basic example, but I hope you can see how you might be able to use this to do something a little more complex.

And that's what we're going to get into now. So let's look at something more complicated: transforming data and actually applying analytics to what we're doing. In this example, what we're going to be getting is these complex messages that look kind of like a social media update; maybe this is a Twitter update or something. We have an update ID, a user ID, and the text that came with it, something like "I don't care to eat another candy, hashtag Thursday, hashtag Halloween". So someone ate too much candy or something, and they want to tell the world about it. And what we want to do, because perhaps we're the social media provider, is sentiment analysis on each one of these, to see if someone's saying something positive or negative. Or maybe we want to learn whether people are using phrases that break our terms of service, or we want to find out if there's harassment going on. So we're going to use analytics to create a sentiment score that will tell us what it thinks about that statement.

And to do that, we're going to use Spark again, but this time we're going to use two Python packages called spaCy and VADER. spaCy is a kind of language-deconstruction tool that tokenizes the text into the parts of a sentence, and VADER is an analysis tool that tries to determine the sentiment of what's going on. And what we're going to see is that my workflow is very similar to what I was doing before. I have the generator running in a different project, and I have another listener; some of these applications are things that I use over and over again. So imagine the cat tool or the ls tool in your operating system: these are tools that we've built. This listener is a tool that we like to use with Kafka, because it's very easy to deploy this microservice, and then I don't have to worry about SSH-ing into a machine inside the cluster to get a command line to read the Kafka topic, because that's all hidden behind the networking that's going on in OpenShift. I haven't exposed any of this to the outside world; I'm only going through this console to see what's happening.

Now, if I... hopefully I didn't misconfigure this one. I did. So I'm going to have to deploy it again; what happened here is that I didn't reset this one to look at the source topic. So let's look at the source to begin with. And you can see, one of the things I like about this is that this app is just very quick to redeploy. It's not as quick as using the command line, but you could imagine: some people talk about the cloud as the new operating system, and if that's the case, then these microservices are starting to become the utilities that we'll use to pipe data together.
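Before watching it run, here is roughly what the per-message transformation could look like. The talk names spaCy and VADER but doesn't show this code, so the field names and the exact use of VADER are assumptions (spaCy's tokenization is left out for brevity); note that the update and user IDs are deliberately carried through, which matters for the data-provenance point later:

```python
import json

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def add_sentiment(message):
    """Attach a VADER sentiment score to an update, keeping its IDs."""
    record = json.loads(message)    # e.g. {"update_id": ..., "user_id": ..., "text": ...}
    record["sentiment"] = analyzer.polarity_scores(record["text"])
    return json.dumps(record)       # played onto the second topic, IDs intact
```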
So, look at the logs, and I'm going to pause this because it's scrolling pretty quickly, but you can see we're getting these messages like we saw on the slide: there's the update, the user, and some information to go with it. So now what I want to do is start up this sentiment generation, and I'm just calling it the transformer here. And unless it randomly lands on the same node again... it did, so you see this one started very quickly, and it's already spawning the Spark cluster for us; one of the workers is already up. So while that's happening, I'll relaunch my listener again to look at the new topic, and as soon as it... it may already be going. Okay, so it's not receiving any information yet, but what's happening right now, like we saw with the other application, is that it's waiting for Spark to spin up, and the other application isn't producing anything yet. But here we go. Okay, so the information is coming across; I'll just stop this quickly. You can see now, there's our message, but we've added the sentiment. So the application that we wrote used the sentiment analysis on our Spark cluster to apply this.

And the reason that we're using Spark to do this is that Spark is designed for distributed processing, so we can do a lot of generalized processing, and it gives us a very easy way to horizontally scale. In this case I'll do it manually, but there are tools that will automate this for you. Let's say that I was Twitter, and I'm getting hundreds of thousands of messages a second coming in. This stream is operating at ten messages a second, so it's not very fast, but if that were the case, it would be very easy for me to just scale up. So we say: I'm going to scale up my Spark cluster to have four worker pods running. These "-w" pods are the workers; Spark has a master-worker kind of relationship, where the master node controls the workers and distributes the work to be done out to them. So now, if the stream gets faster, we can scale this up, and if it slows down, we can scale it down. And there's a project that we have on the radanalytics site
that actually does this automatically. It looks at the logs coming out of Spark, and it sees when Spark says "I've got too much pressure, I need more executors to execute this", and it will tell Kubernetes, or OpenShift, to raise the number of executors until it sees that pressure go down; and when the pressure drops off, it can reduce that number again. So you can start to get automated scaling inside of your application pipelines, as it were. Now, you're not really going to see any output from that here, because the stream is just not going fast enough to really trigger anything. But what you could do, and we've exposed this, if we needed to examine what was happening: this is now looking into our Spark master, and we can see kind of what's going on. It's trying to grab an outrageous amount of memory, and there are four cores running on each one of those workers, so you can start to introspect into what's happening with your Spark cluster.

So at this point, now that I've created the second, transformed stream, perhaps I'll build applications that run on that other stream, or maybe I'll pass it off to another group so they can do it. And at this point, what happens is that the schema we're using on that topic becomes an API. So if I publish that schema within my organization, and I publish the brokers that this information is coming on, we could allow developers to get access to this and experiment with the data, take it to different groups, and show off different ways of interacting with the data. And this goes back to something that Rui was talking about, which is parallel development. Once I've set this architecture up, maybe my group is interested in doing the sentiment analysis, but perhaps there's another group that's interested in doing a more complex analysis on the underlying data. They could use the same data stream and start creating their own applications, and we can do this completely alongside each other, without needing to communicate about the APIs between our applications. The schema that's on the topic is what becomes important. Let me go back here.

All right, so let's talk about some practical concerns that come up when you're operating in this type of environment. Like I just said, message formats are going to become the API. In the past, if we were creating a REST server, we might use something like OpenAPI (are you familiar with Swagger or OpenAPI? It's a format for describing the API of a REST endpoint), or you might use gRPC, which has very structured templates that can tell you what the structure of the API is. But those are programmatic APIs. Here, the messaging formats become kind of a de facto API, a soft API: this is the schema you can always expect. Say it'll just be one number, and these numbers are coming from sensor data, perhaps on a specific device that's out in the field. So you say, well, we've got buses all around town, and we want to monitor some sort of g-force sensor inside each of them, to see if there's been an accident or something.
So you could imagine a stream for each bus that exists in your city, and applications on each one of those streams doing the mean calculation, looking for big deviations. As long as the message format is the same, and you publish that internally, many different groups could be creating value out of that data.

Another thing is brokers, topics, and configuration in general. This goes hand in hand with the message-format schemas: these will be something you'll need to publish internally. Or maybe you don't want to publish them internally: Kafka has a very deep security mechanism that allows gating who can have access to which streams. These are the ways you'll have to share the information internally, so that all your developers can either get access to it or not have access to it.

There's also data provenance, and this was something that Rui touched on a little. Say I have a message come into my system, and we saw this with the synthetic social media data: I had a user ID and a message ID. If I just took the text, added sentiment to it, and lost the user ID and the update ID along the way, then the applications on the next stream wouldn't know where it came from. They wouldn't know how to look back and ask: which user is this? What is this user's history? Do they have a history of making really negative comments, or really positive comments? Those are things your applications will want to do to bring the data together, so you'll have to be very careful about how much information you persist into the derived streams.

And then, getting into testing, that can be a really interesting situation as well, because you want to have good data streams to test from, and it's not always easy to use the data that's coming in live. But Kafka makes this really nice in that you can replay old streams. So you could test against streams that have already come in: say we want to change a machine-learning model that's doing some sort of predictive analysis; we could take some data that we saw the day before, run the new model against it, and see if anything's changed. And you saw how easy it was for me to manipulate OpenShift and deploy different applications. I could create those testing applications in separate projects and have them run against live data, and then I can do things like A/B testing or blue-green deployments, where I determine which one of these we think is working better for what we're doing.

And I think debugging is always tough, and it's even more so when you're in the cloud. You could see that with my access to those containerized applications: I wasn't shelling in, I was looking at logs, I was examining their health based on Kubernetes.
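Replaying is mostly a matter of consumer configuration. As a sketch, again with kafka-python and hypothetical names, a consumer with no consumer group that starts from the earliest retained offset will re-read whatever the topic still holds:

```python
# Replay an existing topic for testing: start from the earliest retained
# message rather than the live end of the stream.
from kafka import KafkaConsumer

def candidate_transform(value):
    """Placeholder for the new model or transformation under test."""
    return value

replay = KafkaConsumer(
    "updates",
    bootstrap_servers="broker:9092",
    group_id=None,                    # no group: offsets aren't committed
    auto_offset_reset="earliest")     # begin at the start of the log

for message in replay:
    result = candidate_transform(message.value)   # run the new model over old data
    # ...and compare against the recorded production output
```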
I was examining their their health based on kubernetes So you'll have to think about that as you're making your applications, you know, we always want to be putting good logging in our apps We always want to be putting out good, you know, exception traces or whatnot That becomes really important when those logs are your main view into what's happening with the application So I want to just review what we talked about today We talked about Kind of the Unix philosophy and how we think that's important to microservices You know kind of chaining from one to the next and using an easy format between them We talked about microservices and how you know, they we think they should be contained in a way Where you've got to structure with some sort of API a clean interface and the applications sitting behind it And we also talked about the Kappa architecture and how you can really leverage streams and the applications that sit between them To to really unlock your data and transform it in interesting ways and so this QR code will take you to a github repository where you can find Code samples for everything you saw today that I ran here and there's instructions So you can you can deploy this yourself and experiment and red analytics.io is the community Project that we work on you can find lots of tooling for OpenShift and Spark and a lot of tools that we're You know really trying to empower people to take these things and have fun and you know play with the data and make Here on things and if you'd like to get in touch with myself or with hurry here's our email addresses and our Mastodon IDs if you're on that network So I guess with that any questions Yeah, yeah, go ahead The Kafka cluster was already is already running actually So Kafka well I want there I had another project on there called Kafka and in there the brokers are running in there So they've actually been running, you know all day pretty much. I didn't just start it up now, but There's a project called Strimsy at Strimsy.io. They've done some great work, you know We talked to Jacob if you want some more information The it makes it really easy to deploy Kafka into OpenShift and into Kubernetes and you just take you deploy these scripts and within a minute or two You'll have a Kafka broker up and running so it is very I could have deployed it here I just didn't want to push the time good good question. Thank you. Yeah. Yeah, go ahead Yeah, sorry, so you're saying I don't know Any other questions? All right. Well, thank you very much