Maybe just a quick question: how many of you have written a Mesos framework or DC/OS service before? Max, yes, I know. All right, there's one more hand, pretty cool. So I hope we can show you the basics, and this talk is going to focus more on the challenges which arise if you want to write stateful services, but we'll also show an approach that makes it really easy to come up with new frameworks or new services. So for all of those who haven't done it yet, it's going to be an easier start than, for example, ArangoDB had. They were one of the first ones, and they had a pretty rough time, which we're also going to see throughout this talk. So, this is actually Ken here. Ken is a distributed systems engineer at Mesosphere and a dear colleague, and he was one of the people who started this SDK we're going to talk about in this talk. I myself am also a distributed systems engineer; I mostly work on the Apache Mesos project, but I also go out into the world, talk about stuff, and help partners, like for example Max over here, write those services and frameworks. This is why I really like that we have something like an SDK making it easier for people to get started, allowing us to get more services running on both Mesos and DC/OS. All right, maybe the first question: the title of the talk was writing stateful frameworks, writing stateful services. So maybe the first question is, why does this matter in such a microservice world?
I actually just looked at the top X Docker Hub images, and there's a pattern: if you checked a year ago or so, there were really few stateful services, but over time that changed. I marked all the ones which are a stateful service or a stateful container, for example MySQL, MongoDB, the Docker registry. So even in the Docker Store and Docker Hub, a lot of these repositories, a lot of the images, are stateful images. We're seeing that, despite us moving towards this microservice world, state is still really important, and as we'll see, state is also pretty hard. The second thing, if we talk about state, is that we see a lot of those big data frameworks, which was also one of the initial motivations to come up with something like Mesos: to enable all those frameworks to run on the same cluster. So the challenge right now is how we combine all those stateful services together with potentially all those stateful containers, and this is what we're going to explore a little bit in this talk. Talking about all those stateful frameworks, there's a pretty nice and pretty hyped term right now, which is the SMACK stack. The SMACK stack stands for Spark; Mesos, which is being used to run all of this; then A for Akka, which is used for actually writing your applications, an actor-based framework for Java and Scala; then we have C, which stands for Cassandra; and at the end we've got K for Kafka, the distributed message queue. So this is a pretty common stack right now.
We see people deploying it for analyzing or processing big data sets, or also in the stream processing world where they want to analyze data. The challenge we have then is running this without Mesos. If you just have a SMACK stack, you end up, and we see this with a lot of customers too, subdividing your cluster, partitioning your cluster. So you have a number of nodes where you run your Spark stack, a number of nodes where you run your Cassandra stack, and so on, and the resource utilization is pretty low, because you really need to size each partition for maximum performance, so you're wasting a lot of resources. If you go with this SMACK stack, and this is also why I believe it's such a trendy topic right now, you can consolidate all of this: with Mesos or DC/OS underneath, you can run it on fewer nodes and actually share those nodes between the different services. But while this is pretty cool, it means we have to write a framework scheduler for that. We have to write framework schedulers for Cassandra, for Spark, and all those services. The good news for you is that for the SMACK stack we've already done that, so that's either finished or still in progress. That's nothing you personally have to be concerned about, but what happens if you want to write your own stateful service, your own stateful framework? As said, that's pretty hard, because in general with distributed systems you have to deal with so many different failure scenarios, and I really like this quote, like a Murphy's law for distributed systems. It's not just that there are simple failures you can detect, like "hey, this node just failed" or "there's a network partition". Often it's partial failures, which are really hard to imagine up front when you're designing a framework and make sure they're all handled. So it's really hard to figure out all those failures or partial failures in a distributed system; in general, even coming up with the distributed system itself is already quite hard. This, for example, is the architecture of ArangoDB, which is a distributed database, and we see we have different components: we've got coordinators, we've got database servers, we've got an agency. So there are a lot of components which I need to individually scale and which have different failover semantics. For example, when an agency node fails, I need to do something different than when a database server fails. So first of all, coming up with all of this, what transitions you want to have and what you actually want to do, is pretty hard by itself, and then writing that as a Mesos framework, or as a framework on any distributed scheduler, is another challenge, because you have to codify all those different failure scenarios. If you set it up manually, you would have an operator manual which tells you: hey, if the agency fails you do this and this, if a database server fails you do this and this. With a framework or service, you have to codify all of this knowledge in the scheduler. That's pretty challenging, and you might also have to deal with other problems, for example placement constraints. If you have a stateful service, you might also want to co-locate, for example, data. If we take Hadoop as an example: whenever you run Hadoop on HDFS, one of the core ideas is that you co-locate the computation where the data resides in HDFS.
So when you have stateful services, or the data processing on top, you end up with a lot more placement constraints than you would with stateless services. If, in contrast, we compare this with a stateless service, failover there would be really simple. It would basically be: my nginx failed, it doesn't have state, I can restart it on any node in my cluster. But it's different if we have stateful services. With a stateful service, I really would like to reuse the data, so I'll probably try to restart it on the same node. This is just an example to show how different stateful and stateless services are: stateless services are pretty easy, still challenging, but stateful services are a totally different story. Mesos is actually doing a lot to help you with that. Especially from a storage perspective, Mesos provides a number of primitives which help you deal with storage, and they fall into mainly three categories. If you start a default Mesos task, you're going to write all your data into your sandbox, and once your task fails, that sandbox is basically gone, so you're not able to recover your data. It's different if you start using so-called persistent, or local persistent, volumes. They are on local storage, so whenever your task fails or that node gets rebooted, you actually get re-offered the data you previously wrote to that volume, which is a pretty nice thing, especially if you have something like Cassandra, which internally already replicates the data. Note that if there is a total node failure and that node doesn't come back up, the data on it would still be lost, because it's just a local persistent volume; it's not stored in any distributed way.
It's just stored on that node. But as Cassandra internally replicates the data, we're happy with that, because we can re-replicate: we don't lose any data, as we can always use one of the replicas. If we're using legacy software, for example a single MySQL instance or a single Postgres instance we want to run in our cluster, similar to how we did in the pre-distributed-systems world, it's slightly different: if we only write the data to a single node, and the node where my MySQL instance was running fails, all my data would be gone. So there I need to do something different, and I would use something like external storage, which can either be, for example, Amazon EBS, or some distributed file system underneath that I use to store my MySQL data. This is just to show you that Mesos offers different solutions for different kinds of applications, because many applications have slightly different considerations or slightly different requirements when it comes to storage. Now, let's go into detail on why it's so hard to actually write a framework using those local persistent volumes.
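To make the volume options concrete, here is a hedged sketch, in the SDK's YAML dialect that appears later in the talk, of how a task might declare a local persistent volume. The field layout is modeled on the dcos-commons hello-world example; the pod name, command, and sizes are purely illustrative:

```yaml
pods:
  data-node:
    count: 3
    tasks:
      server:
        goal: RUNNING
        cmd: "./run-server.sh"          # illustrative command
        cpus: 1.0
        memory: 2048
        volume:
          path: "data-container-path"   # path the task sees its data under
          type: ROOT                    # local persistent volume on the agent's root disk
          size: 5000                    # MB; survives task failure and node reboot
```

A `type: MOUNT` volume would instead claim a whole pre-mounted disk on the agent. The third category, external storage such as EBS or a distributed file system, lives outside this spec entirely.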
Again, this is a state diagram we came up with at ArangoDB when we initially built the ArangoDB framework, and it's representative of the whole class of frameworks like Cassandra. Cassandra and ArangoDB use those local persistent volumes because they have internal replication and can live with node failures, but we still see it's pretty hard to make sure we actually end up with those local persistent volumes. What you have to do is first reserve resources, to make sure you own them for your role, for your framework: first of all the disk resources, but since you also want to be able to restart the task on the same node, you want to own CPU and memory resources on that node as well. That's a lot of state transitions, and writing that framework, we ended up with around 9,000 lines of C++ code. That is just really, really a lot for writing a scheduler to distribute a database within your cluster, to deploy a database on a Mesos cluster. Seeing that was a real motivation for us to reconsider the initial idea of Mesos. Mesos was initially also intended to be an SDK for writing distributed systems, and at some point we realized it was a great SDK for writing stateless services or stateless frameworks, but now that we were adding all those primitives, we might want to add something easier and simpler to use for framework writers. That brought us to the SDK. This was the motivation to come up with the SDK, and I'll briefly hand over to my colleague Ken here, because he's going to start the demo, since it takes a while to deploy, and then we're going to talk about what the SDK is actually doing. So, as mentioned, I was part of the team that started creating this, but I moved off it maybe six months ago or so, and it's taken on a completely new set of features, which is really cool.
Probably the big brain behind it, if you're familiar with our team, is Gabriel. He's added a tremendous amount, so if you're engaged with the community, you'll see his name come up; he has a lot of value to add to this. The other thing I'll add before I jump into this demo is that what we're doing with the SDK isn't a scaffolding builder. That's one way to manage things: you could generate scaffolding based on what you define. But another way is to actually provide default behavior: the behavior is there, and you're configuring that behavior. So let's get started. Right now I have a cluster established, which is what we're looking at here. It is DC/OS, but under the covers it's Mesos. We have established a custom universe, and part of that includes a framework that we created with the SDK, which we've just called hello-world. Of course, you have to have a hello world, right? Did you make the sacrifices to the demo gods? It doesn't feel like it. Let's see here real quick. Yeah, we're still online. All right. Oh, it didn't happen. So, real quick, we go into the repositories. If you know a little bit about DC/OS: we have this guy here, the hello-world-local repository, an established repository which has the SDK application and the service. Because of that, if we look in the universe, one of the options down here is hello-world. What I'm going to do right now is local to that environment; I'm in a Vagrant instance.
I'm going to just say "dcos package install hello-world", which is the package name for this thing, and we'll install it. I'm going to show you a lot more of the code behind it. The beautiful part of what we're seeing here is the installation of a service that was created with the SDK, and core to this, and we're going to jump into more details shortly, is that the main for this framework is one class, and that class leverages a YAML file that defines the behavior of what we're seeing here, including all the commands that are run. We'll go into much more detail, but I want to emphasize one point: this isn't "YAML replaces everything". This is an option. We have the ability to read this YAML file to create behavior around certain features, and we can also add code to it. So we have the ability to configure things with default behavior, we can add code on top, and the end result is we're launching the service. This is going to take a few minutes to deploy, so while we're waiting for that, we'll continue with the presentation.
You can see here that it's staging; by the time we get back to this, this staging service will have launched several tasks. We'll look at those tasks and at how we might make some configuration changes, either on the fly within the DC/OS environment, or as a developer creating a whole new aspect of this framework. Thank you very much for showing this, Ken. Let's maybe briefly dive into how the SDK is structured and what it tries to solve. It's built on top of DC/OS and Mesos, and as Ken mentioned, it basically gives you default behavior to write services, and we'll see you have a lot of options to specify more detailed behavior if you need something more special. But the general idea is that it should be as easy as possible to cover the default use case of writing a service. This is what's meant by having a default scheduler out of the box for providing stateful services, and if you're using that default scheduler, it's sufficient to just write a YAML file, and that is going to create all the code you need for your framework, for your DC/OS service. As we saw, we can then just install it with this one command, "dcos package install", for the new package I just created. And this is exactly the core of it, as I like to call it: showing that simple things should be simple. If I only want to deploy a single container, it shouldn't take me more than 50 lines of code, or 50 lines of YAML, for example. If I want to do more complicated stuff, if I need certain failover behavior, for example because I'm writing something with really complex dependencies and I need different failover scenarios for different parts running within my service, I might have to write some code, and those could be custom plans or
custom strategies. I also have the choice of not using the SDK to generate all that code for me: if I decide I want to write my own framework, I might start by just taking, for example, the offer evaluation classes from within the SDK and still write my own scheduler, but utilize a lot of this common functionality. When you write your own service, and especially when you write several services, you'll see that the basic parts are often the same, and you can take those classes from the SDK and avoid writing them yourself. There's actually a fourth option which is not on here: writing your scheduler from scratch, which is for example what we saw before with ArangoDB. When the ArangoDB framework was started, the SDK wasn't around yet. You still have this option if you need something which doesn't fit into the SDK at all. Maybe you want to write Marathon, for example; Marathon would not be such a good fit for this SDK, as the SDK is really focused not on meta-schedulers like Marathon but on stateful services. In that case it could still make sense to write it entirely from scratch, which means even more lines of code you'd have to write. But if you're within this domain, and as said, you just want the default scheduler, it's a simple YAML file.
You don't have to be concerned about what's going on underneath; you don't really have to understand too much of what's happening in DC/OS underneath, as you just specify, for example, "run this one Docker container for me". If you're dealing with custom plans and strategies, you might have to write a little code, and you should understand why you need these custom plans: they require more understanding of your application, you can't just take a Docker image and say "run it". You should really understand what's going on, what the dependencies between the different parts of your application are. But you still don't need specific knowledge of the layer underneath, like: how does Mesos allocate all those persistent volumes, do I need to reserve resources first? This is all taken care of for you by the SDK. The upper part, the default scheduler and the custom plans and strategies, is what we want to focus on in this talk, as the bottom one, writing your own scheduler, is probably worth more than 40 minutes on its own, because it's a really complex topic. So we'll focus on what you can utilize the SDK for, to generate all this code for you.
Here we have a simple hello-world example. It's basically just writing "hello world" into a persistent volume we allocated, and the interesting part is that this is all specified in YAML. But what's actually happening underneath, and we'll see this on the next slide, is that I could do the same in Java. So I don't really have the trade-off of either YAML, which would be more like a configuration file, or Java; it's actually both, because the YAML is not used as a configuration file, it's used to generate the same Java code we see on the other slide. Looking at the YAML, for example this part down here where we create the persistent volume: this would take so many more lines if I had to do it myself, because, as mentioned before, it implies we are reserving resources, it implies we're creating a persistent volume, and here it's a really nice specification, and the code doing all of this is generated for us in the background. All right, as mentioned, here's the same in Java. Everything I can do there, I can also do in Java, and there I have more options if I want more detailed plans, for example my own recovery plans. Plans are another really interesting topic. We saw before that we specified this task, so let me just switch back: here we have a task called server, and the server has the goal that it's running. If we now go to this plan, we see it's referencing certain steps; a plan consists of certain steps, and those individual steps can have different goals. For example, the goal we saw before was running, and running means that it keeps on running; it's a long-lived thing. There can be other goals as well, for example the format stage here. Format is something which you want to do once, in the beginning.
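The running-versus-finished distinction might look roughly like this in the YAML. This is a hedged sketch in the style of the dcos-commons specs; the pod name and commands are illustrative, not taken from the slides:

```yaml
pods:
  node:
    count: 1
    tasks:
      format:
        goal: FINISHED            # run once at setup, then considered done forever
        cmd: "./format-volume.sh" # illustrative one-time setup command
        cpus: 0.1
        memory: 128
      server:
        goal: RUNNING             # long-lived; the SDK keeps it running
        cmd: "./start-server.sh"  # illustrative serving command
        cpus: 1.0
        memory: 1024
```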
We don't want to keep formatting our persistent volume over and over again; we want to do that once in the beginning and set everything up. So that's a task which would have the goal finished, and finished means it's done once, and once it's finished, I'm happy, and all is good. Maybe it was clear already, but for an ordering strategy, HDFS is a great example. In order for this service to land somewhere in a Mesos cluster, there's a certain order of things, and it's important that that order is followed exactly. If you're not familiar with HDFS: the first thing I need is three journal nodes, and those have to land on three separate nodes. That's a strategy for how to land those things, and it's codified somewhere, but you can see here that the first phase is the journal phase. The next thing that needs to happen is a name node needs to land, and there are certain rules for that: those name nodes have to land one at a time, and they have to land on a journal node. The strategies for how that happens are codified somewhere, but the order of things is detailed here. So the concept we have within the DC/OS SDK is that you have a plan, that plan has phases, and those phases have steps, and that's where you detail the order of things. You can see here that it's all defined serially, and at the bottom you can see that there's a strategy for the data nodes, which is parallel. So the earlier things have to happen in a serial order, and it's important to maintain that order, but as soon as we have the infrastructure of HDFS up and running, in other words the journal nodes and name nodes are in place, they've been formatted, and we've bootstrapped the second name node so we have fault tolerance, then we can have any number of data nodes, and we don't care how they land. We could land 20 at a time and nobody cares, from an infrastructure standpoint. So that's a little more detail on specifying plans, and that becomes an important concept for how things are managed within the SDK. Thank you, Ken. And I think we have a slide on this further on. Failure recovery: what we saw here is a great way of setting up the service, but once the service is running, or even while we're deploying it, we have to deal with failures, because failures are just inherent to distributed systems. That's why the SDK also gives us a number of options to specify failure recovery, how I want to react to failures. In general, there are three kinds of states or pools. There's running, and while it's running all is good. But there's also stopped, and there's permanently failed. Stopped could for example mean I simply cannot reach it at the moment, and then there's a failure monitor, which I can also specify, basically saying that after a certain timeout I want this task to move over to the permanently-failed state, and then the recovery manager is going to be responsible for starting another task. I can also customize how that is done, because it's very different per service. For example, if I have a Cassandra node failing, I might have different considerations: do I maybe want to wait longer for it to come back up, and once I'm in the permanently-failed state, how do I re-replicate the data
so that I end up with three replicas again after one of them failed? Those are all points where you can customize the behavior, but for all of those steps there's also default behavior. So if you have something simple where you can just restart, you don't have to worry; if you have something more specific, with all those dependencies and considerations about how to re-replicate data, you have the option of doing that as well. Here we have again an example for HDFS, just showing how easy it is to have customized failover recovery. Again, it's a plan we generate, and down here, I hope you can see that in the back, it's just generating the steps we want to execute for this failure recovery. They're actually the same steps we would do when initially setting up the system: we would bootstrap and then do the server step as well. There's some code omitted, but if you look at it, you can see it in detail, and it allows you to really specify how you want to do failover recovery. How does it work internally? We already talked a lot about schedulers, tasks, and persistent volumes, so maybe just for those who might not be familiar with those terms: a scheduler is the component coordinating a service, like the distributed version of a coordinator running in a cluster. Then we have tasks, which run on individual nodes, and for persistence they might have those persistent volumes we talked about in the Mesos context, which will still be available after a node failure or after my task gets restarted. So these are the basic concepts. We also saw this hello-world scheduler, and once we have this scheduler, within DC/OS we actually use Marathon to launch it in a fault-tolerant fashion, because the scheduler itself could fail, right? In this case, Marathon is going to recognize "hey, my hello-world scheduler failed" and restart it. So Marathon is kind of like the init system for your cluster in this sense. The main idea of this SDK is that it gives you a declarative way of specifying what you want. We saw this YAML file before; here we made it a little more complex, as we split it into two parts, the hello part and the world part. Now imagine we're specifying an update here. As mentioned, it's declarative, so I'm not saying how to switch over; I'm saying what I now want. The highlighting was dropped on the slide, but what I'm saying is: I'm changing the CPU requirement for the server down in the hello pod, and I'm changing the instance count in the world part. So now I've specified what I want to reach, and what the SDK will do for you is generate a deployment plan. This deployment plan will consist of two phases: first the update for the hello phase, where we change the number of CPUs, and then an update for the world phase, where we increase the number of instances, and those are two independent phases. The interesting part is also what happens if there are failures in between. Just imagine we are done with our hello phase, we've updated the resource requirements, and now the scheduler fails. What's going to happen is: first the scheduler will be restarted by Marathon, then the SDK is going to do a task reconciliation, and it will recognize that the hello update phase was complete.
That is what we wanted to reach, we're in our target state for that phase, so we just need to consider the second phase here, the world phase, and roll that out. Internally, in terms of structure, we have a plan coordinator which generates and manages the plans, and we don't just have one plan within the system. There are the two we saw before: the deployment plan, which is the initial deployment, and then we also have a plan to deal with failover. Again, there are defaults given to you, but you can override them as well, in YAML and/or Java, when you want to specify more behavior. Down there, the steps are generated, and the plan scheduler takes care of executing them, making sure that all the Mesos machinery down on the right, the offers, the creation of new reservations and new persistent volumes, is taken care of for you, so you don't have to worry about that in your steps. One important aspect of what you see here is that those two plans run concurrently. We could be in some phase, or in between phases of a plan, when a failure occurs which requires a step in order to continue a phase. The recovery plan will kick in because of the failure, and then the deployment plan will continue. Those are concurrent plans; it's important to recognize that. And it's not just those two: if I want, I can even specify more plans, so I'm not restricted to the deployment or recovery plan.
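As a hedged sketch of what declaring such a plan can look like in the YAML dialect, modeled loosely on the SDK's HDFS example; the exact keys and phase names here are illustrative rather than a definitive spec:

```yaml
plans:
  deploy:
    strategy: serial          # phases execute one after another
    phases:
      journal:
        strategy: serial      # journal nodes land one at a time
        pod: journal
      name:
        strategy: serial      # name nodes only after all journal nodes are up
        pod: name
      data:
        strategy: parallel    # data nodes can land in any order, any number at a time
        pod: data
```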
I can even have more plans running in parallel. On to other features: the SDK gives you even more than we saw here. It also gives you placement constraints, similar to what you may know from Marathon, where you can spread things out; we could say "unique host", for example. It also integrates with DC/OS load balancing, so it's really easy to plug into the ecosystem DC/OS provides. And, in my opinion, the nicest part is this failover recovery: it's really easy to plug in certain parts if you have more special requirements than the defaults, which is often the case when you deal with failing tasks or failover. There it's really easy to specify your own set of implementations. And that brings us back to the demo; I want to see whether the deployment has finished by now. One other aspect of what we're seeing here is that when you're using the SDK, by default, when you're initially provisioning, the plan execution just follows the plan: when a step succeeds, it moves to the next step, and it just keeps going. But there are times, like when you do a configuration update or maybe an upgrade, when you might want to pause; you may want a human involved during a certain process. Right, so you're upgrading something, you may want to upgrade one node and then have a human, or some kind of automated test, verify that the transformation happened correctly. Assuming it did, you might want to just continue. So all the interactions of "initiate this plan", "okay, pause", "let me check it out", "okay, that looks good", or "hey, roll that back", that's all baked into the SDK as well.
That's included; it's not something we talked about on the slides, but it's one of the other benefits you get. Let's switch back, and you can see that we have a scheduler running. You can see here that we have three worlds and three hellos running; these are all tasks that were asked to run. If we switch back to here, you can see the YAML. This is the simplest example of just using YAML to create a service. If you look at the examples over here, we have several frameworks; the one we're looking at is hello-world, but other examples include HDFS and Kafka. You may not be able to see it, but Kafka is the other example; those are more Java-intensive and provide a lot more detail. If you're looking for a quick get-it-running example, though, this hello-world is the one. The thing that might look a little off is that we have these mustache variables, which get replaced by the Universe packaging. If we look at the installation of this service, I just did a quick `dcos package install hello-world`. I could also have passed in some variables; I could have said we want two of those instead of three, as in the example that was shown to us. And the thing that's actually executing is simple: we echo some value into the persistent volume and then sleep. A fairly simple idea. The world pod is similar in nature, and we have a configuration option for the sleep duration. So that's the service that's actually running, with three of each. Why don't we go and make a change to it? Let's edit this. Let's see here, it's been too long since I've done this configuration, and, well, we could edit here.
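The mustache variables just mentioned look roughly like this in the hello-world spec (a sketch based on the dcos-commons example; the exact variable names are assumptions):

```yaml
# {{HELLO_COUNT}} and {{SLEEP_DURATION}} are substituted from the
# package options when the Universe package is installed.
name: hello-world
pods:
  hello:
    count: {{HELLO_COUNT}}
    tasks:
      server:
        goal: RUNNING
        cmd: "echo hello >> hello-container-path/output && sleep {{SLEEP_DURATION}}"
        cpus: 0.5
        memory: 256
        volume:
          path: hello-container-path
          type: ROOT
          size: 50
```

At install time, an options file passed via `dcos package install hello-world --options=options.json` would override the default substitutions, for example asking for a different count (the option section and key names here are assumptions based on the hello-world package).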
That's not exactly what we want to do. I lost my train of thought here. Let's go through the process of what this would look like if we really wanted to change something. Let's go to the Universe, where this package was installed. I'm really taxing my computer at this point, but let's install the example again, not from the command line this time, but from the UI. Going to the advanced options, we can see the sleep-duration variable that will be replaced. This is the hello and the world, and it correlates to what we see here: we have a hello and we have a world, and you can see that we have a count for world. If we look at the installation that's currently running, you'll see that there are three of them; we could just as easily have said we really only want one of those right now. So I've created an environment where I've defined what needs to run and how it will land. Because we actually ask for a persistent volume, the task lands on that volume, and if the process were to fail, it will be restarted on that same node, because its persistent volume is there. And we can make configuration changes, as you can see here, all handled within the DC/OS environment. Now, the nice thing about this SDK is that we're not just giving it out; we heavily use it internally. We use it to write basically all the SMACK-stack services we saw before. That was one of the main motivations: we needed it so we would not duplicate code between all those frameworks, but instead have a common infrastructure shared between them. And it's also used by a number of partners.
For example, Uber is using the Cassandra framework quite heavily, another partner is running Kafka on DC/OS, and a third is using both Cassandra and Kafka. Those are services in production at relatively large scale, which is also confirmation for us that we're moving in a good direction with this SDK. Now, the current caveats and challenges. It's still under very active development. While developing the Cassandra and Kafka services, any time we implement something in one of those frameworks, we have to decide: is this Cassandra-specific, or is it something that should move into the SDK because it's relevant to other services as well? So there's still quite heavy development going on in the SDK, and in my opinion the most needed part is the developer documentation, making it easy for other people to use. If you come to the repo, you might see that the documentation still needs to catch up with the state of development. Maybe the second challenge is the restriction that it's Java-based right now. If you're, for example, ArangoDB, which wrote its entire scheduler in C++, this might be a downside for you. On the other hand, it allows us to move forward quickly and reach a state where the entire SMACK stack is developed with it, and thereby extract the best practices. Once that is done, we'll consider other language options, most likely Go as a next step, but we first want to understand which components we need in this SDK to enable all services on top. That being said, many planned features we want in this SDK are not in there yet, which also goes with the very active development. So if you start using it or playing with it as of today, you
might miss some features. But you can be sure there's a large roadmap, and they'll most likely be added; otherwise, just open a GitHub issue for the features you're missing. So, what's next? If you're considering writing your own service, just try it out. As mentioned, it's still in some flux, but it's really easy to get started. Maybe just deploy the hello-world service from the examples folder and get a feeling for what it means to write services, and what it takes to deploy services on your own DC/OS cluster. What we also try to achieve with this talk is to get feedback on whether this is something you can use, or whether you're missing features. We use it for the services we're developing, but we actually want it to be for you, for developing your services, and to get feedback from you: what are you missing for your service? What could we do to make it easier for you to write a stateful service? Bonus-wise, if you really like it, go to the repo; open issues if you don't like something, maybe leave it a star, and give us hints about what you need or what you like about it. And that's basically our presentation; I'd say we still have some minutes for questions. [Audience question] Yes, this is the part we mentioned on the current-challenges slide: it's rapidly moving, so that will be updated soon; it was recently added, so just wait a little while and it should be there. [Audience question] If you look at the getting-started quick start, and what you noticed in the demo I gave, there are some expectations and dependencies: you need VirtualBox and Vagrant. That is not necessary for dcos-commons itself, and by the way, that project may get a name change, probably to something more associated with the SDK. But it is useful in getting a quick start: everything you've seen me running and demoing is actually running locally on my own box.
So if that's important to you, and I think it is for many, it's kind of nice to not need a whole cluster out there, or to figure out how to deploy a DC/OS cluster, or whatever the challenges might be; that's mitigated by leveraging this quick-start setup. It will obviously pull down a bunch of images, and it might take a bit; you're going to need some good bandwidth in order to succeed. But other than that, it's useful. With that, on to the questions. Thank you very much, and keep on writing stateful services!