So who am I? I'm Steve Wong. I'm an open-source software engineer. I work for a team that we call the {code} team, and it's funded by Dell Technologies. We're software engineers who are assigned to work on community open-source projects. In the past two years I personally have been working a lot on DC/OS, Kubernetes, and Mesos. I added the storage interface for external volume mounts as a patch to the Apache Mesos project; that would be the DVDI isolator. And I've been active on the Kubernetes storage SIG for about two years. My group, {code}, also maintains an Apache-licensed open-source project called REX-Ray. It's designed to support Docker volume driver mounts on multiple container runtimes as well as multiple container orchestrators. We're also active in the Container Storage Interface initiative that is underway.

What I'm going to talk about today is, first, a brief intro to the history of containers and the twelve factors. Then, moving on, I'm going to cover container orchestrators and PaaS platforms — platform as a service. I'm going to cover orchestrator support for stateful services, and then finally I'm going to wrap up. Basically, the first three parts are going to cover the state of running stateful inside containers, and I'll wrap up with a call to action on what you can do to get involved with stateful now.

We'll start with a history lesson on containers. Some of these dates vary; I went and tried to research this, and some of them came out to be plus or minus a year. I think the difference is maybe between when somebody wrote the first commit in a repository and when it became available to the general public. But, you know, don't throw stones if you know more than I do and I'm off by a year here; they should be pretty close. The point here is that containers aren't really new: you can find traces of them all the way back to 1979 and the old AT&T UNIX V7. Containers moved on into the first decade of the 2000s. Google released process containers, and these later became packaged into the Linux
kernel as control groups, also known as cgroups.

After these initial containers, a movement evolved to support what was called platform as a service. This was a movement designed to simplify the effort to deploy, build, run, and manage applications by giving you a foundation of ready-to-run components, so that you didn't worry about networks, operating systems, middleware, servers, storage, and backing services such as databases. These were fundamentally based on containers, whether they drilled down and explained that to you or not. The advantage of these platform-as-a-service offerings was that you could reduce complexity, and thus effort, because much of that infrastructure comes pre-configured and abstracted behind something with automated management. The disadvantage of these PaaS platforms is that you have reduced flexibility to select the exact tools that you want. In many cases these platform-as-a-service offerings are opinionated, and they picked the tool solution.
So once it's picked, you live with it. Some of these PaaS platforms also faced charges of having single-vendor lock-in.

Finally, about a year after this cluster of PaaS platforms came out, this set of patterns and anti-patterns called the twelve factors was published by Adam Wiggins. He was a co-founder of Heroku, which one could say, I think, is the pioneering PaaS platform. Although when I did a Google search, it turned out that there may have been as many as ten of these; it's just that some of them perished and are no longer with us. So I don't really know what it looked like at the time, because I wasn't dealing with PaaS, but this is what my research told me. The twelve factors were designed to abstract out the platform your app ran on; they used containers, and they supported horizontal scale-out of apps using container technology.

What were the twelve factors — or what are the twelve factors, since the website hosting this is still alive and well? They were a collection of patterns intended really to guide deployment of apps on the Heroku platform, but they applied, I would contend, equally well on the other PaaS platforms such as Cloud Foundry.

Where did this come about in terms of modern containers — Docker? Well, it turns out that the twelve factors were published in 2011, and Docker didn't come out until 2013; it in fact was a spin-off of the dotCloud you see on that diagram. The container orchestrators — it depends which one. The Mesos container orchestrator actually predates the twelve factors, although it wasn't running at the time on Docker containers; it had its own Mesos containerizer based on Linux containers, and they added support for Docker later. Swarm and Kubernetes clearly came after the twelve factors.

Now, what is the difference? I mean, we've got Mesos, Kubernetes, and Swarm as container orchestrators, and these PaaS platforms. Really, I would use the analogy of comparing them to pizza as a service. You've got, there in the two columns, delivery pizza and frozen pizza. Now, if you
get the delivery pizza, that, I would contend, is the equivalent of PaaS, where more of the choices are made for you. The vendor takes care of the cheese, the toppings, the pizza sauce, the pizza dough, the oven, the electric and gas, and it shows up at your door, and you're done. You don't get to pick the oven; that's taken care of. When you go to the frozen pizza, it's using your oven and gas, and you maybe have a little more choice: if you like it extra crispy, you're in control of your own oven, and if you want to throw a few extra veggies on top of that frozen pizza, have at it.

Now, this isn't the whole continuum. In this talk I'm really talking about the twelve factors relating to PaaS, and container orchestrators relating to containers. But the fact is, parallel to this, you could draw even more columns covering infrastructure as a service, virtualization, even running on bare metal. The further you move horizontally in this continuum, the less opinionated a provider of service you'd deal with, all the way down to, potentially, the pizza equivalent of growing your own wheat and threshing it, and growing your own tomatoes. And really, I would contend,
you have all of these choices, but the modern trend seems to be that in many organizations you're trying to eliminate this complexity, which eliminates your operational expense. Now, that isn't to say you have to choose one or the other. There are many organizations where one division might have things going on that make sense to run on infrastructure as a service, another one on a container orchestrator, and a third on PaaS. But that's how I would compare these two: you're basically in an arena where the PaaS platforms make more choices for you, but because of that, they remove options from you.

What do the twelve factors say? Well, the one I'm going to talk about today is factor number six. It says: execute the app as one or more stateless processes. Why would they say that? Well, it's pretty easy. If you have a stateless process, whether it's in a container or on a PaaS, it's easy to replace and upgrade — you just shoot it in the head and bring up another one. It's easy to automate scale-up and scale-down, because there's nothing you care about inside these things if you've maintained statelessness.

Now, on the other hand, I love this quote that somebody tweeted out there. It says stateless is a hoax. I mean, if you take the aerial view, you can't go out there with a story that literally everything is stateless. If you're going to build a useful app, it can't be something that has Alzheimer's every time the power goes out or you shut it down. So that state's got to live somewhere, and this person's contention is that there is actually no such thing as a stateless architecture.
It's just declaring that the state part — the difficult part — is someone else's problem. It doesn't make the problem go away. Now, in this picture, if you can see it, in the background they used an ostrich with its head in the sand. I'd suggest that maybe a more appropriate animal analogy would be the elephant: there's still state there on these PaaS platforms. Heroku, for example, or Cloud Foundry — they don't really eliminate it. They just declare, and I've got it here, that any data that needs to persist must be stored in a stateful backing service. Well, that's where your state is. You've just declared it out of scope, the other guy's problem. But the elephant in the room is that it's still there, and you're pretending you don't see it.

What exactly is one of these stateful backing services? Well, in the era when the twelve factors were originally written, 2011, it was typically a database. The twelve factors go on to advise that this database, this stateful backing service, should be consumed behind an API, such as an HTTP network service. Now, what exactly would that be? Well, if you're running in AWS, in the Amazon cloud, it would be something like Amazon's DynamoDB. If you're on-prem, it would be something like an Oracle database — something maintained by either your service provider or, in the Oracle example, a team of Oracle DBAs, you know, high-priced experts who maintain this stateful backing service.

That changes a little bit in modern times. I would contend that since 2011 it isn't just databases. It's moved on to various forms of NoSQL — things like Cassandra, Mongo, Kafka, Redis keeping state other than simply in memory, Elastic, Hadoop. And what if you want to run these kinds of stateful services?
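Before answering that — just to make the "consume it behind an API" advice concrete, here is a minimal sketch of what the twelve-factor pattern typically looks like in application code. The DATABASE_URL variable and its value are illustrative assumptions of mine, not something from the slides:

```python
import os
from urllib.parse import urlparse

# Twelve-factor style: the app learns where its stateful backing
# service lives from the environment, not from hard-coded config.
# Swapping one backing store for another (cloud or on-prem) means
# changing this URL, not the application code.
url = urlparse(os.environ.get(
    "DATABASE_URL",
    "postgres://app:secret@db.internal:5432/orders"))

print(url.scheme)            # the backing service technology
print(url.hostname)          # where it lives; could be anywhere
print(url.path.lstrip("/"))  # the database name
```

The point of the pattern is exactly the "out of scope" declaration above: the app only holds a reference to the state, while the state itself lives behind that URL.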
But you're the guy responsible for your whole IT organization — in other words, suddenly it isn't the other guy's problem. You're the guy maintaining that stateful backing service. There are perfectly valid reasons why you'd want to maintain your own, too. Maybe you want to pick the version of your own tool or database. Maybe you want to customize it. Maybe you want to stay portable across clouds — in other words, I want to put together an app that I could run on Amazon AWS, but also run on the Google cloud, and also run in certain geographic regions in an on-prem data center. In order to maintain that portability, you can't go use something like DynamoDB; you might want to pick an open-source solution and take management responsibility for it.

Another good reason for doing this yourself is to avoid database monoliths. The fact is, in a lot of those platform-as-a-service solutions, the platform would stand up a database, and every app on that platform would be connected to it behind the API. But ultimately that results in a monolith, where all of this information is in one big basket. Well, what's the problem with monoliths? Monoliths are bad for a number of reasons. If you let a database get large, how are you safely going to move it? Moving it to another location, or a different version, or a different technology, can be as complex as rebuilding a 40-story high-rise in a dense urban area, where you're going to have to orchestrate an implosion with other buildings, filled with people you care about, right next to it. The bottom line, in my observation — and you know, I work for a technology vendor that's pretty big in storage — is that once a database gets sufficiently large, nobody ever moves it. Nobody wants to bet their career on the risk involved with moving it. You know, why is Oracle still here?
Well, because there are a lot of really big ones, and nobody wants to bet their career on taking the risk of moving them. Once these get large, the problem can be not just the risk of collateral damage, the time it will take, and the outage where your services are off the air; these things can sometimes get so big that the problem is like moving all the water out of the Atlantic Ocean into the Pacific — it's just not doable with the technology you're likely to have on hand, or with the budget you're likely to have.

So, just like people advise against monoliths when you're building apps — the whole principle of separation of concerns — I would contend that when it comes to data storage, it's a good idea to avoid these monoliths. If two apps are not related, they shouldn't be saving their things in the same database. That gives you the flexibility to do things like pick different technologies when they make sense, and it keeps databases from getting bigger, so you can do things like change versions independently.
I mean, consider the logistics if you had a hundred different apps all on one database, and then you want to update it. You're going to have to have a hundred different teams, responsible for those apps, coordinate this effort to ascertain whether they're comfortable and compatible with this version change. Whereas if you kept these as separate instances of that data store, you could do the common things like canary testing, where you have one team — maybe one with a relatively low cost should it fail — go out there and do the testing first, and then, only if you have a satisfactory experience, let that ripple through to the rest of your organization. You could also, if you avoid monoliths, have a team with a key-value store that discovers a newer open-source key-value store that's likely to work better for them, and give them the freedom to upgrade without impacting others. And let's face it, in this field things change all the time; the only constant is that things change. So avoiding these monoliths in your data store makes a lot of sense.

So if we agree that we want to give people the flexibilities offered in containers and container orchestrators — things like dev teams engaging in their own self-service, picking their own tools; taking advantage of container attributes like consistency no matter where you run, so it looks the same whether you're on the Amazon cloud, the Google cloud, or on-prem; having these things packaged with dependency management; taking advantage of an orchestration platform that can do health monitoring, automatic rollouts and rollbacks, and declarative configuration — putting these things in containers, if you can do it, makes a lot of sense. There's a lot of value here.

So my wrap-up on the twelve factors is that I'm not challenging the whole thing.
I'm actually in full agreement with the twelve-factor principle that publishing any data store behind an interface is a good idea — having a controlled abstraction layer. But I want to enable one that's hosted on a container orchestration platform, one that gets to cheat on that rule that all processes must be stateless. Maybe in 2011 that wasn't doable, but on a modern container orchestrator platform, stateful running in containers is supported, and I'll go on in further slides to back that up if you have doubts. By accepting that maybe the twelve factors should be a little bit flexible, we can preserve their usefulness. I mean, there are a lot of documents in this technology field that should have a sell-by date of six months or a year. The fact is, the twelve factors have survived from 2011 to 2017 pretty well, and a six-year life for any document in this field is pretty decent. I'm not saying tear down the whole thing. I'm just saying we need to treat these twelve factors not like some religious canon, or like a law book — you'll just make yourself miserable doing that. They should be respected, and considered when it makes sense.

Now, if you're skeptical — unfortunately, it looks like that slide is pretty hard to read, but it's there to point out that stateful in containers isn't new, and isn't just crazy talk. This is a snapshot of Docker Hub, and it turns out that on Docker Hub you can actually sort images by popularity — in other words, by the number of downloads. Well, this is the top-12 list of images off of Docker Hub on the day I got this, and it turns out that seven of the top 12, on the day I looked, were stateful applications.
So pretty clearly people are doing this. This isn't crazy talk.

If we move on to container orchestrators: this is a chart showing the features, in an assortment of container orchestrators, related to running stateful apps inside containers. On the far left you can see that DC/OS supports external persistent volume mounts; in fact Kubernetes, Mesos, and Swarm do as well. DC/OS and Mesos support frameworks — in other words, this is a two-level scheduler, if you're not familiar with it — and frameworks have been published specifically to support stateful applications being managed on the platform and scheduled out onto cluster nodes. DC/OS has packages to support stateful apps running in containers; Kubernetes has something pretty similar called Helm charts. Kubernetes also has operators and stateful sets, and I'm going to go into these in detail.

Now, on some forms of these stateful backing stores, it is possible to use local storage for state, but there's a downside to this. What I would compare this to: you could be really crazy and run something like Postgres or MySQL on direct-attached storage on a cluster node. But if you do that, and the container is killed, or that cluster node catches fire and burns, your data is gone, like, forever. There are other things, like Cassandra, that do involve some forms of replication, so maybe you feel some pain when a node goes down, but it might be recoverable. But I would compare using local storage to smoking a cigarette: you take that first drag, and it tastes good at first.
Maybe you see other nodes doing it, but in the long run it's going to lead to shorter data life expectancy and reduce your capacity.

The alternative to this is something called external volume mounts, where you use some sort of network communication to attach a storage volume that is provided off your cluster node. These external volume mounts — how do they work? Well, if you're a techie geek like me, you're probably familiar with the sci-fi TV series Star Trek, and I would say that a stateful app using an external volume mount is just like that familiar episode of Star Trek. Your database binary can be like the guy in the red shirt — the expendable guy who, if you've watched many episodes, isn't ever going to be seen again by the end of that one-hour episode. The red-shirt guy is dead. The stateful app binary can be the guy in the red shirt so long as you use an external volume mount, which is the guy in the yellow shirt. When trouble arises, the guy in the yellow shirt picks up the communicator and says "Beam me up, Scotty," and he flees to safety. The red-shirt guys are dead, but the state — the stuff you cared about in your database — got beamed up. You can beam it back down to another planet, pick some other red-shirt guys, and you're back in business.

If you're running things like Postgres and MySQL, the traditional relational databases, that is really how you're going to do it here. There might be an alternative with some of them of sharding, but for the baseline single-node database, that's how you do it. Cassandra could take the red-shirt guy getting knocked off, but in fact, in some instances there might be benefits to using an external volume mount even with cluster-aware storage like Cassandra. This all depends, and I guess this is getting into a second talk that takes more than an hour, but I'd invite you to come and see me if you're curious
about that, either after the talk or at the booth we're sponsoring downstairs, where I'm hanging out. The bottom line is, external persistent volume mounts are like Star Trek: they untether your data from a particular cluster node, and they let the container orchestrator do things like upgrade the database binary with a very short-term hit on your downtime. They also allow you to migrate things across nodes, so you could do things like a planned maintenance activity on a compute node.

Moving on to frameworks: like I said, all of the container orchestrators now support those external volume mounts; they've just become universal. The frameworks are found in Apache Mesos and DC/OS. Now, these frameworks can support both stateful and stateless applications, but stateful app management is a primary use case, and Mesos frameworks exist for pretty much all of those stateful services I showed on that prior slide. At the end of my talk I'll have a diagram that you can take with you that actually links to where you can find these. The DC/OS container orchestrator also has a concept of packages, and on the stateful side, packages exist for Cassandra, Elastic, HDFS, Kafka, MongoDB, MySQL, Postgres, Redis, and even more. This provides an app-store-like experience, complete with a UI plus a CLI, for deploying these stateful apps. So they're basically moving this into the realm of the easy button for deploying stateful.
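To make the external volume mount idea concrete: on Kubernetes, for example, this is typically expressed as a persistent volume claim that a pod refers to. The following is a minimal sketch under my own assumptions — the names, image, and size are illustrative, not from the slides:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pg-data            # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi        # capacity requested from external storage
---
kind: Pod
apiVersion: v1
metadata:
  name: postgres
spec:
  containers:
  - name: postgres         # the "red shirt": a replaceable binary
    image: postgres:9.6
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data             # the "yellow shirt": outlives the pod
    persistentVolumeClaim:
      claimName: pg-data
```

If the pod dies, the claim — and the data behind it — survives, and a replacement pod can mount it again; that's the "beam it back down" part.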
So they're basically moving this into the realm of the easy button for deploying stateful Kubernetes like DCOS has helm charts and it's a similar thing An app store experience it supports update and rollback of these stateful apps Helm charts are available for Cassandra elastic HDFS Kafka Mongo MySQL Postgres Redis and more and it's growing Kubernetes also has a Recent edition called stateful sets now it turns out with many of these cluster aware scale-out things like I don't know zookeeper for example They're based on a horizontally scalable set of nodes That have unique network identifiers in other words They know that this guy is node one this other guy is node two and this other guy with this host name is node three The stateful sets take care of separating out the configuration of these things And it and keeping these running Using stateful persistent storage the kubernetes stateful set also manages and this is very important in some of these cluster aware replicating stateful Solutions it supports ordered start-up and shutdown Including a graceful shutdown where you might be able to drain transactions in progress Rather than have this be the equivalent of randomly pulling the plug on a bunch of servers some of these can recover in the unordered Scenario, but it just takes a lot longer and if you can have this scenario where you can shut them down gracefully If you're engaging in a plan maintenance activity and upgrade something like that. 
It's just a much more pleasant, reduced-downtime experience.

Finally, Kubernetes supports something called operators. An operator — we're getting into Kubernetes design here, but I'm going to go for it — is based on the Kubernetes controller concept. In Kubernetes, a controller is something like the thermostat that controls your furnace or air conditioner: you start by setting, or declaring, a desired state, say 72 degrees Fahrenheit, and the controller is this continuous process that looks at what your desired state has been declared to be, goes out and takes a measurement of the current state, and does what is necessary to maintain that condition. So these controllers can take a declaration that I want X number of nodes of my horizontally scaled-out stateful solution and simply make it happen: health monitoring while it runs, and engaging in self-healing should things go wrong. Operators are free to be layered upon other Kubernetes concepts — stateful sets, external persistent volume mounts, etc. — and they often are. But the bottom line is that Kubernetes has an inventory, out there now and growing, of operators specifically built to support stateful apps.

If you want a demo of this, I've got two suggestions. Tomorrow at 10:55 there's a session which is a tutorial on running stateful applications on Kubernetes. During this tutorial, which is going to involve Sodaly of Google and Chris Duchain of the {code} group that I work in, you're going to see a demo of deploying stateful on Kubernetes — an n-tier app using stateful — and they're going to demonstrate first deploying it to the Google public cloud, and then deploying the same app, unchanged, to an on-prem hardware scenario. There's actually user participation in this, so bring your laptop. Finally, if you're sticking around for MesosCon, there is a workshop on building your first stateful service on DC/OS. So if this is a subject that
interests you, and you're potentially interested in either of these container orchestrators, I'd recommend those sessions.

Getting back to the twelve factors: I'm not the first one to recognize that, as good as the twelve factors were, they perhaps are in need of some embellishment. It turns out that there's an O'Reilly book that is excellent — in fact, some of the ideas for my presentation came from this book, as well as from a presentation I sat in on at a meetup by Randal Schwartz, who is the host of the FLOSS Weekly show. At the end of my deck I'll show a link that will allow you — you'll have to give up your email — to download a free PDF copy of this O'Reilly book. Basically, I'm making the point that the specific twelve-factor line item that relates to stateful maybe needs an update, but this book goes into more detail on other parts of the twelve factors, and if you're deploying things on either a PaaS or a container orchestrator system, I recommend this book.

Okay. So, is stateful perfect on these container orchestrators? No. I'm going to give you a warning up front that this is an active area, but there are parts of this story of stateful in containers on container orchestrators that are ripe for improvement. The first one is backup. To do backup right, you need solutions like quiescing of the applications — and I mean the applications themselves: things like Postgres typically have a CLI you can call to drain transactions in progress and get them to flush in-memory caches down to the storage level, so that the storage can be utilized in a backup. But what you typically want to do there is trigger a snapshot. In the old days, in virtualization, storage had evolved to the point where popular storage solutions had built-in snapshot support: you just call an API saying "snapshot this volume," like that. The snapshot is retained, maybe in a copy-on-write scenario or something, but then the database can go back immediately
into production, and the outage related to a backup was typically minuscule, and perhaps undetectable — that snapshot made it near-instantaneous. I am working with the storage SIGs in both Mesos and Kubernetes, and I can tell you that these groups are working on snapshot support now, but it isn't there today. So that's something that's still in the air.

The second item is that storage plug-in drivers are not standardized across these platforms. Now, maybe you're a big organization that is running multiple container orchestrators, and they're non-uniform. There are people out there who run one department on Mesos and a different one on Kubernetes, or one on a PaaS platform like Cloud Foundry and another on Apache Mesos. It'd be a great thing if this interface to the backing external volumes, where you keep the persistent information, were standardized — but it isn't. There is an effort underway called the Container Storage Interface, which is a group of both orchestrator suppliers — Kubernetes, Mesos, Docker, Cloud Foundry — and storage providers — Dell, NetApp, a long list — who are working on this, but it isn't out there yet. So we're having meetings; we're designing it. My {code} group actually had the first release today of a provider for this Container Storage Interface, but it's written against what is really a draft standard at this point; it hasn't been universally agreed to by this group. But we felt we had to go first, so we had a press release today announcing that we came out with — call it a validation suite for this principle of the Container Storage Interface. And I would compare this Container Storage Interface to efforts like the Container Network Interface that have evolved in the container space to make things portable. That work is underway.

There are also some rough edges related to replication, volume resizing, and so on.
These are things that you've typically had available for years in the virtualization space; we're still working on them for containers.

Now, don't let that scare you. I went there and told you the honest truth that some of these things aren't really comparable yet to what you might find with virtualization, or maybe with some PaaS platforms that run on top of virtualization. But I would contend that even though there are still some rough edges, if you're somebody who can make serious use of this, the time to get involved is not to wait till it's done, but to get involved now. The whole model of open source isn't like commercial software. I used to work in commercial software, and the way that worked is, a product would typically have a guy with the product manager title who would try to deduce a feature list — maybe he talks to the biggest customers, kind of asks them what they want — then goes off and builds things, and comes down from the mountain with the product already carved out. If that isn't to your liking, or if you want features later, it's typically a long cycle to get what you really want in. If you get involved in an open-source project like this today, while it's still forming, you're going to be in a position to actually contribute and basically get what you want, if you get a seat at that table and participate as an end user to represent your use cases. I'm kind of sitting on the other side, building the implementation of this, but I can tell you, from the meetings we've got and things like the Kubernetes storage SIG, that we'd love to have more user involvement representing your use cases, so that we can build the things that people are actually likely to use. The other benefit you get from participating at that level, before it's fully baked, is this: if you're really a big organization at scale,
let's face it, you're going to need to have your staff — or at least some members of your staff — trained in this, right? To be able to troubleshoot when things go wrong; nothing is ever flawless. Well, I'll tell you, one of the best ways to get training would be to get there with a seat at the table as this thing is being architected and designed, because anybody who does that is truly going to understand it from top to bottom. And if you do run into bugs, you're likely to build up contacts with the actual developers. Some of these meetings are face-to-face; some of them are on Google Hangouts or Zoom video calls. But you would get to know the actual developers, and I suspect that the net result is that if your organization uses this in mission-critical things, you'd have a contact list where you could call these people — you'd know who they were, and they'd return your call. So I'd invite you to get involved, if this is something your organization could potentially benefit from, and I'd invite you to get involved, like, now, before it's done.

I want to get back to something. I'm going to suggest to you that the gestation of containers took a while. I opened this talk with the history of containers, showing that containers go all the way back to 1979, and they took a while to actually get adopted. I think, myself, that containers really didn't hit the mainstream until Docker came out. And I would compare this to the adoption of the automobile. It turns out that the first gasoline-engine thing like an automobile —
an engine attached to a pushcart, came out in 1870. It wasn't until 1885 that Karl Benz made the first gasoline automobile for sale; the guy in 1870 just made that pushcart for himself. Benz's car was hand-produced, but you could order one. Finally, in 1903, Henry Ford founded the company that went on to mass-produce the Model T. Now let's see the effects of that. In 1900, before the Model T, this is New York City, Fifth Avenue. Believe it or not, those are all horses, but somewhere in there, in fact right here, there's one automobile. That's 1900 New York City. Just 13 years later, there's one horse in that picture, and it's all automobiles. A span of 13 years. I would contend that it took 34 years from that guy first building the hand pushcart to when the automobile got popularized, and that's very comparable to the development of chroot in 1979 taking all the way until about now for containers to go mainstream. But when things move, they move pretty darn quickly, and this field of containers is moving quickly, and stateful in containers is moving quickly as well. So I think we're only a few years out before containers are universal for both stateless and stateful workloads, and once again, to my point, this is why I think you should get involved now, even if it doesn't quite have things like volume snapshots yet. That's the end of my talk. I'm going to just show you a few things here, because I'm going to leave you with a link to this deck. This is the uber chart. I raced through this, but it shows you the support for all the stateful apps on the various container orchestrators, and if you get this deck, this is actually a hyperlink that takes you to the repository and the documentation for what's going on there, so you can learn how to deploy these on the various container orchestrators. If you want to take a picture of this one: I've already published this deck on SlideShare, so this will get you to the
deck. That top link is the free O'Reilly book, The 12 Factors Revisited. I'll just leave that up for another second so you can get pictures of this. I did release this deck to the Linux Foundation, who should publish it; in my experience they'll get it up in a couple of weeks, but they generally publish decks as PDFs, and I can't promise those will have working hyperlinks, so going to SlideShare might be better. Finally, should you want to contact me, you can reach me on Twitter at that handle. I unfortunately see now that these slides maybe aren't super readable, but this is the group I work for. We've got a booth down in the expo hall, so come and see us. That said, I guess I have room for some questions here, if anybody's got any.

[Audience question]

He's asking if people are actually using Kubernetes StatefulSets to run stateful apps in production. I think there are people running them in production, but there are varying levels of tolerance. Some people have more tolerance for failure than others. It's so new that in some fields, like financial transactions, my gut feeling is those folks tend to be the last adopters of technology, while some others are much more aggressive.

[Audience question]

Yeah, it's not really a new technology now. For some of these, I think it's almost required that they run in containers. If you look at a big data / fast data solution like Kafka, I think the reference platform is in fact running it on Apache Mesos or DC/OS. You'd be really out of the mainstream if you were to stand those things up anywhere other than a platform like that.
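To make the StatefulSet question above concrete, here's a minimal sketch of what running a stateful app on a Kubernetes StatefulSet looks like. The names (`pg`, `data`) and sizes are hypothetical, and a real deployment would add configuration, secrets, and a headless Service; the point is just that `volumeClaimTemplates` gives each replica its own persistent volume and a stable identity (`pg-0`, `pg-1`, ...).

```yaml
# Hypothetical minimal StatefulSet: three PostgreSQL replicas, each with
# its own PersistentVolumeClaim created from the volumeClaimTemplates below.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pg
spec:
  serviceName: pg          # headless Service that gives pods stable DNS names
  replicas: 3
  selector:
    matchLabels:
      app: pg
  template:
    metadata:
      labels:
        app: pg
    spec:
      containers:
      - name: postgres
        image: postgres:9.6
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:    # one PVC per replica, retained across rescheduling
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Unlike a Deployment, each pod here keeps the same name and reattaches to the same volume if it's rescheduled, which is what makes this pattern usable for databases.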
So I think a lot of this depends specifically on what the stateful app is, and a lot of it is your tolerance for pain; the stuff isn't perfect yet. Some of these stateful solutions are actually cluster-aware, so some could even be used without external volume mounts, using DAS (direct-attached storage). The downside of that is that if you get sufficiently big, say you scale out horizontally to a thousand nodes, my contention is that at that size you've almost always got some nodes in failure. The probabilities just creep up the bigger the cluster gets, along with the logistics of managing it. So it isn't perfect, let's put it that way, but there are early adopters who are already going there, and a lot of it depends on what your stateful app is, and, like I say, on your tolerance for pain: what it will cost you if there's a glitch, if you have to recover from a backup, if things go wrong. There's no universal yes or no. Even in virtualization, which has been here for I don't know how many decades, if somebody told you it would never fail, ever, they're a liar. This stuff maybe isn't as good as some of those other solutions, but it has other benefits. If you really want portability across clouds, well, I don't know if you've ever tried to take a virtual machine running on-prem on one hypervisor, move it into the Amazon cloud, and then move it back, but my own experience led me to conclude that they're really not portable at all. So this maybe isn't as good as some solutions at some things, but it's way better at others, and you have to decide on your own which of these attributes is important to you. And I can see all of the R&D effort and manpower being invested in these open-source projects.
I mean, all of those things on that chart have dozens, if not hundreds, of people behind them, and there's a lot of effort going into this stuff to build it up, and the world is going there.

[Audience question]

Well, I think that depends on the app, too. For some of these stateful apps, people build something purpose-made. I know there's a guy, Josh Berkus, who's like a guru of Postgres, who's speaking here, and if you're curious about running Postgres, what would be a sharded Postgres database server on a StatefulSet, he's the guy writing it. So I'd suggest that maybe you have to ask that question in the context of the specific stateful app you're trying to run, because the answer is different for all of them, and some of them are in a more advanced state than others. Oh, I just noticed Saad is here, and he's somewhat of an authority on Kubernetes, too. Do you have anything? I hate to put you on the spot, but maybe you can contribute to my answer there.

[Saad:] We're very new; Kubernetes has been around for two years. That said, there are fairly large deployments out there. It's a lot like what Steve said: it depends on what your tolerance for failure is. If you need to be very, very, very highly available, trying brand-new cutting-edge technology probably isn't the right choice. But at this point, at least on the Kubernetes side of things, we're fairly stable. It's far better than it was when we started about a year and a half ago. So people are trying it out, and we do have customers deploying stateful applications using Kubernetes, so it's definitely possible.

Thank you. Anybody else? Okay, well, thank you for coming.