Okay, well, I guess I'll get started. Thanks for showing up. I know it's late on Friday and everyone's a little burnt out after a long week, but I appreciate it. A couple of questions for us: how many people are actually running OpenStack, like have it in production? Okay, good. And monitoring your system? Yeah. Some tools: Nagios? Okay, pretty popular. Zenoss? Okay. StatsD, Graphite? A little bit, yeah. Logstash? Riemann? All right, you've got a full open source monitoring stack going, good. How many people are running StackTach? Okay, good, awesome. How many people are running Ceilometer? Okay, a few more, good. And how many people are doing billing against OpenStack? Okay, a couple, good.

All right. So the purpose of this talk is to talk a little bit about StackTach, which started off as a monitoring tool that we developed within Rackspace (my name is Sandy Walsh, by the way, I'm a developer with Rackspace), to talk a little bit about the Ceilometer project, and to cover how we are hoping to take the functionality of StackTach and move it over into Ceilometer. Just as a bit of background, I'll tell you about StackTach first, then we'll talk about Ceilometer, and then we'll talk about the steps we've been taking to merge the two.

StackTach was introduced around Diablo, which was a long time ago, and it came from a point of pain we had when I was working on the Nova team: trying to debug OpenStack by poring through all the log files was just very problematic. One of the things we had within Nova was the ability to emit messages, or events, out of the system. If you want StackTach, you can go to this URL, download it, and use it. It started off, like I say, as a tool for monitoring and debugging OpenStack, and the way it works is that as important things happen inside these different systems, they emit notifications, or events, onto the queuing bus, and then other systems can consume those events.

StackTach has a couple of components: a database where it stores all those events; a web application, which is the actual StackTach application; and a worker, a little service that goes out, pulls the events off of the queuing system, and sticks them in the database. It gives you a nice pretty little web interface where you can see all these events coming through in real time, and everything on there is clickable. If I see a tenant ID, I can click on that and see all the actions by that tenant. If I click on a request ID, I can see everything for that request, and the same goes for a particular event name or a host name. So it supports a lot of that "I don't really know what I'm looking for, but I'll find it" exploration, and you can dig into all the gory details of each event. There's also a REST interface, so if you want to query it and pull events out, you don't have to deal with the queuing system at all; you can just hit the REST interface. Operations people aren't big fans of GUI interfaces, so there's also a command-line tool that gives you basically all the same operations. I was going to do a demo of StackTach, but there's a video available online, so you can just watch that.
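To make the worker idea concrete, here is a minimal sketch of that consume-and-store loop using kombu. The broker URL, exchange, queue name, and routing key are illustrative assumptions rather than StackTach's actual configuration, as is the save_event() stub.

```python
# Minimal sketch of a StackTach-style worker: consume OpenStack
# notification events from the queuing system and persist them.
# Broker URL, exchange, queue, and routing key are assumptions,
# not StackTach's actual configuration.
import json
from kombu import Connection, Exchange, Queue

exchange = Exchange("nova", type="topic", durable=True)
queue = Queue("stacktach", exchange, routing_key="notifications.info")

def save_event(event):
    # The real worker writes a row to the events database here.
    print(event["event_type"], event.get("_context_request_id"))

def on_message(body, message):
    event = body if isinstance(body, dict) else json.loads(body)
    save_event(event)
    message.ack()  # acknowledge only after the event is safely stored

with Connection("amqp://guest:guest@localhost//") as conn:
    with conn.Consumer(queue, callbacks=[on_message]):
        while True:
            conn.drain_events()
```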
It'll show you how to install it and how to get it going. But the real value of looking at these events is that you get very rich information about how the system is changing, the state transitions happening inside OpenStack. So this is a create-instance operation. We can see the request come in to the API, and we get an event called compute.instance.update. Then it goes through the scheduler, and a bunch of decisions get made about where we're going to stick this instance. Then we get into the actual compute node, where we send an update as we go to the building state after it's been scheduled. A bunch of updates go through as we provision the networking, set up the block devices, and do all the other operations, and at the end we get a create.end event. So now we have the ability to look at what's happening in the system on a per-request basis, as opposed to just the spew of logs that comes out; all of these events are related.

So that was a great debugging tool. But once we saw what we had, we realized we could also start tracking performance, and my mandate at the time was: monitor everything, measure everything in the system, and see what's going on, because we wanted faster performance out of the system. StackTach, since it had all that information, turned into that tool, and we were able to get some really cool reports out of it. You can look at build time per flavor by region. You can look at AMQP in-flight message latency: because we know when events leave one system and enter another, we can find out how long they're sitting in the queue and where things are hanging up. Failure rate by tenant, migration sizes by image type. Really rich information that would be very, very difficult to get out of a log.

So like I say, it was monitoring on a per-request basis. Every time a request comes in to OpenStack, it's tagged immediately with a request ID, and we can use that to correlate all the actions inside the system, right across all the different nodes. So now we were doing monitoring as well. We had StatsD installed at Rackspace, we had Graphite installed, we had Nagios, all these tools all over the place, but they weren't able to give us this level of detail. Tools like that are great for any system, and I strongly recommend them, but to get into the heart of the application that's running, you really have to look at the events.

And so what we were seeing was this: you have events on one side, which are very rich objects, and you have samples on the other side, and I'll explain the difference. Samples are small, you get a lot of them, and they're pretty disposable; they're not mission critical. An example of a sample would be "CPU is at 70 percent." Measure that every 30 seconds or so, get a nice graph out, and you get a picture of what's happening in your system, but you don't know why things are happening. So samples, like I say, give you the what and the when of what's going on in your system. Events, however, are very rich.
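As an aside on that per-request view: reconstructing a lifecycle from the event stream is conceptually just a group-and-sort. A toy sketch, where the field names follow the usual notification payload layout but should be treated as assumptions:

```python
# Toy illustration of per-request correlation: group a stream of
# notification events by request ID, then order each group by
# timestamp to reconstruct one operation's state transitions,
# e.g. compute.instance.create.start -> ... -> create.end.
# Field names are assumptions, not a fixed schema.
from collections import defaultdict

def lifecycles(events):
    by_request = defaultdict(list)
    for event in events:
        by_request[event["_context_request_id"]].append(event)
    return {
        req_id: sorted(group, key=lambda e: e["timestamp"])
        for req_id, group in by_request.items()
    }
```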
An event gives you a big payload. There's a cost associated with sending one out, because it is big, and you've got to be careful how many you send out because they can clog up your system, but an event gives you everything you need: the who, the what, the when, the where, and the why. Why did the scheduler make this decision? Why is the host taking so long? Why did it send the request to this node for networking? All that detail. If we compare "CPU at 70 percent" to an event like this, you can see the difference. I've got all the context I need around the decisions being made. What's the tenant ID? What's the reservation ID? Tell me how many VCPUs are going into this instance. Very rich data.

So for what we were trying to do for performance measuring and monitoring, we were finding that events gave us better results than samples did. Samples are great if we want to find out how many 500s per hour we were getting on the front-end APIs, but to find out why we were getting those 500s, we needed to dig deeper.

So we worked with that for a while, and that was really cool, and then we realized these events were giving us a lot more information than we initially thought. After we'd used them for all the performance enhancements, we saw that for billing operations we could use these events to make sure the customer wasn't being overcharged or undercharged, and we could do it in a way that gave us double-entry accounting against the operations happening inside the system.

I'll show you some of the events we were seeing here. Whenever an instance is created, there are start and end fence posts around those operations. Rebuilds, resizes, rescues, deletes: all of these are billable events that happen inside the system, and these are the things we wanted to track, as opposed to just looking at the production database at the end of the day. The thing we didn't want was other groups coming in and accessing our production database. We didn't want to get into database replication and all these other things against the production database; there's a lot of sensitive data in there, and we don't want people using it. The events gave us all that information.

Another very important one is this one at the end here: compute.instance.exists. The exists event is generated within Nova whenever the launched_at date changes on an instance. The launched_at date changes when the instance is created, when a resize happens, or when there's a rebuild, and that's the field that triggers how much we're going to bill for. So whenever there's a state change on that field, you get an exists record. And if the instance has just been running all day and nothing's really changed, no one did a snapshot, no one did anything fancy with it, you'll get one at the end of the day. So if you've got a hundred thousand instances, you'll get a hundred thousand of these exists records at the end of the day saying: here's the instance, it's alive, you can use it, you can bill for it, and here's what the bandwidth was throughout the day. Bandwidth is a complicated one that people want to track, so that all comes up in the exists record.
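For a sense of what one of these looks like, here is roughly the shape of a compute.instance.exists notification. The field names are representative of the Nova notification format, but the exact set varies by release, so treat this as a sketch rather than a schema:

```python
# Rough shape of a compute.instance.exists notification. The field
# names are representative of Nova's notification format, but the
# exact set varies by release; a sketch, not a schema.
exists_event = {
    "event_type": "compute.instance.exists",
    "publisher_id": "compute.host-01",
    "timestamp": "2013-11-08 00:02:14.123456",
    "payload": {
        "tenant_id": "5c4b...",
        "instance_id": "a7f3...",
        "instance_type": "2GB Standard",
        "instance_type_id": "6",
        "launched_at": "2013-11-07 15:41:02",
        "audit_period_beginning": "2013-11-07 00:00:00",
        "audit_period_ending": "2013-11-08 00:00:00",
        "bandwidth": {"public": {"bw_in": 1024, "bw_out": 59430}},
        "state": "active",
    },
}
```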
And so it's a juicy little nugget to tap into. All these operations, the rebuilds, the resizes, all these billable events, and then at the end of the audit period, which for us is every day, we were getting all that information out. The problem we had, though, was: how would you know if an exists record got dropped? Do we bill for that? Because if we don't see an exists record, it probably means the instance isn't around anymore, but an event can get dropped.

So the first thing we had to do, in order to use this for billing validation, was make events first-class citizens, so that we couldn't drop them. An event is different from a sample. It's not a disposable thing we can send across UDP. The event is something we've got in the queuing system, and we lock down on it and make sure there's a very careful handoff from this system to that system. So our focus for Havana in StackTach was to make sure we had reliable, audited, and reconcilable event collection, and we did that a couple of different ways.

We had our worker, which was collecting all these events out of the queuing system, and we had these other database tables in there for things like performance tracking, so we could look at start and end times on a per-request basis: the life cycle of every operation, stored in a separate table inside the database. A create.end operation, for instance, we had in a different table, and we could look at that. So what we started to do was create tables for the individual usages, the delete operations, and all those exists records that showed up at the end of the day.

Now we had a system we could actually run checks against. We have a tool called the validator. If the end of the day is at midnight, we can wait a couple of hours in case there are any latent events still working through the system, and then the validator looks at the updates and deletes tables, figures out what the world should look like from those incremental events, and compares that to the end-of-day events. That tells us whether we dropped an event or something went missing.

If we did have a discrepancy, there's another tool called the reconciler which, if you give it access to the production database, can go out and check what production says and actually reconcile it. We think the instance is there, but we're not really sure; production says it's there, so we're going to bill for it. That's an optional piece; you can run it if you want or just turn it off. What happens at the end is that we emit a new event into the system. So now StackTach will emit an event that we can use: an exists.verified record. That says we've done all the checks and everything looks good, or there's an error.

We got some really valuable information from that. We were able to check things like: did we get instance type mismatches? Did the tenant ID change?
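The validator's core comparison is easy to sketch: derive what the world should look like from the incremental events, then diff that against the exists records. The record shapes and field names below are illustrative assumptions, not StackTach's actual schema.

```python
# Sketch of the validator idea: build the expected end-of-day state
# from incremental usage/delete events, then compare it against the
# exists records. Field names are illustrative assumptions.
def validate(usages, deletes, exists_records):
    """Yield (instance_id, problem) for each discrepancy found."""
    deleted = {d["instance_id"] for d in deletes}
    expected = {
        u["instance_id"]: u["instance_type_id"]
        for u in usages
        if u["instance_id"] not in deleted  # still running at day's end
    }
    seen = {e["instance_id"]: e["instance_type_id"]
            for e in exists_records}

    for instance_id, type_id in expected.items():
        if instance_id not in seen:
            yield instance_id, "exists record missing (possibly dropped)"
        elif seen[instance_id] != type_id:
            yield instance_id, "instance type mismatch"
    for instance_id in seen.keys() - expected.keys():
        yield instance_id, "unexpected exists record"
```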
That's a very significant one. That's probably some sort of security breach, or someone entered something wrong, or a customer account got taken over or something. There are some internal Rackspace-specific options, things pertinent to our business, that we could check as well: whether the architecture changed, whether someone changed the flavor size so it doesn't match what we're billing against. And the image size, obviously; image storage is a thing we want to bill for. So we can check all these things at the end of the day and get a very rich sense of whether we're measuring the right thing.

So then StackTach can be used to push all that stuff downstream into our billing systems, and that's another important place for a handoff. Because these are still events, we want to make sure those handoffs are very carefully managed and that the events show up on a silver platter.

What we did was publish our events into two different queues. We publish into one called the notifications queue, and we have another called the monitor queue, and StackTach consumes from the monitor queue. Then there are another couple of open source projects; you can get these on GitHub. Yagi is a tool that will just bulk-consume from a queuing system: it grabs all the events it can and relays them somewhere else, so if you want to push them to another system, you can do that. That's what we use. Yagi consumes from that queue and passes the events on to AtomHopper, which is a pub/sub system. So now we take all those events and turn them into RSS and Atom feeds, so other systems downstream can consume them. And when Yagi hands an event off to AtomHopper and gets back a 200 response, it calls back into StackTach and says: yep, we got it, we passed it on downstream, everything is cool.
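That handoff discipline is worth sketching: the event only leaves the queue once the downstream system confirms receipt. The endpoint URL and the mark_verified() callback here are hypothetical stand-ins, not Yagi's real interface.

```python
# Sketch of the Yagi-style handoff: relay an event downstream and
# only acknowledge it off the queue after a successful response.
# The endpoint URL and mark_verified() are hypothetical stand-ins.
import json
import requests

ATOM_HOPPER_URL = "https://atomhopper.example.com/events"  # hypothetical

def mark_verified(event):
    # In our deployment this step calls back into StackTach to flag
    # the record as handed off downstream.
    pass

def relay(body, message):
    event = json.loads(body) if isinstance(body, str) else body
    resp = requests.post(ATOM_HOPPER_URL, json=event, timeout=10)
    if resp.status_code == 200:
        mark_verified(event)
        message.ack()      # safe to remove from the queue now
    else:
        message.requeue()  # keep it; retry the handoff later
```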
So now we can mark that record as verified, and later we can see if there were any problems. But the mission was: get it out of production as quickly as possible, store it, and then do the analysis on it. We wanted to make sure that if something failed, we didn't drop anything.

So these are some of the reports; these go out every day. Senior management gets them, everyone gets to see them. We want to know how many instance events came through during the day, thousands and thousands of these verified events, and these are the error codes coming back as events get handed off to AtomHopper and the downstream systems. If something doesn't match in the verification, we get a report here that says the instance type doesn't match: we thought the instance type was going to be a six and we got a seven, so something's up. Now, we could give that to the reconciler and let it try to validate automatically, but we keep the reconciler turned off; we want people to manually check that stuff and figure out why it's a six and not a seven. Then we can generate all these graphs and reports: our bandwidth changes, the exists.verified records as they get handed down to AtomHopper and the other downstream systems. Even the systems downstream from those: we have an internal group called usage mediation, and we can hand off to them as well, so we can see if there are any drops even down at that level.

The interesting thing now is that we're generating samples from the events. That's a very important thing to consider: as we work on these events and get valuable information out of them, they produce samples that go back into the system and become part of the monitoring system as well, so we get the historical trends. So that's StackTach in a nutshell. That's the mission we had with it.

So let's talk a little bit about Ceilometer. For people not familiar with it: I didn't know what a ceilometer was. A ceilometer is a device for measuring the height of a cloud; it's a laser they shoot up. It's a clever name. Between that and Cinder, I don't know which is the better name (Cinder is for block storage, cinder block), but Ceilometer is pretty cool too.

So anyway, Ceilometer was proposed around the Folsom time frame, a little bit later than StackTach, and its first mandate was as a billing solution. At the time we were thinking about performance and those other sorts of enhancements, not billing, especially in the StackTach world, so we didn't pay a whole lot of attention to it. Around Grizzly, the mission changed to being a monitoring solution, so our ears perked up and we said, okay, this is something we have to keep an eye on. Around that time the Foundation created the incubation process, and Ceilometer was incubated around the Grizzly/Havana time frame. When StackTach started, we didn't have any concept of incubation or anything; it was just an external project that fed off of the OpenStack system.
Around the start of Havana, we announced on the mailing list that we were going to port StackTach over to Ceilometer, which was great for us, because StackTach grew very organically and there were things in the design we didn't quite get right and wanted to change. We solved a lot of problems, but some things we could do better.

Some of the problems we had: we were limited to a single worker consuming from the queues, because we're dealing with temporal data, and if we pulled in events from multiple readers, they could get out of sync; we could see a .end come in before a .start, and that would fool us up. There was no idempotency in the processing pipeline: if the worker went down, we would probably lose what our pipeline looked like, and that could be fatal.

The database migrations were horrific. This is a lot of data, and we have to store 90 or 180 days of it before we can archive. So whenever product management came around and said, "Hey, wouldn't it be really cool if we could check this? Let's do something that does image type against region," we would go, okay, we need to update the database table, and this database is really big, so the migrations took a really long time. We wanted a better schema and a better system for recording this stuff.

We also record the entire event, the whole JSON blob we get; we store the whole thing in the database. So if someone gets an idea down the road, even though we're not using those fields yet, we've got a hundred and eighty days' worth of data, and we can go back and do the analysis right away instead of waiting another 90 days to find out whether we're getting meaningful data. That's really cool in theory, but in practice, when you have to grab a big JSON blob out of a database, decode it, look at it, throw it away, and grab another one, it's really time consuming. When we were backfilling some of our databases, these operations were taking an entire day just to backfill a week of data, and that just wasn't feasible, so we had to find a better way.

Also, our batch window was getting tighter and tighter. We run some big reports against the database; we start them at, say, one o'clock in the morning, and they have to be ready by eight o'clock, and this thing is crunching a lot of data. We want them in a timely fashion, and we could see that the way we were going, this wasn't going to scale. We run StackTach in production against all of our regions and all of our cells, and we gather a lot of data from it, so we wanted a better way to do that.

And then there was all the massaging of the events. These events come in, and we had a lot of stuff in code that said: okay, this is a .create event,
let's pull out these fields; and this is a .end event, let's pull out those fields. There had to be a better way. We wanted to make it more data-driven and not repeat our event definitions across different systems. And we didn't use Oslo. Oslo, if you're a developer, is the common library used by a lot of the different OpenStack projects, and we wanted to make use of it so we'd be a better-behaved citizen. So yeah, we had some things we wanted to change.

The problem, though, was that Ceilometer is sample-based. Ceilometer thinks of problems like "CPU is at 70%," and it didn't have a concept of what an event is, how rich that data is, and how to query it efficiently. So it's a big change and a different way of thinking. Fortunately, the Ceilometer team was very receptive: yes, let's figure out how to make this work.

So we did get some good stuff done in Havana, not as much as we wanted, but we made a lot of progress. A lot of the work was back in Oslo, at the RPC notification layer. We added support for acknowledgement and requeue semantics on the queuing system, so now if an event comes through and something fails, we can push it back into the queue and not drop it. That's all in OpenStack Oslo now.

Then there's how we store those events. We've been working on different schemas for storing them, in SQL-based databases and NoSQL-based databases. We keep the trimmed-down version of the event, the pieces we really need, in a separate, highly indexed table that we can access very quickly, without all the pains we used to have in StackTach. But we also have a different mechanism for storing the entire message body. It isn't an opaque JSON blob anymore; when the time comes and someone says, "What if we did image type?", we can go through and grab all the instance types or image types very quickly and see how that works. A lot of these are branches still in the pipeline, we're trying to get them approved, but the code is all there. Millisecond timing resolution on the storage systems, too: Ceilometer had second-based timestamps, and we obviously needed higher resolution.

The translation of a notification into an event is all data-driven now, so anyone can do it just by changing a YAML file; I'll get into that a little more in a second. And then we have this entire system called the trigger pipeline for tracking events as they come through and building pipelines of ordered, related events, so we don't have to do fancy database queries at the end. I'll give you some examples of that.

If you've used Riemann (I think there was a talk earlier about riemann.io and how you can use it for monitoring), Riemann is a really cool tool, and we basically modeled this on Riemann, but it solves some of the problems we ran into when we looked at Riemann: how do you share it across multiple nodes? How do you have a stream that can exist in multiple places? Riemann is all in memory.
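The event-plus-traits storage idea is straightforward to sketch in SQL terms: a slim, heavily indexed events table plus a key/value traits table, instead of one big denormalized blob. This is an illustration of the idea, not Ceilometer's actual schema.

```python
# Sketch of the event/trait split in a SQL store. Illustrative only;
# not Ceilometer's actual schema.
from sqlalchemy import (Column, DateTime, ForeignKey, Integer, String,
                        create_engine)
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Event(Base):
    __tablename__ = "events"
    id = Column(Integer, primary_key=True)
    event_type = Column(String(255), index=True)  # e.g. compute.instance.exists
    generated = Column(DateTime, index=True)      # millisecond resolution
    message_id = Column(String(50), unique=True)  # dedupe / idempotency
    traits = relationship("Trait", back_populates="event")

class Trait(Base):
    __tablename__ = "traits"
    id = Column(Integer, primary_key=True)
    event_id = Column(Integer, ForeignKey("events.id"), index=True)
    key = Column(String(255), index=True)         # e.g. "tenant_id"
    value = Column(String(255), index=True)
    event = relationship("Event", back_populates="traits")

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
```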
So if the system went down, you'd lose a stream. We wanted to make it persistent; there are a lot of goodies in there. So that's what we got done in Havana, and in Icehouse we want to finish the job. We want to get all the StackTach functionality over so we can start to sunset one system and bring on the other.

The place it starts is with that mapping of the notification into the event. Remember the event I showed you earlier: a big monster, several K long. A chunk of data comes in and we don't need all of it; we just need, say, the stuff in yellow, like the instance type, the state, and the tenant ID. We wanted a way to pull that out without changing code every time.

So one of the pieces that was added is this grammar. It's YAML-based, and it lets us say things like: when an event type comes in that starts with compute.instance.whatever, pull out these traits. The way we store an event now, instead of one big denormalized table, is as events and traits; a trait is a key/value pair attached to the event. So now, programmatically or through a configuration file, we can define which fields we want to pull out. You can even have plugins in there: if there's some fancy or encoded field you need to get at, you can add code for that too. And it supports things like inheritance. Down below we have some very special event types that look like the other ones, and they inherit everything from the instance traits up above; for an instance.exists or an instance.update, we also retrieve these other fields, audit_period_beginning and audit_period_ending, without copy-pasting and duplicating everything. So it's a nice rich grammar in which we should be able to define all the events in the system, not just for Nova. We want to do it for Cinder, for Quantum (Neutron, I'm sorry), and all the other systems as well. I'll show a sketch of that grammar in a moment.

So now we've got an event: something highly indexed, highly available in the database, and persisted. We want to hand that off to the routing system, that industrial-strength Riemann-style processor I talked about briefly. As the collector (what we'd call a worker in StackTach, Ceilometer calls a collector) pulls in these events, the event manager looks at each one, pulls out the fields it thinks are important, and creates these sort of virtual collections of events, and it can do that across multiple collectors in a consistent fashion. So we can do things like create a pipeline for every unique request ID that comes in. From the moment it hits the API to the last .end event, we get one sequence of related events. I don't have to go back to the database and run fancy queries; it's handed to us ready to go. If I want a different stream for every instance ID in the system, or I want to look at all the related events on a particular server, or by a particular tenant, I can set those up and have it watch and create those streams for me. That's very powerful, and the way we do it, again, is another YAML grammar.
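First, here's the trait-mapping grammar promised above. This sketch is modeled on the event definitions file that was being proposed; treat the exact keys as approximate rather than final.

```yaml
# Sketch of the trait-mapping grammar. Modeled on the proposed event
# definitions file; exact keys are approximate, not final.
- event_type: compute.instance.*
  traits: &instance_traits
    tenant_id:
      fields: payload.tenant_id
    instance_id:
      fields: payload.instance_id
    instance_type:
      fields: payload.instance_type
    state:
      fields: payload.state
- event_type:
    - compute.instance.exists
    - compute.instance.update
  traits:
    <<: *instance_traits            # inherit the instance traits above
    audit_period_beginning:
      type: datetime
      fields: payload.audit_period_beginning
    audit_period_ending:
      type: datetime
      fields: payload.audit_period_ending
```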
Back to the pipeline grammar: this would be a very simple one, grab all unique requests. It says: match everything, distinguish the events by request ID, and expire the stream an hour after the last event we've seen. A request ID comes in, gets tagged on at the API, and goes all the way through, and we don't really know what the last event will be; it could be a resize operation, it could be a delete. If you did hard-code the terminal events, it would be great, because you wouldn't have to wait an hour and could trigger the pipeline right away. But if you don't really know, you can make something generic like this and say: I haven't seen anything in an hour, that's probably the end of it, let's see what's in there, and then it gets passed off for processing.

Or you can get fancy. This one here is for tracking all the exists records that come in at the end of the day. What we do is match all the compute.instance events with a timestamp inside the audit period, from beginning of day to end of day, and we also look for the exists records with an audit period field in that range. We distinguish by instance ID, so we get a pipeline of related events for each instance ID in that range. We delay it by about an hour, just in case there's jitter in the system: say a collector is slow and events are coming in out of order, we can wait a little and have them reordered automatically across all the different collectors. The firing criterion is seeing that exists record come in. Once that happens, we tentatively trigger, wait about an hour to let things settle, and then fire. And if there's something else related, we can pull in events from elsewhere in the system: a load criterion can say, go back and pull in the exists record from the previous day as well.

What we get, again, is that temporally ordered set of events, on a per-instance basis; you'll get hundreds of thousands of these firing through the system, spread out across your infrastructure. And it'll look kind of like this. Say it's a resize operation that happened during the day. That line would be the end of the previous audit period; we get all the events in the period down to the exists record, and we also get the old verified record from the previous day. We can pass that through a set of transformation pipelines that do all the analysis and verifications I showed you earlier in those reports. We used to have a hard-coded pipeline for that; now it's plug-in based, so you can drop in these little widgets and have them do whatever you need for your business. We'll ship a bunch of them out of the box anyway. And at the end of it, you can generate new events. You store that stuff, or issue new events that go back into the system and create new pipelines; the whole system is the snake eating its tail. But it can also generate raw notifications, if you want to pass those to other systems, or samples, if you want to send things out to StatsD or have Graphite look at them.
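Here's a sketch of what those two pipeline definitions might look like in the YAML grammar. The keys are illustrative; this grammar was still working through review at the time, so the shape follows the talk rather than a finished format.

```yaml
# Sketch of the two trigger-pipeline definitions just described.
# Keys are illustrative; the grammar was still in review.
- name: unique_requests
  match: "*"                      # every event
  distinguished_by: request_id    # one stream per request
  expiry: 3600                    # fire 1h after the last event seen
  handlers:
    - pipeline_handler: request_lifecycle

- name: daily_exists
  match: compute.instance.*
  distinguished_by:
    - instance_id
    - timestamp: day              # bucket streams by audit day
  fire_criteria:
    - event_type: compute.instance.exists
  fire_delay: 3600                # settle out collector jitter
  load_criteria:
    - event_type: compute.instance.exists.verified
      from: previous_day          # pull in yesterday's verified record
  handlers:
    - pipeline_handler: usage_verification
```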
All that stuff can go into the system, and Ceilometer can consume it and make it available to other systems. So we think that's going to solve a lot of our problems. In terms of database schema, rather than generating new tables every time we want to track something new, we just generate new events. Since we've spent all this time making the event model rich, highly indexed, and available, we don't need to create all those tables anymore; I can just create a new event, and events can carry a very rich payload, so we put what we need in there.

So that sounds great, I hope, but we've still got a lot of other stuff to do. The reporting framework is one thing we need to work on, and we're gathering ideas about how to do it. We don't want that batch window where, at two o'clock in the morning, we run monster queries to generate everything. We want reports that build throughout the day, the same way the trigger pipelines work: as events come through, we build up the reports and generate little events, and at the end of the day we just assemble them and say, there it is.

We've got to talk to all the other groups: Neutron; Heat, which I think just added a bunch of notifications. We need to create trigger pipelines for those groups as well. All those fancy reports we got out of Nova for performance tracking and the rest, we want for everything: for Cinder, for Neutron, all of them. There's a lot of knowledge in there, and it's going to be interesting to see what correlations we can make between all the different systems.

And then there's working with the other groups to actually get more notification support. It's part of Oslo, it's a standard part of OpenStack, anyone can tap into it, and it's very easy: it's two lines of code to generate a notification. It's different from logging, so don't think of it that way. Think of it in terms of a state change that's really important and needs to make it out, or an error, or a warning, something really critical as opposed to a log message, because it's structured data.

So we need help; many hands make light work. Please, if you have any developers to spare, we'd love some help, and we think we can push this a lot further. There's some really cool stuff the business groups are thinking about that we'd love to get in there as well. How do we do zombie and orphan detection against the instances? IPv4 management is a real pain in the butt; everyone's running out of IP addresses, so how do we get better, faster response on allocating them? Capacity planning: the finance department is going to be screaming, when do we order the new servers, when do we fire up a rack? We think these events are going to give us the tools to get there.

And again, since it's in Ceilometer, these are all things that can be consumed by other systems already. This is not a silo we're building where we want everything in one place; it's a distribution system. We take stuff in, we do some transformations, and we make it available to other people. Ceilometer has a very rich API on it.
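On the "two lines of code" point, here is roughly what emitting a notification looks like using the oslo.messaging notifier API, the descendant of the notifier layer described here. The transport URL, publisher_id, and payload values are illustrative.

```python
# Sketch of emitting a notification via oslo.messaging. The
# transport URL, publisher_id, and payload are illustrative.
from oslo_config import cfg
import oslo_messaging as messaging

transport = messaging.get_notification_transport(
    cfg.CONF, url="rabbit://guest:guest@localhost:5672/")
notifier = messaging.Notifier(transport, publisher_id="compute.host-01",
                              driver="messaging", topics=["notifications"])

# The two lines that matter: build a payload, emit the event.
payload = {"instance_id": "a7f3...", "tenant_id": "5c4b...", "state": "resized"}
notifier.info({}, "compute.instance.resize.end", payload)
```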
We're making it even better, so you'll be able to consume this stuff from outside, and I think something like AtomHopper will work really well with it too, so we can take these events and make them available through RSS feeds and other mechanisms for downstream systems.

So, you want to help out? There's a wiki page; it'll tell you all about it, where we hang out, where we are on IRC, how to get involved, and how to make contributions. That's a great place to start, but if you have any questions, you can just catch any of us and ask, and we'll be happy to help. That's the QR code for the slides if you want all that. So: questions?

[Audience question about storage backends] I love Elasticsearch, and this was one of the things that came out of this week. We've been flip-flopping almost weekly between MySQL and Mongo, or Cassandra, or whatever, and the way it looks now, we're probably going to do a lot of the base storage in one of those systems and then put Elasticsearch on top for everything else. So that's the takeaway from the experiments we're going to run.

One more, here we go: Horizon notifications. I'd never thought about it, but notifications are good, so I don't see why not. We've got routing systems in most of the services anyway, so if someone doesn't want to collect those, they can just turn them off. It'd be interesting to gather some use cases around it; it might actually be really good for usability testing too. Right, good, yeah, usability. So the question was: would events be useful inside of Horizon? And yeah, I think there's the A/B testing side of it, and the usability side could be pretty neat: find out what people are doing, or find out who the active tenants are, that sort of thing.

So, is the question that on the AMQP side it could be fragile? We run a durable queue ourselves, and then we have the secondary system; we run two queues, so Yagi can grab the event first and then we have the other one. It still doesn't help with catastrophic failure, but it is clustered and it is durable. I'd love to get more ideas on that, though, and there might be things we can do to make it better. If we find it's a bottleneck, maybe we'll make it a plug-in system so we can use other mechanisms. But that's the plan right now.

Yeah, there are blueprints up on that pipeline trigger manager work, so feedback is always good. Even if you're not a developer, it'd be nice to read them and give some feedback: think about your use cases and whether this applies to you and your group. That could be useful. More questions? Well, thank you.