Okay, go ahead and get started. Hello, Stackers and future Stackers. I'm Mike Wilson, and this is Jun Park. I'm going to let Jun sit down for now — he just got done taking a nap, so I don't want him to take another one during the presentation.

Let me start out by telling you who we are. We're a new company on the scene. We're from Bluehost, owned by Endurance International Group — a couple of you may have heard of them. We say that we're the largest shared hosting company in the United States; about six to eight months ago we became larger than GoDaddy, as far as we know from what's published. So we're a pretty big company, and we became very interested in going bigger again about six to eight months ago. We love OpenStack. We've been using it all this time, it's given us a great solution to the problems that we had, and we love the community, the software, the idea — we're totally on board with that. So first of all, I just want to express appreciation to the community that takes care of this. Thank you.

Just on how we're going to do this presentation: we're going to try to say everything we have to say in about 30 minutes and leave ten minutes of fudge time for questions. There's also a question-and-answer session after the presentation is over. If you want to ask questions during the presentation, we'll answer them as long as they're short; otherwise we'll just say we'll talk about it later offline.

I want to start out with a poll. How many of you are running OpenStack? Okay — and the rest of you want to be, I know that. How many of you are running a multi-node installation? Okay. Who's running with a hundred hypervisors? Five hundred? A thousand? Ten thousand? You don't count, you work for us. Okay, so we're right about there: we're at a little over 10,000 hypervisor nodes — about 16,000, somewhere around there, today — and we're kind of crazy: we're doing it on Folsom, and we're doing it without cells. We didn't know you couldn't do that, so we just did it.

So let me tell you our story. About eight months ago my boss came and talked to me and said: Mike, something big is going down. We're making an acquisition, and we're going from being a shared hosting company to being more of a platform company for all of EIG's brands. Right now you have 2,000 servers and we manage them pretty well; in about six months you're going to have 20,000 servers, and we're going to have to change the way we do things. We want to be able to take hardware out into the data center, plug it in, and not mess with it — it needs to provision itself, come up ready to use, and it all needs to be automated. We have to be able to plan to scale to multiple data centers. And by the way, you've got about two months to come up with a solution, and I can't tell you specifics because it's all secret — we're not done acquiring them yet. So, ah — that was kind of stressful.

I started working and drew up some high-level requirements. What I needed was centralized management, horizontal scalability, and abstractions for physical resources, logical deployments, devices, et cetera. I really wanted an open-source project that we could grow along with, or that we could help contribute to, with a strong, vibrant community. Really, what we were looking at were a lot of the very attractive features of cloud: migration, imaging, provisioning. We wanted all of that, and we wanted to have a public cloud offering in our future.
That's definitely in our future. So instead of buying something, or writing something, or coming up with something on our own that we'd have to throw out, we said we really want to be on a cloud platform. We looked around, and OpenStack looked like the best structured, looked like it was going to survive, and looked like it had good scaling potential and good growth ahead of it. So that's what we chose.

So again, our environment today. I wish I had the exact numbers, but I didn't make the call — we're somewhere around 16,000 to 17,000 nodes. We're literally bringing up hundreds of new hardware nodes per day. Our use case is different: we're not really a public cloud; we're the only consumers of our cloud. We have a couple of tenants, mostly controlled by us. But our customers on these VMs have a public network and they're directly attached — we give them an IP and they stay with that IP. We plan on adding private networking later, but we're kind of a Frankencloud, or something like that. We use it a little differently than most people do.

So what I want to talk about: the issues we've had with scalability and stability; some things we've had to do to rethink the way the current Folsom Quantum model works — which, as I understand it, is fairly similar to what Grizzly has; some of our operational issues; and the conclusions and solutions we came up with. Let's dive right in.

So why is it difficult to scale, for us? What we found so far: we encountered bugs, we had learning experiences, we had a lot of the deployment problems anyone would have with new software. That's not a big deal; we expected that. What we didn't expect — what we were new to — was the messaging system. It's super integral to OpenStack: any time you do anything — provision an instance, reboot an instance, ask for a VNC console, whatever — that's a message down the message bus. This was new for us. We hoped it would just work. Oh, we were so naive. We've run into problems there; we'll get into it.

We also ran into problems with our MySQL server. Since we run everything in a single cell, it all runs against a single back end, and we soon hit scaling issues with that. We also ran into some heavy APIs: they didn't show any problems at low scale, but once we scaled up we started seeing all this traffic and went, oh no, we've got to take care of this — what are we going to do? So we patched up some code and did a few different things to handle it; we'll talk about that later too.

I think this is no surprise to anyone in the community, but OpenStack is hard to diagnose, and it's hard to predict how it's going to act. For example, we don't have a simulator or an emulator for high-scale testing like some other platforms have. We don't really have much of a guide as to how things are going to behave when you scale up — nobody's been there yet. And our error messages — unless you're a trained operator and you're used to them, you don't know what they mean. They're not very verbose.
Sometimes they're a stack trace, which really requires that you have a detailed knowledge of the code base. I'm a Perl guy — it was interesting getting into Python; I really like it, but I had to come up to speed very quickly.

So, some of the things we've improved in Nova. We had to add monitoring and some different troubleshooting tools — things that were helpful for us. We added a service ping API. It's just an extension we made to Nova; it sends a — go ahead. Yeah, we hope so; we're going to give you all our code at the end of the presentation, so you can laugh at us or whatever. Yeah, we would love for it to be upstream. Anyway, the service ping is super helpful for us, because we have services drop out all the time, so before we do anything we verify that the service really is up — really up, in the sense that we can send messages to it and get a response.

We added some additional task states. We like to know when Glance is downloading an image; instead of just "networking," we want to know other things about what networking is doing. So we added a little more granularity there, so we could see what was going on. We report more verbose, human-readable errors to the instance faults table. We use LVM as a back end, so we had to add a lot of LVM functionality — again, we might have done it wrong, but we added a lot of our own code to make it work the way we thought it should. Also, stopping instances: we didn't like ripping out the power cord; we wanted a nice, gentle way to shut people down, so we added some of that.

As far as stability goes — I think this one actually got fixed — we hit some bugs in the OVS VIF driver very early on. We also had problems scaling the scheduler: when you have a ton of nodes all reporting state all the time, it bogs down and the scheduler becomes hard to use. So what we really did is rip it out and write our own scheduler. It's a very simple scheduler, geared towards our use case — anyone else who saw it would probably cringe — but it works really well for us.
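To give a feel for what "very simple" means here, below is an illustrative sketch — not the actual Bluehost scheduler; the host-state fields, flavor keys, and random tie-breaking are assumptions — of the kind of pick-a-host logic a stripped-down scheduler like this might use:

```python
# Illustrative only -- not the actual Bluehost scheduler. The host-state
# fields, flavor keys, and random tie-breaking are assumptions; the point is
# simply "pick any host that is alive and has room", with no weighing passes.
import random


def pick_host(host_states, flavor):
    """host_states: iterable of dicts, e.g.
         {'host': 'c-0042', 'service_up': True,
          'free_ram_mb': 48000, 'free_disk_gb': 900}
       flavor: dict, e.g. {'memory_mb': 2048, 'root_gb': 20}
    """
    candidates = [
        hs for hs in host_states
        if hs["service_up"]
        and hs["free_ram_mb"] >= flavor["memory_mb"]
        and hs["free_disk_gb"] >= flavor["root_gb"]
    ]
    if not candidates:
        raise RuntimeError("No valid host found")
    # Randomize so concurrent requests don't all pile onto the same host.
    return random.choice(candidates)["host"]
```

For a single-tenant, fairly homogeneous farm like the one described here, "alive and has room" is often all the scheduling policy you need.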
So I want to talk about our MySQL performance problems. (By the way, did I miss any hands up earlier? Okay.) You might know that in Folsom all of our services have a direct connection to the database, and they push a lot of queries against it — too many, in our opinion. We did notice it was far more reads than writes, but we still have a fair number of writes. Sometimes we have queries that return a huge number of results — we're very large and have only a couple of tenants in our cloud; again, that's us using OpenStack the way no one intended — and some of our queries would end up doing almost a table scan. And then there are the periodic tasks. This could just be my imagination, but these things all seem to gravitate together: even with the random skewing, we would definitely see a periodicity of load on the database. It was kill the database, back off, kill the database, back off — and it wasn't at the periodic task interval; it was at different intervals.

So what happens in MySQL when you do this? Well, you have a max_connections setting. And since we use InnoDB — to get all the nice constraints and row locking — InnoDB has a thread concurrency setting (innodb_thread_concurrency), which to me basically means the number of threads that can be executing queries against the InnoDB engine at any one time. There's also this other thing — I don't know if you've heard of it; we hadn't — InnoDB concurrency tickets (innodb_concurrency_tickets). Basically, when you get into the concurrency queue and start executing, you're given, say, 500 tickets, and each ticket is redeemable for a single row. So when you have a huge result set, and say ten of those queries in your execution queue each returning 5,000 rows, each of them has to go back through the queue ten times, and it causes this death-spiral situation: you're never going to be able to service all the queries. So we had to tune that; we definitely could not run with default settings.

So just increase the concurrency and the tickets, right — that's the answer? No, that's not quite the answer. It's definitely a must, and we've gotten really far with it, but — and we don't know why; maybe it's undocumented, maybe it's a bug — we still have queries re-queueing, going back into the queue, even though we've given them an absurd number of tickets to return their results with. They still re-queue, and we still get into the death spiral. So we don't really have a solution; we have a workaround. What we've done is create a read-only DB handle. I combed through the code base — the part that mattered to us — looked at all the DB queries, and went: okay, these queries are not sensitive to slave lag, I'm just going to send them to the slave cluster, I don't care about them hitting my write master. And even for a few queries where I wasn't sure: that's a really painful query, I need to send it to the read cluster, and we're just going to have to deal with the lag on our application end. That has really helped us scale once it was in place. I feel like we can scale pretty far with it — it was a fairly good solution, even if it's a little ad hoc.
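Roughly, the read/write split looks like the sketch below. This is only a minimal illustration, not the actual patch — the DSNs, the `read_only` flag, and the example query are assumptions; the real change was threaded through Nova's SQLAlchemy session handling rather than a standalone module.

```python
# A minimal sketch of the read/write split, assuming illustrative DSNs and a
# read_only flag -- the real change was a patch to Nova's SQLAlchemy session
# code, not a standalone module like this.
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

# Everything that writes, or that cannot tolerate replication lag, stays here.
master = create_engine("mysql://nova:secret@db-master/nova", pool_recycle=3600)
# Queries audited as lag-tolerant go to a load-balanced pool of slaves.
slaves = create_engine("mysql://nova:secret@db-slaves-vip/nova", pool_recycle=3600)

MasterSession = sessionmaker(bind=master)
SlaveSession = sessionmaker(bind=slaves)


def get_session(read_only=False):
    """Call sites that were audited as lag-tolerant pass read_only=True."""
    return SlaveSession() if read_only else MasterSession()


# Example: a heavy, lag-tolerant query that is safe to serve from a slave.
def count_active_instances():
    session = get_session(read_only=True)
    try:
        return session.execute(
            text("SELECT COUNT(*) FROM instances WHERE deleted = 0")
        ).scalar()
    finally:
        session.close()
```

Callers opt in per query site, which matches the "comb through the code and decide query by query" process described above.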
Now, our messaging system. Do you guys ever watch Indiana Jones — Raiders of the Lost Ark... remember this guy? (Last Crusade, sorry.) So the bad, evil guy comes in and he's going to take the chalice, and he looks at all of them, and the knight says: you need to choose wisely — one of them gives you eternal life, the other one, not so much. So he chooses the shiniest, best-looking, most bejeweled goblet, drinks from it, and turns to dust. He dies. It's horrible. And at the end the knight says: he chose poorly.

So we chose Qpid. We chose poorly, I think. We did it because it looked very shiny on the outside when we researched it: it looked fast, the benchmarks all looked really good, it looked easier to set up, and it had a really nice clustering mode. And I'm not bashing on Qpid — it could be that we've configured it wrong, it could be that we're just using the wrong release — but it performs very badly for us and is very unstable. We lose messages all the time; we lose subscriptions to topics. Working through these bugs, we found problems in the server, we found problems in the client code, and we found unhandled exceptions and inconsistent states in the Folsom Oslo code. But really, the conclusion we came to is that, at least in our OpenStack deployment, the broker is an unnecessary bottleneck: the features and capabilities it provides, we don't really need — we don't care. We like AMQP, we like that abstraction, but we think the broker is too complex. Our recommendation, if you really want to scale far: we've got to figure out a way to rip out that broker, rip out that hub-and-spoke bottleneck. Right now we're researching ZeroMQ as a replacement. We're excited about it; we've only been at it for about two weeks, so we're not completely sure what we're going to do, but we're pinning a lot of hopes on it. Otherwise we have to do nasty workarounds.

So this is roughly how we plan on scaling — probably up to a hundred thousand or so nodes, or until we can get to Grizzly and start using cells. We have a group of controllers here, all load balanced, running our core controller services: the API, the scheduler, the Quantum server, Keystone. Some of you might think I'm crazy here, but this works for us because the application calling our cloud APIs, as we know, does its own locking — it makes sure there are no races. So this is safe for us. We also run a cluster of Qpid servers. That's not for capacity — it actually slows things down — but when one of them crashes, as they often do, we don't lose service: we can hurry up and bring it back, get it re-synced and back into the cluster, without a loss of service for our customers. We have a single MySQL master that is really beefy — we should have drawn that guy a lot bigger, with strong arms — and a big cluster of read-only MySQL slaves. These are all load balanced — load balancers here, here, and here — and then of course we have our server farm. It's very large, with our heavily modified OVS Quantum plugin running there, and a somewhat modified nova-compute.

Any questions about this? (Someone asked about the controller hardware.) It's 64 cores, 64 gigabytes of RAM, I believe, and it runs on SSDs — basically whatever the best thing we had in the data center was, we put it there. We're libvirt/KVM. Anything else? Sure — storage back ends: we use local disk for our dedicated products, and a kind of proprietary storage system that uses iSCSI as a transport for our VPS stuff. We're not provisioning shared hosting on this yet, just dedicated and VPS — and VPS, by the way, is only in kind of alpha/beta status; it's not a public offering yet.

I'm going to go ahead and pass the mic over to Jun. Hold on, Jun — I'm going to introduce you and give you some accolades. You saw the picture of me pulling out my curly hair; then we went and found Jun, and it was a really fortunate find. This will be the third cloud of his career: he built a cloud at NTT, and he built another cloud at Verio, which got bought by NTT.
It's kind of weird. So Jun came to us with a lot of experience and a lot of perspective from what he'd done before. I really appreciate Jun — without Jun I would be a sad, sad individual. So give him a hand.

Okay. Actually, last year I was at another company, and I released a cloud product using CloudStack. So basically I come from the CloudStack world; now I'm using OpenStack. Before I dive into the next topic — the Quantum network — I'd really like to thank the whole core team. I met with core developers this week and we got a lot of good feedback. We just rushed: we didn't have time to follow the blueprints or the process. We know we probably should have, but we rushed, we implemented, we deployed it to live production, and then we realized: well, we've forked. So we need to somehow start doing something for the community, and this is kind of our first step in trying to do that.

Okay, so: Quantum. While we were struggling with all of those scalability and stability issues, we also tried to figure out what kind of network abstraction layer we could use for our environment. We started analyzing what's there right now in Folsom, and unfortunately we found several problems. First of all, the Quantum API. People other than the developers probably have some imaginary picture of it, but the Quantum API really only manipulates the database. The question is: who is actually dealing with the real network objects? I'm going to talk about that. And the second thing is that there's no API-driven design yet for those actual objects.

Then we tried to find the best plugin for us within the Quantum framework, and we found: oh, there's an Open vSwitch plugin. We knew that Open vSwitch is kind of the future — I'll talk about that shortly — so we were very excited: yes, we can use the OVS plugin! Well... it's not there: there was no feature set we could really use for live production. So here's our approach: we decided to add more intelligence into the OVS plugin agent that runs on every compute node.

We also found that some of the APIs are unnecessarily heavy. For example, get_instance_nw_info: it's a Python-language-level join operation rather than a single MySQL join. So we just added one more API — one call — rather than doing the join at the Python level. We're running more than 10,000 physical servers; there's no way to survive with that kind of API. And even though we found some design issues that really need to be improved, of course we didn't have time, so we just accepted reality, tried to extend the current communication pattern, and focused on adding the new features we needed.
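To make the get_instance_nw_info point concrete, here is a toy comparison — the table and column names, connection details, and function names are made up, not the actual Quantum schema or the Bluehost patch — of a Python-level join versus letting MySQL do the join in a single query:

```python
# Toy comparison, not the actual Quantum schema or the Bluehost patch: the
# table/column names and connection details are illustrative. The point is
# the shape of the query traffic, not the exact SQL.
import pymysql

conn = pymysql.connect(host="db-master", user="quantum",
                       password="secret", database="ovs_quantum")


def nw_info_python_level_join(instance_uuid):
    """Heavy pattern: one query per port, stitched together in Python."""
    with conn.cursor() as cur:
        cur.execute("SELECT id, mac_address FROM ports WHERE device_id = %s",
                    (instance_uuid,))
        ports = cur.fetchall()
        nw_info = []
        for port_id, mac in ports:              # extra round trip per port
            cur.execute("SELECT ip_address FROM ipallocations "
                        "WHERE port_id = %s", (port_id,))
            nw_info.append((port_id, mac, [r[0] for r in cur.fetchall()]))
        return nw_info


def nw_info_sql_join(instance_uuid):
    """Light pattern: a single query, and MySQL does the join."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT p.id, p.mac_address, ip.ip_address "
            "FROM ports p LEFT JOIN ipallocations ip ON ip.port_id = p.id "
            "WHERE p.device_id = %s", (instance_uuid,))
        return cur.fetchall()
```

At hundreds of instance builds per day across thousands of hosts, collapsing the per-port round trips into one query is the difference that matters.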
Let me give a little introduction to Open vSwitch here. Currently Open vSwitch supports OpenFlow version 1.0, and later versions will be supported eventually — I don't know when, but it's an ongoing project. It's mainly designed to support different types of hypervisors, including KVM, and it has been officially merged into the mainline kernel since Linux 3.3, as a replacement for the Linux bridge. So it's potentially a very powerful tool, we believe. Basically it consists of filtering rules and associated actions, so you can do a lot of things — anti-IP-spoofing, destination MAC address filtering, whatever — which means it can be considered a replacement for, or a superset of, ebtables, iptables, and the Linux bridge. It has its own QoS system, or you can use tc, the Linux traffic control, externally. OVS is the fundamental component underneath — you've probably already attended a lot of the third-party OpenFlow controller sessions this week — a lot of those vendor plugins; it's the fundamental element for them right now. So you can find third-party controllers — Big Switch, NEC — with full-fledged functionality. But for us, we were trying to find a simple, simplistic way to deal with our needs. So we focused on the Quantum agent, which is based on OVS.

Okay, now let's walk through the actual workflow as it is today — I think Grizzly is in the same state. Let's say there's a Nova API call to create a new instance, and nova-compute starts going through all the subtasks to create it. This is a very simplified version, and note that we are not using DHCP. Okay, so: create a port by calling the Quantum API. The Quantum server gets it and creates the port... in the database. So where's the real port? Nowhere. Then, interestingly, nova-compute says: oh, I'm the guy who's supposed to create it. I don't know why, but nova-compute has to create the tap interface itself — that really doesn't seem like its job. The motivation of Quantum was: let's pull all the network abstraction out of nova-compute so we can have a completely independent module, with everything driven through the API. But that's not the reality we have right now.

So, interestingly, nova-compute itself creates the tap interface and sets the necessary external IDs on it. The external IDs are metadata: for example, which instance UUID owns this tap, or which network ID this tap belongs to, that kind of thing. But it's only partially done: nova-compute creates the tap, sets up some of the necessary information, assumes "I have a tap now," and moves on — okay, I'm going to finish the rest of the tasks for this new instance. At that point it has already reported: the VM is created, it's ACTIVE, it's working. But it's not working, because nobody has set up the necessary OVS flows yet.

So who's going to do that? The Quantum agent. It periodically looks at the external IDs: oh, somebody changed an external ID — I noticed, because there's a polling loop and it shows up on the next round. Then it calls the Quantum API again, because the external IDs are only partial information and it needs the full port details. Okay, got it — now I can do my part: it deploys the necessary OVS flows, and now, finally, the VM really starts to have network connectivity.
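A very stripped-down sketch of that polling pattern is below — this is not the real agent code; the ovs-vsctl command strings, the "iface-id" key, the show_port() call, and the program_flows() stub are all assumptions made for illustration:

```python
# Stripped-down sketch of the agent polling pattern described above, not the
# real OVS Quantum agent. Command strings, the "iface-id" key, show_port(),
# and program_flows() are assumptions for illustration only.
import subprocess
import time


def run(cmd):
    return subprocess.check_output(cmd, shell=True).decode()


def get_iface_id(port_name):
    """Read the external ID that nova-compute stamped onto the tap port."""
    try:
        out = run(f"ovs-vsctl get Interface {port_name} external_ids:iface-id")
        return out.strip().strip('"')
    except subprocess.CalledProcessError:
        return None          # no iface-id set on this port (yet)


def list_tagged_ports(bridge="br-int"):
    ports = {}
    for name in run(f"ovs-vsctl list-ports {bridge}").split():
        iface_id = get_iface_id(name)
        if iface_id:
            ports[name] = iface_id
    return ports


def program_flows(dev, port_details):
    # Placeholder: this is where the agent would install the per-port OVS
    # flows (MAC filtering, anti-spoofing, QoS) described later in the talk.
    print("would program flows for", dev, port_details.get("mac_address"))


def agent_loop(quantum_client, poll_interval=2):
    known = {}
    while True:
        current = list_tagged_ports()
        for dev, port_id in current.items():
            if known.get(dev) != port_id:          # new or changed port
                details = quantum_client.show_port(port_id)["port"]
                program_flows(dev, details)
        known = current
        time.sleep(poll_interval)
```

The key property — and the complaint being made here — is that the VM is already reported ACTIVE before this loop ever notices the new port.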
So this pattern is basically a completely implicit communication pattern between nova-compute and Quantum. There are historical reasons: in the old days nova-compute had everything, and even though Quantum tried to pull the networking out, something stayed behind. Basically — I prepared more on the slide, but I'll skip it — I think we have to have a true network abstraction layer that works completely independently and is driven through the API. But as I said, we ended up just using it as-is: we extended this communication pattern by adding the extra external IDs we needed, even though we don't like that approach.

So here is our Bluehost-enhanced OVS Quantum plugin. We have some very interesting, unique requirements. Right now we're focusing on the single-data-center network problem, although at some point we'll have to deal with the multi-data-center issue for this network as well. Second — and this is a really interesting requirement — we have to provide a direct public interface to the customer, without any NAT. That's our situation, and we looked for a way to do it in OpenStack. Well, there isn't one: on the public side you're essentially forced to use floating IPs with NAT, and we can't use it that way. More interestingly, while we need to provide a directly attached public network interface, the VMs still should not be able to see each other — we're not allowed to let them. But if they want to talk to another VM through its public IP address, we have to allow that. Wow — a lot of conflicting requirements: they should not see each other, but they should be able to reach each other, if they want to, through public IPs.

So this is our approach: we developed a kind of distributed OpenFlow controller concept using the existing OVS plugin. As I said, we wanted to just add whatever intelligence was necessary into the OVS agent, because the OVS plugin is already a fully distributed model: every compute node is supposed to run its own Quantum agent. Structurally it's already there for us, so why not use it? Now, in terms of dealing with segmentation — everyone knows the VLAN problem, right? The 4K limit. We don't want that; no way. So our approach is very simple: if you can find a solution that doesn't use VLAN tags at all, great — just use it. If you can't, the only remaining choice is to find a protocol that provides a 24-bit ID. Forget about the 12 bits of the current VLAN tag; once you have a 24-bit ID you have 16 million of them, and you're fine — I don't know for how many years, but at least for several years.
I think it should be fine. So QinQ (double-tagged VLANs), VXLAN — there are really a lot of protocols out there. For this talk — for the direct-attached public network — we actually ended up using no tag at all; for the private network provisioning we plan to use QinQ. But the caveat is that our approach is still not API-driven, because we just accepted the current pattern. And then there's the fancier idea of using virtual appliances that hold all the intelligence, as in CloudStack — CloudStack already had a full-fledged network abstraction layer using the virtual appliance approach several years ago, with load balancing and everything already there. So I would say that, in some sense, the OpenStack network abstraction layer is behind compared to other cloud platforms. I know that even during this week there's been a lot of discussion about the L2 and L3 agents and how to handle things like load balancing, so I think we'll get there soon.

Anyway, these are the new features we added: anti-IP-spoofing OpenFlow flows; multiple IP addresses per port — some customers want multiple IPs even on the same port; it's not the same as adding multiple ports to the VM, it's a different use case: they have one single port and don't want to manage multiple NICs, they just want multiple public IP addresses assigned to it; QoS; and, as I said, optimized intra-host traffic flows within the same host. Now I'll show you the actual detailed algorithm. This is implemented right now on Folsom, and the same applies to Grizzly.

As you can see here, there's a pair of virtual interfaces automatically created by the Quantum plugin. There's a kind of tunnel between them: whenever a packet hits one end, the other end receives it. And tags are automatically assigned — tag 1 and tag 2 for the different networks — so they can be differentiated. That's the current implementation of the OVS Quantum plugin. (Flat — sorry, let me correct that.) There's a new type of networking introduced in Folsom, the concept of a provider network, meaning there's an external interface we can just utilize. So we're using a flat provider network. By the way, there's a bug there with the MTU — it seems like nobody really uses this mode of the OVS plugin, because it's a critical bug: when OVS adds a tag you need to increase the MTU size, otherwise packets get truncated and nothing works. That bug still seems to be there, so we had to apply a fix. Anyway, we just got rid of the tags entirely.

So for incoming packets, what we do is look at the destination MAC address. Oh, I know this MAC address — it belongs to VM2 on this host — I'm going to allow it through. Any other MAC address: no way, I don't have that MAC address on my host, drop it. There's more detail at that level — ARP in particular needs a lot of special care — but basically it works like this.
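For illustration, destination-MAC filtering of this kind can be expressed as OpenFlow rules pushed with ovs-ofctl from the agent. The port numbers, MAC address, priorities, and helper names below are made up, and broadcast/ARP handling — which needs extra rules, as noted above — is deliberately left out:

```python
# Illustrative flows only: OpenFlow port numbers, the MAC address, and the
# priorities are made up, and broadcast/ARP handling (which needs extra rules)
# is left out. The idea is simply "known destination MAC -> allow, else drop".
import subprocess


def add_flow(flow, bridge="br-int"):
    subprocess.check_call(["ovs-ofctl", "add-flow", bridge, flow])


def allow_inbound_for_vm(uplink_ofport, vm_ofport, vm_mac):
    # Traffic arriving on the physical uplink and addressed to this VM's MAC
    # is forwarded to the VM's port.
    add_flow(f"priority=100,in_port={uplink_ofport},dl_dst={vm_mac},"
             f"actions=output:{vm_ofport}")


def drop_unknown_inbound(uplink_ofport):
    # Anything from the uplink whose destination MAC did not match a
    # higher-priority rule never reaches a VM on this host.
    add_flow(f"priority=10,in_port={uplink_ofport},actions=drop")


# Example with hypothetical values:
# allow_inbound_for_vm(uplink_ofport=1, vm_ofport=5, vm_mac="fa:16:3e:aa:bb:cc")
# drop_unknown_inbound(uplink_ofport=1)
```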
What about outgoing packets? Okay: from VM1, you can only use 10 Mbps — we deploy QoS on the port, and everything is based on OVS, on OVS flows only. Then, for outgoing traffic: I'm going to look at your source IP. Oh, you spoofed it — not allowed. So by matching on the source IP we can immediately deploy anti-IP-spoofing. As you can see, this is kind of a superset of ebtables, iptables, and the L2/L3 agents — we can do it all in one framework. As I said, I'm not claiming this is good design or anything, but I'd say it's a good illustration, a good hint: OVS is potentially very powerful, and you can utilize it like this.

The last use case is very interesting. Let's say VM1 tries to talk to VM2 through its public IP address, and both are on the same host. We introduced another pair of loopback ports, so that when a packet goes out from VM1, on the far side we look at the destination MAC address: oh, I know this one, it's VM2 — okay, I'm going to hairpin this packet back through the new virtual interface pair. Then on the other side, again, look at the destination: okay, this is VM2, deliver it. So that's the entire algorithm we deployed for our production — for this very particular use case where people want a directly attached interface but you still want to protect it with all the iptables/ebtables-style functionality. You can just do it this way. All the source code has already been uploaded; we'll show that later.

The last topic we'd like to cover is operational issues. Managing such a huge number of physical servers, we of course ran into a lot of issues from an operational perspective. Reboots: in many cases we need to reboot a host, and the expectation is that right after it comes back, all the pre-existing VMs get restarted, right? But actually there's a circular dependency between some of the critical components: to bring nova-compute up properly the ports have to be there, and to bring the ports up we need nova-compute. The simple solution: restart things several times. Restarting services: the expectation is, don't touch the existing things, only pick up whatever new stuff needs updating. Well, no — whenever we restart something, things get shuffled around or reset, and we just get unnecessary hiccups. Monitoring health: as Mike illustrated with all the Qpid issues and the communication-pattern issues, we manually added some special new API calls to check the health of each component. And the last thing I'd like to add here: XML customization. Wow — in the OpenStack framework there is no way to customize the libvirt XML; it's strongly tied into the Python code. To add one single line, I had to rework that part of the code. But there are a lot of extra requests from our customers that need XML customization, so we just did our own patches.

Okay, so let's wrap up. This is the full matrix of what we've done so far in terms of stability and scalability: MySQL, Qpid, Quantum. Some of the workarounds seem good, some of them we don't like — but that's where things stand right now. (Oh — sure. Yeah, yeah.)
Yeah, I was going to do a live demo, but before that, let me finish up this last slide. Sorry — see here: in order to really make OpenStack succeed, I think there are two important things we need to address clearly. First, scalability: we have to have a truly scalable messaging system. Right now the OpenStack community is telling users: okay, you pick from the available systems — Qpid, RabbitMQ, whatever — have fun, don't ask me about it. No way. Or, if we don't want to fix that, then at least we have to say: okay, don't use your OpenStack with more than 200 or 500 hosts. That's one thing. And the second thing is the network: we have to have a truly good network abstraction layer, or service. Until then — unfortunately, sorry, you're not getting any live demo from me until we fix all these problems. Okay, that's it.

So, we are completely willing to start contributing back to the community. All the source code is already up there, and you can go look at it. Oh — by the way, I forgot to run PEP 8 on it, so I'm sure you'll call me out on that. But anyway, you can see all the source code there.

(Question from the audience about what actually calls our cloud APIs.) So — Mike? It's a secret application. We're actually running HAL and Skynet: HAL is our hosting abstraction layer, Skynet is kind of our billing system. True story.

Okay, well — yeah, I think both: there needs to be a customer-facing API, and some APIs are probably needed for internal communication between the subcomponents of OpenStack. But on both sides, we don't have such a thing right now — nothing.

I think we're actually out of time, so if you have more questions, come on up and we can talk about them. Okay — thank you. Thank you.