Okay, I'm starting the live stream of the discussion — you know, we would rather people be here. Hi, Kendall. Hey, how are you doing? I tried to start it like the other sessions and it showed me a failure, so I jumped in here. It's interesting, because it says "failure" in the web client pop-up, but then the Zoom client says it's actually streaming, and if you go and check the summit page that opened up you can actually see the stream. So I don't know why it keeps saying it's failing — I've definitely had that happen too, but it ended up streaming anyway. Okay. We want you to be here anyhow. Exactly. All right, so the service is all set up — the security and, you know, the configuration. Do you think it's streaming at this point? Oh, it is? Good. Yeah, it should be. Okay. I'm going to hit record, since I'm a host, just so that we get a copy of any chat comments later. Yeah, go for it. Cool. I think we're ready to go.

We didn't prepare a specific presentation because we had done the 101 in the keynote, but if folks are not familiar with OpenInfra Labs we can certainly give you a high-level overview. There was something in an earlier discussion around Project Caerus that I thought summarized really well some of what we've been doing: the existing cloud providers are already doing things that go across multiple open source projects, and they can do that because they're running it all within their own space. The goal here is to create a space like that for the open source cloud. Yeah, so in some sense, by being prescriptive we limit options; on the other hand, we add more options, because we can now optimize across the layers in a way you can't if you assume it could be deployed in any arbitrary fashion. That's what Project Caerus is about, specifically on the compute and storage interfaces — allowing those cross-layer optimizations — but I think it's much broader than that.

I mean, the only people here right now are Arkady and me. I know there's Jacob. Oh good, there are people, but I think everybody is quiet except us. So maybe we should just ask people to at least raise their hands — do you want Michael to give a quick overview again of what OpenInfra Labs is about? Yeah, I think that would be fair enough. Okay, give me a second and I will hunt down my presentation and run through the first couple of slides for you. I did link the etherpad in the Zoom chat, if everybody wants to go ahead and put their names as attendees in the etherpad. It's going to take me a second to get it to share the screen properly.

So, one question I had after the presentation is trying to understand the difference between having multiple zones or multiple sites, where each zone or site is a different configuration, versus the federated model being proposed. I guess a quick question that might help drive that to resolution: does the presentation cover the original context in which all of this started? It does talk a little bit about where it started. Yeah. Okay, cool. Then that will probably do it.

So, this all came out of work that started with the Mass Open Cloud, which is a partnership between Boston University, Harvard, MIT, Northeastern, and UMass in one data center.
That's in the western part of the state of Massachusetts, where power and space are less expensive. The original goal that we had six or seven years ago, when Orran first hatched the idea with Peter, was to create a self-sustaining, at-scale public cloud based on an open cloud exchange model, so that we could have a marketplace for industry partners as well as researchers to innovate and expose innovation to real users. We were driven by the pain point that we couldn't see the underlying data that you need to do cloud research. That was, I think, the initial pain that the researchers were feeling that led to this — and Orran, jump in if I'm slaughtering the history at all.

The partners of the MOC specifically include the five research universities, the Mass Green High Performance Computing Center, Intel, Red Hat, Two Sigma, the US Air Force, IBM, and the OpenStack Foundation. The folks who are involved in OpenInfra Labs are a broader collection — I don't know if I have that picture in this slide, but I'll get to it. The MOC currently has hundreds of servers, there are hundreds of students taught on it annually, and it runs CI/CD for a couple of open source projects at the moment. We have been deeply involved with OpenStack since the beginning. As Orran has mentioned in some previous talks, he was asked to speak at an OpenStack meeting with a group of other people running clouds, and afterwards they started talking, and that was when Orran realized that all of the folks he was on stage with were using a product he had helped create, the cloud director, rather than running pure OpenStack. This was five or six years ago.

But one of the main pain points we ran into is that it's really hard to run an open source cloud. When we started the MOC, we didn't realize how hard it was going to be to actually stand it up and run it on a day-to-day basis, and the reason is that there's no commonality. Each project within OpenStack tracks its data differently, and there's not a common flow between them. So, for example, when we wanted to do billing or monitoring, we had to create it from scratch. And when you stand up an open source cloud, you're typically doing it on different hardware configurations, with unique installation steps and different services — there isn't any commonality there either. What that means is that there's no way to encapsulate best practices. There's no easy way to compare information between clouds; each of us solves the problems we encounter in a different way, and we typically end up doing it by ourselves. It also means that the open source developers of the software don't have any visibility into how their software is actually being deployed, how it's being operated, how it's being used. So the community may be working on problems that the operators don't need, and when the operators come to the community for help, there's really no way to reproduce or debug the issue. That's a challenge as well.

The public clouds in some ways have an easier time dealing with this because they're prescriptive. They solve their problems the same way everywhere; if you look at one rack in one data center, it's probably identical to the rack in another data center.
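To make the "billing from scratch" point above concrete, here is a minimal sketch of the kind of show-back accounting an operator ends up writing by hand, because each OpenStack service reports usage its own way. It assumes the openstacksdk Python library and admin credentials in a clouds.yaml entry named "moc"; the cloud name and the shape of the report are illustrative, not the MOC's actual tooling.

```python
# Hypothetical show-back sketch: count running instances per project.
# Each other service (Cinder, Neutron, Swift, ...) would need its own loop,
# which is exactly the "no common flow" problem described above.
from collections import Counter, defaultdict

import openstack


def showback_report(cloud_name: str = "moc") -> dict:
    conn = openstack.connect(cloud=cloud_name)

    # Map project IDs to human-readable names for the report.
    project_names = {p.id: p.name for p in conn.identity.projects()}

    # Nova only knows about instances; gather a per-project count.
    instances_per_project = Counter()
    for server in conn.compute.servers(details=True, all_projects=True):
        instances_per_project[server.project_id] += 1

    report = defaultdict(dict)
    for project_id, count in instances_per_project.items():
        name = project_names.get(project_id, project_id)
        report[name]["instances"] = count
    return dict(report)


if __name__ == "__main__":
    for project, usage in sorted(showback_report().items()):
        print(f"{project}: {usage}")
```

Even this toy version has to go service by service and invent its own report format, which is the gap a common, prescriptive stack would close.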
The public clouds have real visibility into what users are doing, they can evaluate their software with real usage, and they can evolve things dynamically with CI/CD. The thing we came to realize is that we really believe that, if open source clouds are going to be successful in the long run, the community needs its own cloud. So back in about 2018 we reached out to the OpenStack Foundation to explore this, thinking they were going to be really pissed off, but what actually happened was that Mark and Jonathan said, "Yes — finally, this is what we recognize we need to do." And then we got together in real life. We had a great MVP; it was a fun meeting. It's hard to argue with that — it should be the MVP for every project we all work on.

And then we had a pretty clear goal, and it's still around that original federation model: to create a federated, large-scale cloud starting with academia. We're going to start with the MOC. We want to create a highly prescriptive cloud-in-a-box solution, deploy it at the MOC, and then replicate it at multiple academic institutions. And then we want to federate that into a large-scale cloud. So that's the high level.

We picked academia because it's a really good starting point, and we needed to have success in one spot first. Academia requires massive scale, it increasingly depends on rich cloud tools, and there's a history of collaboration and federation across institutions. For example, the MOC itself is a collaboration of that type. The Mass Green High Performance Computing Center is actually a data center run by that same group of five universities. The New England Research Cloud is being created between Harvard and Boston University — another example. There's just a lot of experience doing that. And that's the part that's even more exciting to us: thinking about what it would be like if every class that's using the cloud was being trained on an open cloud, and if academic researchers could actually get access to all of the data they need to do their research.

I'm going to really quickly skip through some of the research that we've done because I don't want to take too much more time. Some examples are WorldMap, which is used by tens of thousands of users around the world; smart cities projects that combine information from multiple cities; and cloud telemetry work that identifies and diagnoses problems in the cloud. These are all projects that have been going on at the MOC, on the cloud itself. But then there's also the research into an open cloud itself — things like cloud federation, cloud security, cloud storage research, integration of data repositories, network research, monitoring cloud services, integrating FPGAs, and the elastic secure infrastructure, which is actually going on in a separate forum right now, at the same time. That last capability started out as a research project at the MOC (the HIL and BMI work); it's now being productized and integrated into Ironic. That's actually going to be an interesting part. What allowed the MOC to enable all this research — I'll quickly run through this — is that you have access to the real cloud data, you've got access to the metadata of the cloud, and you can engage via the MOC with industry partners and the open source community.
Then the NSF-funded Open Cloud Testbed, or the OCT as we call it, allowed us to create a new national testbed for this. We're using CloudLab with the MOC to enable reproducible cloud research. And this is the ESI part: we can move the infrastructure around between the NERC, the MOC, and the Open Cloud Testbed as the requirements and needs arise. Ah, Jen just mentioned that ESI has been moved — cool. And as we have, for example, paper deadlines, we can move hardware around to meet them.

So, a quick summary. I think you skipped over the last, maybe most important part of all this stuff that we're doing — if we move forward one in the animation — which is that this ability to grab a whole bunch of hardware, run experiments, and access cloud telemetry is actually going to be available to open source developers as well as cloud researchers, because a bunch of the infrastructure came from other sources. So this is something we're really excited for this community to engage with, to try to blur the boundary between the research community that's using it for research and the open source community. Thank you — I was burying the lede and, in fact, skipping over it.

So, essentially, what we're trying to get to is this: starting with the MOC, creating a prescriptive cloud-in-a-box that bridges the gap between open source development and operations. We're going to deploy the cloud-in-a-box, gain real experience, and when it's ready, work with other institutions to replicate and federate it. And what we learn from academia and the other cloud operators gets reflected back up into OpenInfra Labs and back into the releases. The goal is that, with a common code base, the institutions are going to be able to take advantage of the learning that goes on at each of the different institutions. So, for example, if you learn that you should be monitoring something at day 100, and you're using the same monitoring infrastructure as other universities or other cloud operators, it becomes much simpler to replicate that up into OpenInfra Labs and back out into the cloud-in-a-box — when you do your next update, you sort of automatically get those improvements. Also, one of the things we think is going to happen when we have a really common code base between these different institutions is that they're not only going to be able to federate, they could also potentially share operational skills and staff.

That already sort of goes on. I just want to add one thing. In the full version of the keynote that Michael recorded, part of which was unfortunately cut for time, there was a presentation by Scott Yockel and Wayne Gilmore, the heads of research IT at Harvard and BU. They're leading the NERC effort, which is going to be the production arm of all this, at least in the Northeast region. Just to give you context for the scale these people operate at: they have 300,000 cores of compute in the MGHPCC right now, with over 50 petabytes of storage. So it's not quite public cloud scale, but it's a lot larger than even most enterprises. And they're going to be operating the production arm.
So things will roll out from the prescriptive offering we develop at the MOC, and the first one will be the NERC. And we're going to have them working as facilitators with this community. So we're really hoping that — our eventual goal is to have these operators able both to take academia and help them use the services here, and also to go back and give the open source community feedback on how things are used, and to give this open source community access to the telemetry of a real cloud being operated at scale in this environment. Because frankly, I believe that while they have 300,000 cores of compute with these HPC environments, it's going to be even larger as we start moving into this cloud environment, because it's not just for a bunch of specific computational uses; it's for needs across the universities.

The other thing that is not in this set of slides that I wanted to draw attention to is that there are already a couple of projects, like Project Caerus and the elastic secure infrastructure, that are beginning to move their home over to the OpenInfra Labs repository, because they cross multiple projects. Our goal is to create an environment where people who are dealing with those sorts of issues — where you're crossing projects — have a place to talk, work together, and figure out ways to do it that are helpful. And I mean, Julia has been wonderful at making sure that we're doing things in a way that is positive, and not her tearing her hair out and yelling at us. That's the goal: to create a place where that can go on within the existing structures that exist.

I feel like I might be speaking out of turn, but my original understanding is that what helped spawn some of the original research was this idea: hey, I've got this data center that's basically a DR site, with hardware sitting there doing nothing, but I need to keep it running. Why don't I provide some of these resources to other people to use, knowing I might have to reclaim them really fast? And I think that's part of where this whole model really has a major potential impact, because that forces you to take that cloud-in-a-box mindset, or that repeatable-pattern mindset, and have all the tooling and mechanics to be able to handle: I don't know if I'm going to have to rebuild all this, or if it's being taken from me. Right. Sorry, somebody else was speaking — I'm going to mute.

Yeah, I kind of understood that model also: that you would like to have the full description of the cloud and be able to replicate it, maybe on the same site or on multiple sites as needed — you know, the entire thing. And the hardest and trickiest part is the networking setup for the entire thing; no surprises there, it's understandable. It's a little bit different from what I understood the earlier work to be, which was more cloud-as-a-service, where you're creating, for lack of a better name, a private cloud out of the larger cloud — carving out for an individual institution or user community exactly the configuration and setup they need, and then bringing it back into the larger pool when it's no longer needed, or reclaiming it from them and reconfiguring it for whatever the next cloud would be.
You know, and potentially scaling resources from one cloud to another as needed. At least from my perspective, you're both right and wrong. You're right in the sense that, yes, a fundamental goal was to create something highly elastic — in fact, to be able to stand up new environments rapidly. Part of this was because we ourselves, since the MOC is interactive and a lot of the uses are highly interactive, have strong general usage patterns: we'd like to give up resources to be used by other environments, and we'd like to be able to steal resources from these HPC clouds. Similarly with the Open Cloud Testbed: right before publication deadlines for OSDI or SOSP or ASPLOS, these cloud testbeds have massive demand, and they're pretty idle the rest of the time. So having everybody who works in the same community share a common silo of equipment is the worst idea I've ever heard of, because it's impossible to get any resources when you actually need them. And we've actually worked with the Air Force here, which has all kinds of infrastructure that is highly underutilized — which is exactly what you want, right? You don't want the fire trucks in your area to be 100% utilized when you actually have a fire, even though we normally think of high utilization as a good sign. So there's lots of infrastructure that, for example, is there to deal with national emergencies, where they'd like to have other good uses of it when they're not in a national emergency, which is 99.999% of the time.

So there's this elasticity, but there's also the fact that the MOC has sort of taken off — we have something like 10,000 users on top of this. Other academic institutions want in; it's gotten to the point where there's a production offering, and we want to replicate it and federate it. And we'd like to be able to reach out to you guys, to the open source community, and say: look, we had this failure. It's so impossible for people to reproduce problems when they don't have access to the logs and all the information. So we want to expose all this information. In a sense, that's where this started: the idea — you know, Michael talking to Jonathan and Mark — of saying, hey, we think we should be a cloud that you have full access to, for reproducibility. And eventually we'd like to get to the world where, you know, there's a change to a component, it goes through CI, it gets rolled out through CD to some portion of the MOC, then federated out to NERC and then to other clouds. I mean, that agility is what's made the public cloud so successful, and we'd really like to see that. We really think it's important for the open source community, and it doesn't really have it today. Does that make sense?

So, again, this is sort of a bit of an elephant, but this is a real cloud. We've had problems because every other cloud that we've talked to does things a slightly different way than us. Getting a complete solution — with show-back to academic users, with everything you need end to end — to create a cloud-in-a-box solution is part of the goal. And we want the whole thing to be highly elastic, growing and shrinking on demand, because in our environment it's critical to achieve high utilization.
So, in order to achieve high utilization, especially for the running workloads, the configurations of the cloud are usually optimized for workload classes. I expect that's what will happen: you extract a blueprint that you build out — essentially, for this class of workload, this is how the cloud should look — and obviously it's shareable across multiple organizations, so somebody who wants to take that and adapt it for their use cases, that makes perfect sense. Where I'm less clear is how to federate those: what exactly will be federated, what information, what sharing will be done, what kind of identity management would be required for that, especially across institutions that have no common identity management. That is a little bit questionable for me. And the part which to me is more interesting is extracting the data — understanding what data needs to be collected as you reconfigure those clouds for specific use cases. So we can clearly understand, when you're building the cloud continuously in many different ways, what data you need to collect. We have some data, but I don't think we've ever targeted those kinds of use cases.

I think there's one thing that might be the case: if you understand what you have — the configuration and the differences — and you're able to measure the differences between the configurations, then you're at least able to extrapolate and not necessarily have to force a full reconfiguration of everything. And that might be some of what they're doing; that kind of research would be helpful.

Yes, I think that's part of it. I think we probably want to err toward having a small number of prescriptive solutions that are reusable, because allowing everything to be fully open-ended is really problematic. I think we're starting with academia for a number of reasons: first of all, because we're here; because it's a lot easier to get telemetry information out of the cloud — it's a lot easier to get people to agree to exposing the telemetry information from the Mass Open Cloud than from, say, the cloud internal to State Street — and to make that available to the open source community. It's also the fact that, in our environment, identity is a solved problem because of a variety of efforts that have gone on between academic institutions. So there are certain things that are easier to solve in this environment, and that lets us get off the ground with federation. That's not uniquely necessary, but adopting what academia has done — generally I can go to any university in the United States, get on eduroam, and be on the wireless — those capabilities allow us to get off the ground a lot faster. And I think that for federation, the first thing is to develop these cloud-in-a-box solutions, fully automated or as automated as we can make them, and to build the community of SREs across universities supporting that. There are stages to that, but the more this open source community is involved in helping us automate those, helping select a set of offerings that might be prescriptive — even down to the hardware in the first releases — and replicable, to reduce the barriers to deployment, the better.
I think the other part that you both mentioned is that we do need to carve out solutions for particular domains that are themselves elastic. For example, Boston Children's Hospital wants to use this for a whole bunch of use cases; they have a set of streaming compute demands, and they need compliance for some of that work. So we want to create a repeatable environment that uses exactly the same models as the other parts, but that actually has a much higher compliance regime — one that sort of poofs into existence when needed, and may grow and shrink, but is a small part of that larger cloud, even within a region, for these compliance requirements. Sorry, I went on for a while trying to address the comments you both made.

So I guess, for me, it feels like the next question is: how can we help? I'm going to let Michael lead that. So — so many ways. Let's keep it simple and short, please. I know, I know. So one thing I thought about as you and Arkady were talking about gathering the info from the cloud: we're in the middle of trying to set up a working group focused specifically on telemetry — very likely an area of interest here. One of the folks involved in it is from academia and one is from industry. Some of you may know Marcel Hild from Red Hat; he agreed to participate on the condition that it be an active working group with goals, not a talking group. So we're going to try to reach out to folks to participate. If folks who are on this call have a specific interest in that area and can drop their names into the etherpad, that would be a huge help, because we'll make sure we include you in the initial kickoff meeting.

The other thing — and I haven't looked at who's on the call — for those of you who may be affiliated with hardware companies: we're running on fairly old hardware, and we believe there is going to be fairly huge demand. Once we move all, or many, of our current academic workloads over to the NERC, we start to have a lot more capacity at the MOC to handle CI/CD for open source projects. I'm aware that we're going to start running into hardware issues, both because of the age of some of our hardware — some of it just isn't supported by some of the latest versions — and also because we expect there's going to be greater demand than capacity. If we're successful, there will be greater demand than capacity. So for folks who are involved with hardware companies, that's always an area where we're happy to get help.

Other areas: we know that there are a lot of operators out there who have their own internal solutions to a lot of the problems we're facing. So, for example, how do we get to a common monitoring solution that is used across multiple projects and products? There are a lot of groups focusing on that, but I'd like to go more broadly within the community of operators. If there are people you know who you can connect us with, we can get them involved. So far the discussions about that have been more of a talking shop than a working-and-doing shop, and we want that to change.

It's interesting you mention this, because at least in the bare metal community — the bare metal SIG —
we've kind of talked about similar things and how to move from talking to actually doing or producing something. And we think we have a plan. Interestingly enough, someone brought up the point that it kind of sounds like the operators group that's part of the OpenStack community is trying to nibble on the same idea: how do we approach this problem, how do we spread this knowledge, and how do we possibly provide the product feedback loop? So I think this is a really good opportunity to try to circle us all together and say: hey, we may not necessarily have the same solution, but if we have the same basic goals, maybe we can cooperate, work together, cross-advertise — whatever it ends up being. Yeah. And go from there.

That would be great. We haven't really done enough outreach to that operators group; I only learned about it, I think from you, about a month back. But in some sense, one of the things about this project is that it's the operators who are sitting in the driver's seat. We want this to be the place. So, you know, we have the MOC and it's got a lot of users, but we expect the NERC — these are the production IT groups, with facilitators who work with researchers — to really explode the usage. And as it goes past the proof of concept, we've sort of put Scott and Wayne and Michael, in some sense, in the driver's seat of deciding what we're going to accept. So we're trying to create this cross-cutting organization — not just as an operator community; we'd like to eventually have this go all the way to where the open source community, and in fact the research community, really knows what the usage of this environment is.

So what do you want? Maybe turning this on its head, because we want the open source community to really want to push this forward: what are you lacking in terms of visibility into how things run — in at least one environment, which may not be representative of everything, but it is real? What telemetry do you want? What would be important for your project, to allow it to evolve?

Yeah — at least from my perspective, and if there's anyone else who wants to answer this question, by all means interject — I feel like it's a mixture of use cases, a mixture of problems, a mixture of, as you said, telemetry data, and a mixture of the logging that we can view alongside the telemetry data, trying to correlate a problem and investigate something after the fact. And I can tell you from experience, the hardest thing to do is, first, to get people to provide all that logging when there's an issue, and second, to be able to reproduce it. So if we have all this information collected and captured continuously — even if it's only, say, a two-week span of this high-resolution data — then at least it's easier to dig in and understand, as long as it's easy to get at that data. So we're in a great position to give you that, in the sense that we can stand it up — and it might not be the whole thing; there are certainly some privacy issues to be concerned about.
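To make the telemetry-plus-logs correlation just described concrete, here is a minimal, standard-library-only sketch of the idea: keep a rolling window of high-resolution samples and pull the log lines that fall near an anomalous reading. The record shapes, the two-week retention, and the threshold rule are illustrative assumptions, not an existing MOC or OpenStack interface.

```python
# Minimal sketch of correlating telemetry samples with log lines by time.
# Record shapes, the retention window, and the anomaly rule are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Iterable, List


@dataclass
class Sample:          # one telemetry data point
    timestamp: datetime
    metric: str
    value: float


@dataclass
class LogLine:         # one log entry from any service
    timestamp: datetime
    service: str
    message: str


RETENTION = timedelta(days=14)   # the "two week span of high resolution data"


def within_retention(now: datetime, items: Iterable) -> List:
    """Keep only records inside the rolling retention window."""
    return [i for i in items if now - i.timestamp <= RETENTION]


def logs_around(samples: Iterable[Sample], logs: Iterable[LogLine],
                metric: str, threshold: float,
                window: timedelta = timedelta(minutes=5)) -> List[LogLine]:
    """Return log lines within +/- `window` of any anomalous sample.

    'Anomalous' here is just 'value above threshold' -- a stand-in for
    whatever signal an operator or developer is actually chasing.
    """
    spikes = [s.timestamp for s in samples
              if s.metric == metric and s.value > threshold]
    return [line for line in logs
            if any(abs(line.timestamp - t) <= window for t in spikes)]
```

The point of the sketch is only that the hard part is not the correlation logic itself; it's having the telemetry and the logs captured continuously, in one place, and accessible to the people debugging.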
I guess we have already started on that — this kind of came out of the Chameleon project, adopting some of their stuff to anonymize — but I have limited belief in how well that works. We're also talking about setting up a cluster which people can only use if they sign off, up front, that all information is totally available to the community. It may not be everything — the question is what restrictions we put on who has access to it — but we'd like to make them really, really modest. Maybe there's some requirement that each group in this community vets a person, and that person signs something saying they're not going to use it for nefarious purposes. But we should try for the minimum barrier, so that you have full access to all the telemetry and logs. And part of what we think is going to help a lot with that is that the HPC systems already do a lot of this, so there's a history of it already, and the universities already have the concept of data-sharing relationships, which they understand pretty deeply. So I'm pretty hopeful that it's going to be a fairly straightforward process.

Yeah. I guess, going back to before the data: it's almost like those wants, needs, and dreams — someone needs to write those down. And when it's not a developer intimately involved in the project doing it, it has a little more weight. When it's someone who brings a compelling, hard problem, sometimes people actually pick it up and run with it. We're actually in the current Outreachy cycle, and one of the applicants — they're not getting paid, they're still in the applying phase — when they started with the tooling, they clicked around and noticed that one of the items we had in our backlog was a fairly easy task, and said, "I'm going to take this on." And they actually really enjoyed it, apparently, to the point where they've talked about it. Those are the kinds of things we need to track, understand, and revisit. The habit of just logging them in a bug tracker or whatever doesn't really help, because it loses that identity, it loses the emotion behind it, and it loses a lot of the appeal of the problem. Why is the thing hard? Why has this not already been solved? Or is this really a simple problem — if you turn it around, if you take the water bottle and flip it like this, does the problem become easier? Sometimes it just takes a different perspective. So how do we do that? How do we get our problems in front of this community, really, really visibly?

To make things really visible in this community — that's a good question. What you have is that each project has its own entry path. And when I say each project, it's probably better to say each sub-project of each project, because if you look at Ironic, for example, you have one process, one way, and that's where people build consensus; and then you have another process, somewhat different, where with another library you might just put the patch up. So I think the thing is, you kind of build it —
I keep using this word, and it's kind of what this event is: a forum, on a regular basis, where you're exchanging these thoughts and ideas and getting on the same page. For example, we have CERN — they have a rant they like to go on every year, or every six months, and they started going on that rant today. And I was like, well, I'm almost there; I just haven't gotten to it yet. So we need to get them to work together and take that next logical step. I think, as a catalyst, there has to be some way to track this and some way to bring this stuff together. The best way, I think, is probably to collect people together and give them a reason to work cross-project — you know, maybe at a PTG meeting — and then let the TC try to figure out the best way to coordinate across projects. I wouldn't even do that, because I think it's putting more work on their plate and more weight on them. I think it needs to be informal, I think it needs to be roughly structured, and I think it also needs to be kind of fun. I want to back up one second — Julia, did you say it needs to be formal or informal? Somewhat informal, but also with structure. Thanks.

I think that, partly — you know, we had this early conversation, I don't know, two years ago now, or a year ago, about how we were doing a lot of the stuff that Ironic was doing, with our research: a bunch of code written by a bunch of graduate students, being used on a semi-production basis. And then you guys from the Ironic team said, oh no, we want to actually integrate this. It's been about a one-year effort, but now we're rolling it out, replacing all of the HIL and BMI stuff with Ironic and doing our first experiments with that. And assuming this is as successful as we hope, we then want more running on top of Ironic — the HPC clusters on top of this — and you could see 200,000 cores of compute shifting over to using this if all goes as planned. That's a massive experiment. I think we have the same kind of really close interaction with the Keystone group. My hope is that we're showing these proof points — you can visibly see something, and specifically for Ironic you can have full visibility into what we're doing because of that. I'm hoping it just becomes... I think success is the way to do this: have particular communities that we're able to interact with, and then we're rolling it out for real here.

I also think that if there are questions, maybe there should be answers. What I mean by this — the idea I had in an Ironic session for the bare metal SIG was that we have topic areas that there are subject matter experts for, that are very complex, and that people try to avoid, because people don't have a good understanding of them; they're kind of nebulous and not clear. So our thought is we make little ten-minute videos, maybe one a month, and have that as part of this gathering. People can ask a couple of questions, and someone can do the five-minute "here's how this works, and here are the pieces or layers" — maybe it's only two minutes — but we generate something, we generate interaction, and we also have a recorded output. Yeah, I like that.
I think beyond the Ironic one, you'll probably need to have deeper interaction with the TripleO folks, because Ironic sets up your hardware, but deploying the cloud itself and configuring each of the components really falls on TripleO. It's not the only way of doing it, but in order to deploy the cloud multiple times, repeatably, you basically have the YAML templates which define it — we actually use that for our QE folks all the time. Yep.

And I think Kendall just noted that we are out of time. So we should continue having this discussion — how do we do that? Do we just schedule a call or a meeting and start tweeting or something? So, a couple of things. I'll drop the various OpenInfra Labs IRC and other channels into the chat. We have a group of PTG slots next week that we have not necessarily associated with anything, and it feels like this is one topic we ought to associate with those slots. Another quick thought I had as we were speaking: one of the ways we could use help, maybe at a meta level — we're a very small team, and figuring out how to incorporate help into that small team in a way where we still know what's going on... maybe I'll make that a separate topic.

I think it all comes down to: somehow we have to bring the people together. Yeah, I think that's exactly it. I don't necessarily think it's bringing one person or a couple of people into yet another community or another room; it's more bringing everyone to the same neutral place, if that makes sense.

Is email a good way to reach you right now, Julia, or are you overwhelmed this week? I'm overwhelmed at the moment, but email is probably the best way. Okay — the reason I ask is that I'm going to send something out to all the folks who added their names in the etherpad about potential PTG meeting times, but it sounded like there were some other people you knew of that we might want to include in some of these discussions. Just getting their names, either in the etherpad or in an email, would be great. For the operators, that guy specifically is probably the person to talk to, and also Arne at CERN — Arne, and I can't think of John's last name. I just noticed that I was the guy. Thank you. No problem. If you can drop those in somewhere in the notes, since it sounds like you know the names, that would be a huge help.

Yeah, I think what we're really talking about is maybe a joint session between the bare metal SIG and this work. Yeah — it's not a working group yet, but you know. I think one of the notes in the chat was to also see if the Scientific Working Group has interest, because it seems like we're all talking about really similar things — let's talk about them together. Yes, Stig generally attends the OpenInfra Labs meetings as well, so we have at least that relationship figured out; the rest we're going to figure out. Okay. Oh, thank you. I appreciate the time. Thanks.
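As a closing illustration of the "define the cloud in templates and deploy it repeatably" point raised above: a toy sketch of a declarative blueprint rendered to YAML from Python. This is not TripleO's actual Heat template schema; the field names and values are hypothetical, assuming only the PyYAML library, and are meant just to show the repeatable-deployment idea.

```python
# Toy illustration of a declarative cloud-in-a-box blueprint.
# NOT TripleO's real template format; field names are hypothetical.
import yaml  # PyYAML

# A hypothetical blueprint for one workload class.
blueprint = {
    "name": "research-cloud-small",
    "workload_class": "interactive-vm",
    "controllers": 3,
    "computes": 20,
    "storage": {"backend": "ceph", "replicas": 3},
    "monitoring": {"enabled": True, "retention_days": 14},
}


def render(bp: dict) -> str:
    """Serialize the blueprint so the same description can be replayed
    on another site (or handed to QE) without manual reconfiguration."""
    return yaml.safe_dump(bp, sort_keys=False)


if __name__ == "__main__":
    print(render(blueprint))
```

The value of keeping the description in a single, versioned document like this is exactly what the speakers describe: the same blueprint can be deployed many times, compared across sites, and shared between institutions.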