Welcome, everybody. Let's go ahead and take our seats; we'll get started here in just a few seconds. There are some seats up front — a few right up here, gentlemen, feel free. If you're near-sighted, sit up front so you can read the slides.

All right, we're right on time, so let's go ahead and get started. Welcome, everybody. My name is Joseph; I'm a senior technical marketing manager at Mirantis. And I'm Bruce Mathews, the Western Regional Solutions Architect for Mirantis USA. We're here to share some information about Mirantis OpenStack deployment versus Red Hat OpenStack deployment mechanisms.

In putting these two head to head, we tried to be as objective as possible: do a full deployment from bare metal, using only the software that's publicly available on the internet — download it, read the documentation, and deploy all the way through to a minimally viable, production-level OpenStack deployment, to the point where you can actually start launching instances and working with workloads in your OpenStack. That's what we're measuring. Just a quick note about that: we really tried to be objective and to pick metrics you could actually rely on, as opposed to cherry-picking here and there. So, Joseph, please feel free to take it away.

Thank you. What we're looking at here is focused on the day-one and day-two operations around deploying these two distributions. We define day one as the initial installation and deployment of your first OpenStack environment, and day two as the ongoing operations: upgrades, updates, growing and shrinking the environment, scaling up and scaling down, and that kind of thing. We take those two major delineations — the day-one area and the day-two area — and, as Bruce mentioned, we work from a definition set at the beginning and try to get to a minimally viable, functional OpenStack environment from both distributions. What that means for us today is what it takes to stand up a single compute node and a single controller node as a functional OpenStack environment.

We ended up using the same basic setup from a physical hardware standpoint, and we'll walk through the differences between what it took to achieve both distribution deployments. To make this as fair as possible — again, as objective as we could, working in isolation — we used only the public documentation and the publicly available software, and we did it purely by ourselves, without reaching out for support or technical assistance of any kind. It was DIY. Exactly.

What you see here on the screen are the actual environments we built with both distributions, using the exact same hardware. We had four bare metal nodes: we first installed the Mirantis OpenStack environment on them, then wiped everything clean and deployed Red Hat OpenStack Platform 7 using their director deployer. On the left-hand side, we've got the four bare metal nodes.
The bottom node here is virtual — it's an ESXi server, used purely as our jump host. We logged into it remotely to manage the other bare metal nodes, to launch the web UIs, and to serve as the mount point for the software for the other bare metal nodes.

On the Mirantis side, on the left, we start with the Fuel master node — the second one from the bottom. That's the bare metal node where we install the Fuel master, which then configures and deploys the OpenStack cloud: again, a single compute node and a single controller node on the other two bare metal nodes. The networking is already in place — one of the major assumptions we made in this comparison is that the physical layer of everything being put together has already been done for both distributions, so we're not using it as a measurement in this deployment model. All the physical networking — the cabling, the racking, all of those steps, plus defining the software networks in the switches themselves — is not part of what we're comparing here. It's done equally, and the networks are set up equally, on both sides.

So, Joseph, I noticed there was a difference in how the IPMI interfaces are connected. Very true. On the Mirantis side, you'll notice the red IPMI network is connected only from the ESXi server to the Fuel master node. On that side it's used just for the initial console — to boot the machine, mount the ISO, and launch it. On the Red Hat side, with OpenStack Platform and director, they're using Ironic — the OpenStack project that provides a bare metal node management platform — which reaches out over the IPMI network and actually does power control and the like. It has to have a routable connection from the administrative network on the director, the undercloud or seed node. So those networks on the back end are, again, predefined from the start of our exercise here. Good observation, thank you.

Continuing with director on the Red Hat side: the four bare metal nodes, as configured, use the same ESXi server.
Nothing actually changed there, but the first piece they use is their seed node, which installs the base RHEL operating system and their director software, and hosts the undercloud. From there you define and launch the overcloud, which — if you're familiar with the TripleO deployment model — is the end-user access point into the actual OpenStack cloud users will be interacting with. That's the physical controller and compute node, and the networks associated with them.

At a high level, going from bare metal and an ISO up to an actual usable OpenStack environment takes approximately twelve steps on the Mirantis side, with a couple of optional steps in there. Most of these steps are GUI- or wizard-driven, so you're guided through the entire installation, from bare metal through usable OpenStack. It took me an hour and twenty minutes to do it. I don't know what superhuman in our marketing department somewhere claims they can do it in an hour, but in our bare metal environment — and we went through this a few times — it took me just about an hour and twenty minutes to get fully deployed and installed. And no, those aren't the Alcoholics Anonymous kind of twelve steps you have to work through. Good point.

Over 80% of the steps on the Mirantis side are wizard-driven or in a GUI: you're simply prompted for information, and it does the configuration of OpenStack and everything else for you on the back end as it deploys. From the Red Hat OpenStack Platform 7 perspective, the deployment took 48 steps, and approximately 80% of those are CLI commands where you're manually entering the syntax: you've SSH'd into the nodes and are manually editing configuration files and the like before you actually deploy. When director finally comes into play you do get some nice steps with Ironic, but the vast majority of this is CLI, which leaves room for human error — which we'll talk about. And documentation, right — CLI really requires really good documentation, as we'll see when we step through some of the issues we ran into. I was actually going to mention that we didn't just do this once — we ran through it several times, so that our own human error wasn't the mechanism behind the three hours versus the hour and twenty. Yes, precisely, thank you.

From a documentation standpoint — and of course, all documentation can always be improved — we went out to the OpenStack Launchpad site and looked for Mirantis documentation errata, and we ended up with six: six outstanding items in the queue, in the pipeline, that need to be fixed in the Mirantis documentation. Now, to be fair, Mirantis OpenStack version 7 had only been publicly GA for about a month, whereas on the Red Hat side it had been out for about three months. We typed in exactly the same kind of query — Red Hat OSP 7 documentation — and got this strange little error. It said, "This list is too long for Bugzilla's little mind." And I thought to myself, wow, your mind must be pretty — oh, wait. No. The list is too long.
Okay, so we refined it a little, and as a result of putting additional parameters into the query we finally got it down to 73. So that's 73 in about a three-month period versus six in about a one-month period, between their documentation and ours, for errata that still have to be corrected.

From a user-experience standpoint, looking at the documentation: on the Mirantis and Fuel side, this has all evolved directly from the first version of the product that Mirantis went GA with about three years ago — a direct evolution of that same product line, and the same goes for Fuel as a deployer. To be fair to Red Hat, this is their first foray into delivering TripleO as a deployment model. It's also the first GA release using director, which comes out of their eNovance acquisition last year — their first deployer model. So this is really the first version of both going out the door, and these are the first growing pains, which are to be expected; again, the documentation issues are being fixed as they go. And people have accused me of having a little mind too, so I don't feel bad about the little mind being present.

So, Bruce, go ahead and take us through what it takes to deploy Mirantis OpenStack with Fuel. Okay. I'm going to go through this fairly quickly — there are only a few steps to deal with. You typically go through three phases during the installation. First, the installation of the media and all of those kinds of things — there are about five steps involved with that. Then node discovery becomes a single step: going out over the admin/PXE network and finding everything. And then the deployment itself breaks down into about five or six steps. We've given some rough times for what we observed during each of those.

The first thing you do, after installing the ISO and booting it, is set a few parameters. In this case that's the network parameters for eth0 — you walk through setting eth0, then eth1 and eth2 if necessary for the deployment — followed by setting up the DNS services and the NTP services, just walking down the chain, selecting each item, moving to the right, and filling out that information. The screenshot makes it look like something we're all very familiar with, like BIOS settings on a computer.

Now, the first time I actually ran through this — and I'm not a professional deployer, I'm not in the professional services organization; this was truly a first-time touch — as I was going through these menus making changes, the natural assumption is that you can make changes on all these different pages and simply click apply at the end, save changes, and continue, as offered on the last page. So that's what I did: I continued through, finished the Fuel installation, attempted to do the network verification, and a lot of my settings were failing, and I didn't understand why. The thing that really tripped me up is that during this installer phase, on every page where you click and enter information, you have to actually click the Apply button before you change pages. Otherwise, you lose those changes.
Yes — I didn't know that the first couple of times through, and it took me several hours of troubleshooting to figure out why. We called that out as a Homer moment. So that was just one minor trip-up on my side that added some extra time.

From there, after you've filled out that initial information and Fuel has fully booted up, you're in the wizard-based graphical user interface that Fuel provides. Logging in is fairly straightforward — Fuel is built on CentOS 6.5, and you access it on port 8080 over the primary network. Optionally, there's the step that Joseph pointed out, to set the iptables rules; interestingly, that step is to allow public access out from the Fuel node. Typically folks have been putting Fuel on the data center network, so that additional manual step is no longer really required — it's already on the network. But we did include it in this exercise, just to keep things apples to apples, because we did the same thing on the Red Hat side: we used the Fuel master node here as the gateway for the OpenStack environment to reach the public network, and the same on the Red Hat side as well.
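For reference, here's a minimal sketch of what that optional gateway step amounts to. This is illustrative, not from the official docs: we're assuming eth1 is the Fuel master's public-facing interface.

```sh
# Run on the Fuel master node. Enables IP forwarding and NAT so nodes on the
# internal networks can reach the public network through this box.
# (Sketch only; eth1 is an assumed name for the public-facing interface.)
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
service iptables save   # persist the rule on CentOS 6.x
```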
Okay, so now we're actually starting to define a cloud. The first step is to make sure the nodes get PXE-booted across the admin/PXE network and placed into a bootstrap image — a very minimal operating system that hosts the environment and reports data back: CPU, RAM, how many storage volumes you have, and all of those kinds of things. It's a very straightforward process: the node simply boots over DHCP, picks up an IP address, and lays down the bootstrap image.

From there we've got a framework to define, and really there are only five things to deal with. There's the release — why do we ask? Because you could have multiple. There's the hypervisor — you can select KVM/QEMU or an ESXi hypervisor within the wizard templates. There are the networking functions — you can define them as VLAN or VXLAN, or, at the moment, Nova-network flat DHCP, from the GUI itself; we'll mention the network model a little later. On the storage side you have two options: Ceph, which hosts all of the storage operations supporting OpenStack, or LVM, which is sort of the standard for Cinder and other ephemeral storage support. After that, you define which additional services you want in the deployment: Sahara for elastic Hadoop, Murano for an application platform-as-a-service, or Ceilometer for alert monitoring, tying into autoscaling and self-healing with Heat.

One additional thing we want to mention, on the optional pieces you see on the slide: we can install plugins on the Fuel master node that create additional roles in the UI, which can then be deployed into the OpenStack environment through the Fuel deployer. We'll come back to that in direct comparison with the plugin management on the Red Hat side. There's also the option to work with network templates, which we did not use in this comparison. And one more option Bruce hasn't mentioned yet: Fuel also supports deploying with DVR. That's nice because, as we know, some of the networking pieces in OpenStack can become bottlenecks, and DVR helps distribute that out for better performance. And that's new in 7.0, right? Correct.

Okay, and after you've defined that framework, the next step is to assign the nodes to specific roles within OpenStack. You've got, basically, your controller, your compute, and your storage by default; Telemetry can also be assigned, or you can do a base OS, which is just the operating system of choice — in this case Ubuntu. The neat thing about working within Fuel is that from the very start, prior to actually deploying a node, you can define how the networks will be configured on the physical platform. You can set up bonds, for example, set the MTUs and the load-balancing type — from the standards all the way up through LACP — and define all of the connections you're going to have for your public, private, management, and storage networks, mapped through each physical NIC. The next thing is carving out the storage: how much storage is available for any given role you intend to assign to that node. For the base OS, you can expand from 50 GB up to however much disk is available, and you can place the Ceph journals in a specific location for the storage nodes.

And once you've got all that configured, what do you do then? You're ready to verify the networks. What does that actually do? It checks all of the settings you set in the network tab of your environment: it ensures your public network is actually reachable from the individual nodes that need it for the deployment, that storage traffic can flow between all of the storage nodes appropriately, and that DHCP works across the admin/PXE network for all of the nodes — all of those tricky things that can be problematic if your network folks didn't communicate properly with you, or if you fat-fingered something, which I have a tendency to do. So being able to verify it at the end is a good thing.

One of the other nice things we observed when we first deployed this is that at the end of the deployment we're presented with a default external network and an internal subnet, with routers all built in, plus some test VMs included with the distribution. So we can immediately start launching instances and running actual workloads right away from this point. Very nice. Perfect — thank you, Bruce.
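To give a flavor of that "launch right away" point, here's roughly what booting the bundled test image looks like from a controller once the deployment finishes. This is a sketch: the image name TestVM matches what the distribution shipped at the time, but the flavor name, credentials file location, and network are assumptions.

```sh
# On a controller node (or any host with admin credentials sourced):
source openrc                        # credentials file laid down by the deployment
neutron net-list                     # note the ID of the default internal network
nova boot --flavor m1.tiny \
          --image TestVM \
          --nic net-id=<internal-net-id> \
          demo-instance
nova list                            # watch the instance go ACTIVE
```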
Now I want to take over and talk a little about our experiences and observations deploying Red Hat's OpenStack Platform 7 using director. Looking at this diagram from the ground up: there's the main installation of the base OS, the subscription management, installing director, then configuring the undercloud — which then configures and deploys the overcloud — discovering the nodes we're assigning to that overcloud, and then physically deploying it.

The first section is obtaining the bootable ISO, mounting it, and launching straight into the Red Hat Enterprise Linux operating system. With that, you're required to subscribe to the appropriate repos, and this was an interesting step early on. In the first release, back in August, when I first went through this, I ran into some issues — the documentation has since been corrected. When you work with the repos from the base operating system and then add the OpenStack Platform-specific repos, and then, after finishing the operating system and installing director, you run another update at the end, some of the OpenStack dependencies were being overwritten by the newer RHEL base OS packages — which of course caused problems down the road. They've subsequently fixed that: they've added commands in the documentation to disable the RHEL base OS repos and set install priorities specific to the OpenStack ones, which did fix it. But it caused quite a bit of troubleshooting and time.

This phase is approximately — well, it is — 25 steps of CLI entries to install and configure director and the undercloud, defining the networks and those pieces, which are on the next slide: manually editing the undercloud.conf file, defining your networks, your PXE NICs, your DHCP ranges, and the like. Two minor things tripped me up here. The first: the default configuration file — the example they give you — has the NIC assigned to the PXE network as eth1. We're all familiar with eth1, right? I'm an old-school guy, so of course I figured that's the second physical NIC in my box, which on the diagram happens to be the correct NIC for that network. But deploying the undercloud at that point fails, and going back through the installation logs, I found it was failing while searching for this mythical eth1 NIC that didn't exist. I couldn't figure out why, until I finally realized that the new default method of installing the RHEL base OS includes biosdevname and the consistent device naming scheme, so my NICs were being named differently. That took a little troubleshooting, but now I know — a little bit of human error and a trip-up that cost a little time.

On the last point there, the service passwords: is that the admin password for the IPMI port? No — these are the service passwords for the individual OpenStack services themselves. You have the option to set them in the configuration file ahead of time and store them there. The concept of having to store that somewhere might be an interesting one for your CISO.
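To make that eth1 gotcha concrete, here's a sketch of the PXE-facing settings in undercloud.conf. The option names follow the OSP 7 documentation of the time, but the addresses are illustrative.

```sh
# Sketch: the provisioning/PXE settings in ~/undercloud.conf (values illustrative).
# local_interface must name a NIC that actually exists on the director node;
# with biosdevname / consistent device naming it may be something like em2
# or enp2s0f1, not the sample file's eth1.
cat > ~/undercloud.conf <<'EOF'
[DEFAULT]
local_interface = eth1
local_ip = 192.0.2.1/24
network_cidr = 192.0.2.0/24
dhcp_start = 192.0.2.5
dhcp_end = 192.0.2.24
EOF
openstack undercloud install   # fails partway through if local_interface doesn't exist
```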
The other point I wanted to make is that in the configuration file — and this still exists even in the current documentation — they have you create a service certificate to enable SSL communication between the OpenStack services. That way you get encrypted communication between the services, which is nice, right? But the commands in the first round of documentation failed outright. We found errata that had been submitted showing the syntax wasn't correct; we fixed that — fine, got through it, installed correctly. But then the deployment ran into an issue again: the Keystone services themselves were failing to use the SSL certificate, and therefore communication into the services was failing. The errata that exists out there is still in the pipeline, still to be fixed; there are workarounds to be made, but it's something that did trip us up and still exists now. The workaround we used to get past it was simply to disable the SSL communication between OpenStack services, which allowed us to complete the installation. Yeah, that was where Bugzilla's little mind came in — that was the first one we found.

Now, the next step is to manually register the nodes. This is a little different from the Mirantis side that Bruce described: here we're using Ironic to manage and control the existing bare metal nodes. In this case we need to go to each node and find the IPMI address, the credentials to log into that IPMI interface — the user and password — as well as the MAC address of the dedicated PXE-network NIC. In my first pass through this we had just the two nodes we mentioned — a single controller and a compute node — to register with the undercloud. Now, a NIC's MAC address, as we all know, is a fun mixture of hexadecimal, and I actually did fat-finger the MAC address in my first entry for the controller node. IPMI worked: I entered the credentials — which are stored in a file somewhere, which again is another fun thing for the CISO — and Ironic correctly went out, found the node, powered it on, and saw that it was there. But because I had fat-fingered the MAC address, it didn't know what to do with the node: it couldn't find it on the PXE network, and at that point it locked the node into maintenance mode. From the director UI I wasn't able to unlock it or do anything with it, so I had to log in through the CLI, manually take it out of maintenance mode, and then delete it so I could re-register the same node with the correct MAC address. So that was our second Homer moment of the process.
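For context, node registration in OSP 7 went through a JSON file (instackenv.json in the documentation of that era), and recovering from that maintenance-mode lockout was a couple of Ironic CLI calls. A sketch, with all addresses and credentials obviously made up:

```sh
# Registration file: one entry per bare metal node.
cat > ~/instackenv.json <<'EOF'
{
  "nodes": [
    {
      "pm_type": "pxe_ipmitool",
      "pm_addr": "192.0.2.100",
      "pm_user": "admin",
      "pm_password": "secret",
      "mac": ["52:54:00:aa:bb:cc"]
    }
  ]
}
EOF
openstack baremetal import --json ~/instackenv.json

# Recovering a node stuck in maintenance because of a bad MAC:
ironic node-list
ironic node-set-maintenance <node-uuid> false
ironic node-delete <node-uuid>     # then fix the MAC and re-import
```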
I did this with just the two nodes, but I imagine it would be quite difficult at scale — at 100 or 200 nodes — to manually manage individual MAC addresses, IPMI addresses, and credentials for all those nodes. What's really good about this, though, is that Red Hat using Ironic increases the awareness of Ironic and pushes its development forward, which we're all for. We're still waiting for some things on the Mirantis side before we enable it ourselves, but we do like that it's gaining awareness and feature functionality. Having that power control — power-cycling machines and keeping them powered down when you're not using them — is a nice benefit.

The next piece: now that we've got director and the undercloud defined, we configure and deploy the overcloud. This is the first time we're logging into the director UI. First, we obtain images for those bare metal nodes — they're not included in the ISO or the software you already have, so you need to go out and collect them manually; not from the repositories, actually, but from the website. They've got three example images you can download, all less than a gig, so it's not too bad — I didn't count the download as part of the required steps. You pull them down and import them into the undercloud to then deploy to the overcloud.

Here's another place that tripped me up when going through the documentation, even reading the checklist we'll show you on the next slide: you have to assign a flavor and an image to each of the node roles you're going to deploy. Because we were doing just the minimally viable OpenStack environment — a compute node and a controller node — my first time through, I assigned flavors and images only to those two roles, assigned those roles to the two physical nodes I was using, and attempted to deploy. The checklist said I was okay, but the deployment failed because those definitions weren't set for all of the default roles, even though I wasn't using them. Right — including storage, which you didn't even have a node for. Precisely.

One of the observations we made here is that there's no active pre-deployment verification tool. There is a nice checklist — you can see it at the bottom right of the screen — showing that you've visited each of the configuration pages and made changes. But our observation is that it wasn't doing any functional checks beyond that: beyond the fact that you went to a page and made some changes, it didn't test anything. As I just mentioned, the checklist showed I had assigned a role for a controller and a compute; I tried to deploy, and it still failed.
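Here's a sketch of the flavor-to-role assignment step just described, in the OSP 7-era syntax — the point being that flavors like these had to exist and be tagged for every default role, not just the roles in use. Sizes and names are illustrative.

```sh
# Create a deployment flavor and tag it for the controller role.
openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 2 control
openstack flavor set \
  --property "cpu_arch"="x86_64" \
  --property "capabilities:boot_option"="local" \
  --property "capabilities:profile"="control" \
  control
# The deploy failed until equivalent flavors existed for the other default
# roles (compute, storage, ...) even though no nodes were assigned to them.
```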
After we did finally get it deployed, we ran into a couple more issues. We had a fully deployed base OpenStack, which is great — but when we tried to log into Horizon, it immediately came back with "Oops, something went wrong." We found in the errata that this default deployment does not install the Cinder v2 endpoint and service, so we had to log in through the CLI and create those manually — we'll sketch that below — and then we were able to log into Horizon successfully. At that point we were met with another error: our compute service endpoint wasn't there either, so we again had to install that manually. The errata is noted in the documentation, so it is there, and it is getting fixed on the back end. But beyond that, there were no networks available and no Glance images included, so we still had to go out to openstack.org, download some Glance images, and only then could we finally start launching some instances.
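A sketch of the manual fixes just described, using the Keystone v2 and Glance CLIs of that release. The endpoint URL pattern follows the standard Cinder v2 layout; treat the exact addresses as illustrative.

```sh
# Recreate the missing Cinder v2 service and endpoint (controller IP illustrative).
SERVICE_ID=$(keystone service-create --name cinderv2 --type volumev2 \
    --description "Cinder Volume Service v2" | awk '/ id / {print $4}')
keystone endpoint-create --region regionOne --service-id "$SERVICE_ID" \
    --publicurl   'http://192.0.2.10:8776/v2/%(tenant_id)s' \
    --internalurl 'http://192.0.2.10:8776/v2/%(tenant_id)s' \
    --adminurl    'http://192.0.2.10:8776/v2/%(tenant_id)s'

# Upload a CirrOS test image, since none ship with the deployment.
curl -O http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
glance image-create --name cirros --disk-format qcow2 \
    --container-format bare --is-public True \
    --file cirros-0.3.4-x86_64-disk.img
```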
On the positive side, there is a nice Tempest-based set of tests for a post-deployment health check. It's very comprehensive — 325 tests — though you can't run them from the UI, and it does take a significant amount of time to complete. It represents the full Tempest test suite. You do have the option of running a subset with the smoke option, but that's still through the CLI.

So at this point we've finished the installation and deployment; here's a quick comparison of some highlights of the user experience. The first is extensibility — and I apologize, I forgot to mention the plugins on the previous slide; there was a bullet point for it. On the Red Hat side, you do have the option of plugins, and you install them using the CLI: you have to log into each of the individual nodes the plugin will affect and install it there. On the Mirantis side, we have a well-documented open-source framework for building these plugins — you can build your own, install them into Fuel, and have Fuel do the deployment, which is really nice. We weren't able to find an open-source framework like that built out on the Red Hat side; director, while open source, is still relatively tightly controlled, and we couldn't find any specific framework for it, although I'm sure it's out there.

Before that, as Bruce had mentioned, on pre-deployment verification: Mirantis makes a very active attempt to go out and test all the networks you've created and make sure the services will be able to function — it actually runs those tests. On the Red Hat side, the checklist of steps helps guide you through what you need to do, but it doesn't do any actual functional verification of what's happening.

Post-deployment — one thing we didn't talk too much about — on the Mirantis side, in the Fuel UI, there are dozens of individually selectable tests you can run, simultaneously or otherwise, covering sanity and functional tests, high availability and cluster configuration, and platform certification of the APIs for Murano and all of those kinds of things. Precisely. And on the Red Hat side, you get the very thorough Tempest tests, all run manually through the CLI; we weren't able to find anything in the documentation, or in the director UI itself, to run those tests from there.

And for bootstrapping the environment, you have PXE booting. People have done it for years; it's a well-understood technology, and we take advantage of it to make sure we can get the operating systems deployed out to the nodes. Red Hat uses PXE on the back end of Ironic — and again, we're very happy that Ironic is being used and publicized here, and it does have the advantage of that IPMI power control for the bare metal nodes.

With that, I want to jump into the day-two concerns: now that we've got our environment up and running, how are we going to maintain it over the long haul — upgrades and the like. A quick rundown: everybody's got their positives and their not-so-positives, and we've tried to define them in a very simple way, in terms of categories — change management; logging, monitoring, and management; your ongoing update processes; upgrading between major releases; and support for multi-cloud. I guess the key differentiator is really the bottom one: Fuel can manage more than one cloud — there's not a one-to-one relationship between Fuel and the number of clouds it supports — whereas the undercloud can only support one overcloud at this time. Right, and we'll talk about each of these in a little more detail on the following slides.

From a Mirantis standpoint, change management means being able to scale your cloud up and down, adding and removing nodes when necessary, plus all of the health checks that can be run. There are limitations on the configuration changes you can make once you've deployed — for example, you can't add a plugin and enable it after you've deployed; the next time you can do that is when you deploy a new cloud. For monitoring, you have the ability to centralize your logging and pass it on to other systems, and you've also got plugins for Zabbix, Nagios, Elasticsearch and InfluxDB, and Grafana if you're into graphing the results of those kinds of things. From an update standpoint, we've now exposed our repositories, both for the Ubuntu environment we support and for the Mirantis OpenStack environment itself, so you can grab updates and apply them to the nodes in your OpenStack environment with a shell script.
Then, from an in-service upgrade standpoint — something OpenStack itself has always had difficulty with — we've got a team of our best people focused on solving that, for an in-place Fuel master upgrade and for scripting the OpenStack updates themselves.

A lot of these pieces are fairly on par with the Red Hat side. For change management — scaling up and scaling down — Ironic's ability to go out, register new nodes, and add them to the deployment is good, and they have the Tempest tests after the fact, same as Mirantis. The limitations are similar on configuration changes, as with the plugins we mentioned: manually attaching those to the individual nodes that are out there. One observation we made on the user-experience side, especially around logging — we mentioned the troubleshooting we were doing along the way, digging through the documentation and the logs: the logging and operational tools, as they're calling them, in Red Hat OpenStack Platform are currently in technology preview. So the centralized logging, monitoring, alerting, and so on aren't at production-level support right now, and we had some issues trying to work through the logs on the troubleshooting side. They do have support for some plugins that are out there, and they integrate with Nagios alerting as well. Updates are manual, using the standard yum updates — via Satellite, or using update repos and the like — very similar; there's a quick sketch after this section. And for upgrades: going back from OSP 5, which predates their use of the TripleO deployment model, they have a very explicit, long list of CLI instructions for upgrading from OpenStack Platform 5 up to 6 and on to the current 7. So the process does exist, which is good — again, it's one of those things OpenStack itself is struggling with.
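For reference, the standard yum flow on the Red Hat side looks roughly like this. The repo name matches the OSP 7 channel naming of the time, though your entitlements may differ.

```sh
# On each node, with a valid subscription attached:
subscription-manager repos --enable=rhel-7-server-openstack-7.0-rpms
yum update -y
# Reboot if the kernel or core services were updated.
```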
We all love each other right precisely so if You guys can kind of read those we're kind of on the end of our time So we definitely wanted to save a little bit for for any questions if you have We're also we know time is limited But we are going to be in the booth later today and tomorrow and this week So be sure to stop by and and let us know if you have any any observations that you guys have made that we actually wanted to Make sure you guys have Oh, we do have microphones in there as well. I think there's a microphone right there Standing stage stage left. Oh, perfect Hi, I'm nickel Actually, uh, I'm new to mirantas. Not much. I actually tried setting up on my pc just for poc. Yeah, the virtual deploy Exactly. So I see the mirantas. Whatever the processes you're running. They are run as docker containers Right Actually, the docker containers are the deployment mechanism running on fuel. They're not really a container containing the role They're the deployment mechanism for the role Fuel itself pushing to the environment So it's not it's you'll only see docker running on the fuel node on the fuel master node That's kind of a worker process Worker process the local process to the local process to the fuel master Pushing out to the node through the docker container But that's not to say that we can't do docker in the open-sec environment using for instance morano application catalog Can launch Kubernetes and then launching further. So then it would be docker hosted Running under open stack, right? Okay. So this is just my Initial what I saw the main problem what I faced like I was actually working behind a proxy network And I wasn't able to connect to the network for downloading the images. So yeah, so, you know when you start up fuel there is a little Splash screen and you hit the tab key there you have access to the Kernel parameters that are being passed to fuel One of those kernel parameters is proxy equals. Okay, and that's where you define it I haven't find any documentation for this. That's why it's kind of a question. Yeah, that's why I'm telling you Thank you. Thanks a lot. Yeah, we'll need to file a bug for that and Yep, more errata for the documentation Anyone there's a question back here We've got the microphone here. If you don't mind otherwise, we'll have to repeat your question Or have a hard time hearing How can I set up the manila in both of the systems? Manila. Oh manila. Okay. So so at the moment that's being developed in terms of our Engineering staff taking a look at it, but we haven't actually considered deploying it as part of fuel per se I'm sure that it can be as after the original deployment By a plug-in be placed inside of the open stack environment That's one of the keys of this kind of thing is that the plug-in architecture allows you to actually define another role That could be a manila Supported role within your environment and to that point I know both marantis and red hat are both investing into the development of manila and it will eventually come and be more More easily deployable in preview. Yeah, our our development efforts in preview. Yeah Thank you. Thank you. Any others? Yes, sir, get on the front And just a question for both systems you talked about Um one using pixie and it is an ironic is it possible in either case to take five a data center where I already have A deployment system that will deliver nodes to me and kind of start from there Or do I have to use pixie? Do I have to use ironic in each case? 
Anyone else? There's a question back here — we've got the microphone here, if you don't mind; otherwise we'll have a hard time hearing and will have to repeat your question.

How can I set up Manila in both of these systems? Manila — okay. At the moment, our engineering staff is taking a look at it, but we haven't actually considered deploying it as part of Fuel per se. I'm sure it can be placed inside the OpenStack environment after the original deployment by a plugin — that's one of the keys of this kind of architecture: the plugin framework allows you to define another role, and that could be a Manila-supporting role within your environment. And to that point, I know both Mirantis and Red Hat are investing in the development of Manila, and it will eventually become more easily deployable. Yes — it's in preview in our development efforts. Thank you. Any others? Yes, sir, down in the front.

Just a question for both systems: you talked about one using PXE, and there's Ironic. Is it possible, in either case, to take a data center where I already have a deployment system that delivers nodes to me, and kind of start from there? Or do I have to use PXE, do I have to use Ironic, in each case? No, you don't have to use either of those key elements. However, you lose Fuel's ability to then manage the nodes for you — it's the day-two capability you wouldn't have if you used some other mechanism to deploy. So as long as we can discover those nodes — Andrew here is an expert; he'll be able to help you. Don't try this at home; your results may vary.

Excellent. Anything else? Thank you, everybody — we appreciate it.