I guess I need to queue this up. Welcome back, everybody. I think most of you were here for the workshop we just finished. If you don't have a handout for this second workshop, please raise your hand and we'll make sure you get one. This one here, thank you. The previous workshop was installing Swift, the pure open source OpenStack implementation — very recent code, something you can all do right now. But you may have noticed it was a little tricky, so we're going to follow on with a second workshop: installing Swift and looking at some of the management functionality that SwiftStack and the SwiftStack controller bring you. By way of introduction, my name is John Dickinson, this is Joe Arnold, and Hugo here will be driving on the laptop. We also have books, and they cover what we talked about in the past workshop in more detail — all of the steps to configure, set up, and run Swift. We did blow through things pretty quickly in the workshop, but the book covers it step by step and in more depth. We have ten books left. Okay, so in this workshop we're going to walk through the process of setting up Swift using some deployment automation with SwiftStack. There are a couple of things on the handout. One is a new Rackspace instance — again, they didn't like us creating a lot of EBS or block volumes in their environment — and there's a unique one on every single handout, on a sticker. There's also a login, and Hugo will show it, for platform.swiftstack.com. We've created temporary accounts that you can use to log into the platform. Both of these things are running in the cloud right now for demonstration purposes; when you do your own deployment, of course, all of this exists in your own infrastructure. Question? Yeah.
No, so there are two sets of credentials. On the handout there are VM credentials — that's the virtual machine instance we have running in the cloud. You SSH into that IP address and use that username and password. Then the platform credentials are for the website at the top. So yes, you need to open an SSH terminal to log into that IP address, and Clay can probably help you out. If there are any questions during the whole thing, feel free to raise your hand and one of us will come by and help you. We'll be operating in two windows as we do this: one is the web browser with the controller, which is what Hugo has here, and the other is an SSH terminal on the node itself, so we can walk through the commands there. The first thing we do is name the cluster. This is just a canonical name that you'll address the cluster by, for labeling purposes. Then there's an option for a load balancer. We're only going to be using one instance here, so leave it unchecked. But if you check that box, you would put in a virtual IP that's on the same subnet as all of the nodes in the cluster. In this example we're going to put in the IP address of the virtual machine instance — it's on the front sticker, at the bottom. What this is used for is authentication: when an authentication response gets handed back, the cluster knows what IP should be included in it. So again, do not check the load balancer here.
The next field is a host name: in an environment where you have DNS configured to map to a virtual IP, that's the host name you'd put here. And those are the cluster configuration settings. You can put any name you want — just call it whatever you want, in the name field at the top. Not in the host name; leave the host name blank in this example, because we haven't set up DNS for any of these environments. Then click create. Uncheck the load balancer and hit save. Next, let's log into the instance and do the Swift install. There's a command to run, and Hugo will be running it: we're going to curl a bunch of commands from the URL — https://platform.swiftstack.com/install_… — and it's going to spit them out. We do it this way so we can see what's happening. You could pipe this straight into bash if you really trust us — but as administrators, I wouldn't recommend trusting anyone all that much. So print the commands out and see what we're actually going to run, and then we can pipe them to bash and execute them all. If you're running CentOS or Red Hat, then the Ubuntu part gets replaced with that operating system. We don't support SUSE, no — Red Hat, CentOS, and Ubuntu are the three that are supported. And if you trust it, don't forget the sudo when you pipe the whole thing to bash. Most of the steps here are about adding a key for the package archive, the PPA — that's what the first few commands are doing. Then we do an apt-get install. And after that apt-get install happens, we register this node, this environment, with the platform, into the controller.
The controller is going to phone home and give us back a URL with a unique identifier for that node. So those are the steps happening here; I'll illustrate a little bit. While this is installing: the node registers itself with the controller and sets up a VPN session. Then the controller can do the ring building, like we did in the last workshop, and push that configuration out to the node. So that's what's happening here. And if you see what Hugo did, there is that URL — that's the URL you want to follow for the next step. Oh, it's saying it's not found? So this didn't happen for some reason; we can get some help for him. What happened is it tried to run that command and then maybe the rest of the installation didn't happen — maybe we had a glitch in the install process. Yeah, so you follow that, and this is what you'll see. Eventually it should turn green. Yep, it's sitting there — it timed out; it takes a while to establish a VPN session between two points. That's what it's doing, and eventually you should see something green happen. Oh yeah, log in right here — log into that; there's a sticker on the front page. From SSH? Yeah, SSH into that IP address. Now when you go to your SSH session, follow that URL right there. You'll see it at the very bottom of the output, after the package install — that claim URL. How many people have gotten to this step? Awesome. Great. So the next thing, and you'll see Hugo do this live, is to add this node to the cluster. There are a few drop-downs asking which cluster you want — there's only one, the cluster you just created — and which zone: new zone, since there are no devices yet.
When you're adding new devices, you'll be able to say this device lives in this zone, and shortly we'll have regions — that just got added for Grizzly, so that's coming up soon. Then there are two interfaces. In Swift — we didn't talk about this in the previous workshop, but it's worth talking about here — there's a front-facing network, where clients interact with the proxy servers; that's what we call the front-facing network. Then there's an internal storage network, which is a private network for the cluster. This is where you set each of those. In this example, Hugo, where do we set these two, and do they both get set to the same thing? It picked it up on its own — okay, good, that's right. eth0 is your outward-facing interface, and eth1 is the cluster-facing one, the private IP space — it'd be a 10-dot or 192.168 address or something. If your configuration matches this pretty default way of setting things up, that's our default. But if you have something different, you might have other interfaces on there — it's not uncommon to have a management interface, or one for each of the different networks on the system. Those will all get picked up and made available in a drop-down to select from. By default, yes, we assume eth0 is the outward-facing one. Yes, this is running at Rackspace — we could have spun up environments anywhere; we just happened to do it there. We could have done this on EC2 or HP Cloud; it was arbitrary where we spun it up. Hugo, do you want to scroll down a little bit? For convenience, if you wanted to add this node to a new cluster, you would put the new cluster name in there and create the cluster. That's all. For now, leave that blank below and click Add Node to Cluster.
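As a rough sketch of how those two networks typically land in the Swift configuration files — the proxy listens on the outward-facing interface while the storage servers listen on the private one — here is an illustrative fragment. The IP addresses are made up, and your interface assignments may differ:

```ini
# proxy-server.conf — bound to the outward-facing network (eth0 in this example)
[DEFAULT]
bind_ip = 203.0.113.10
bind_port = 8080

# object-server.conf — bound to the private storage network (eth1 in this example)
[DEFAULT]
bind_ip = 10.0.0.10
bind_port = 6000
```

The account and container servers would likewise bind to the private storage network address.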
So that's the next thing you want to do. Hugo? Yeah, that's another good question — it comes down to how data placement happens in Swift. Repeat the question: the question was, hey, we're just installing one node — don't you require three nodes? The answer is that to have durability, the minimum requirement is to have at least three drives. We're just going to uniquely place data across the three drives we have; there's no minimum node count in Swift. It's very nice to get more durability by having more nodes, but it's not strictly a requirement, and one node is what we're doing here just because we didn't want to spin up a bunch of instances for the workshop. The next question was, can this be done in a private environment? The answer is yes. We're doing this on public cloud instances for the purposes of the workshop, but for the people who run this in production, it's a private environment on physical hardware with their own systems that they manage. What we're using right now is a controller hosted as a service — and while we have people running it as a service, that environment can also be picked up and brought in-house. So this is a really good way to run a workshop, but it's not necessarily exactly how environments are set up. Okay, the next thing we're going to do: the node will appear in yellow, and we're going to click on provision node. Yellow means it's not ready to go — we haven't told the cluster what we want to do with each device. Now you need to tell the controller, here's what we need to do with this server. When you click on provision, you'll come up to the node management, node configuration page.
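The "three drives on one node" answer above comes down to Swift placing each replica on a different drive. Here is a toy sketch of that idea — the real ring builder is far more involved, and the drive names and placement logic here are illustrative only:

```python
# Toy sketch of "as unique as possible" replica placement: with 3 replicas
# and 3 drives on one node, each replica lands on a different drive, so a
# single drive failure never destroys all copies of an object.
from hashlib import md5

DRIVES = ["d1", "d2", "d3"]   # three drives on a single node
REPLICAS = 3

def place(obj_name):
    # Start from a hash of the object name, then walk the drive list so
    # the three replicas land on three distinct drives.
    start = int(md5(obj_name.encode()).hexdigest(), 16) % len(DRIVES)
    return [DRIVES[(start + i) % len(DRIVES)] for i in range(REPLICAS)]

placement = place("photos/cat.jpg")
assert len(set(placement)) == REPLICAS   # every replica on a different drive
```

With more nodes, the same idea extends upward: replicas are spread across drives, then nodes, then zones, so failure domains are as independent as the hardware allows.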
You need to assign this node to a zone — you see Hugo just set this as zone one. Since we're in a workshop situation, this node is in its own unique failure domain and there is no other one, so we're just going to use zone one. Then we need to look at the drives. You'll see that the controller has automatically detected all of the drives on that system, and we now need to tell it which ones will be used for account and container data and which ones for object data. First we're going to format. Can you scroll down just a little bit more, Hugo? There's xvda, which is the boot partition, and swap — we don't want to add the swap into our cluster as a data device, so don't do that: there's a check box that says ignore, so check that. The next step is to format the drives. You can click either of these format buttons, and that will go through the process of formatting — just like in the first workshop where you did an XFS format, that's the activity happening here. Yeah, go for it. Notice that there are two options: you can add these drives as new drives either immediately, right now, or gradually over time. The difference, if you remember back to the first workshop, is in how we add capacity into the cluster. If we add it all at once, the capacity is immediately available to the cluster — but you're going to get a potential replication storm as data is rebalanced to immediately fill up the new drives. Most of the time, especially in production clusters with customers interacting with them right now, you don't want to do that.
That's because it could adversely impact your actual customer traffic, and you don't want a standard operational procedure to negatively impact your uptime or the SLAs you have with your customers. So for that case we have the option to add a drive gradually. If you remember from the first workshop, each device within Swift has the concept of a weight, and this weight is how much data it gets relative to the other drives, out of the overall data capacity. Behind the scenes, add gradually starts the device at a small weight and makes sure things are rebalanced and migrated appropriately, then increments the weight by a small amount and keeps going. That lets you smooth out the data migration within your cluster over a period of time, so you don't have to worry about saturating your network or adversely impacting your clients. The other thing you can do: if you notice a drive or a node wigging out and throwing errors and you want to decommission it, you can click remove gradually. It may be limping along, but you can bleed that data off over the course of a few days and then swap it out. That's also pretty useful, and it becomes exceptionally useful when you start dealing with larger nodes that have 24, 36, or 48 3 TB drives behind them — you need to drain those things gradually. So you can decommission hardware gradually as you need to. Gradually in our case means two terabytes an hour by default — a fixed rate right now. There are enhancements we could make there, like being more aggressive at night, or basing it on utilization. Yeah, correct. Where is the proxy server running? In this case, the proxy server is running on all of these nodes as well.
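The "add gradually" idea above can be sketched in a few lines: instead of giving a new drive its full ring weight at once, the weight is bumped a little at each rebalance cycle so data migrates in small waves. The step size and target below are made-up numbers for illustration, not SwiftStack's actual schedule:

```python
# Sketch of a gradual weight ramp: each hourly rebalance assigns the drive
# a slightly larger weight until it reaches its full target, smoothing out
# the replication traffic instead of triggering a storm.

def gradual_weights(target_weight, step):
    """Yield the weight to assign at each rebalance cycle."""
    weight = 0.0
    while weight < target_weight:
        weight = min(weight + step, target_weight)
        yield weight

# Ramp a 2000-unit drive up in 10% increments (hypothetical numbers):
schedule = list(gradual_weights(2000.0, 200.0))
assert schedule[0] == 200.0 and schedule[-1] == 2000.0
assert len(schedule) == 10
```

Removing a drive gradually is the mirror image: step the weight down toward zero, rebalancing at each step, until the drive holds no data and can be pulled.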
So you've got the storage servers and the proxy servers all running in the same place — and in this particular workshop it's one node, so everything's running everywhere. If you wanted a node to be a proxy only, essentially that means you have a node without any data drives on it — nothing is allocated to it — and then there's a way to say, don't add the storage nodes into the load balancing group. Now, in this case we've got brand new drives in a brand new cluster, so all you need to do is click add now for your drives. That will immediately add all of them so they're all available right away, and your data migration of zero bytes will happen instantaneously. And now we're good to go. Once you do that, the status changes to in use — you see it in the orange box on the screen — and these drives are ready to be used. Here you go. Now we need to enable the node. If you scroll down to the bottom there, you can see you've got the option to configure your networks appropriately, but they should be set correctly from the previous step when you added it into the cluster. So we're going to click on enable node. Hang on a second — here you go. Once you do that, you'll come back and see that the node has been enabled: instead of being yellow it's now green, so this node is ready to be used. But we're not quite ready yet, because we've got some other interesting things to enable for the entire cluster. To start with, we're going to enable some middleware — we give you a nice, easy point-and-click interface to enable or disable middleware. At the bottom there, you can click on the enable middleware button, and that gets you to this screen here.
For the purposes of this workshop today, I want you to click on three of them: enable the Swift web console, temp URL, and form post. What this will allow us to do a little later is browse the contents of the Swift cluster from the cluster itself with the web console, and also upload and download data through it. Once those are done — let's make sure, is everybody done? — here's a quick walkthrough of a few other things you can enable. There are actually a bunch of different authentication setups on here. We have Keystone auth, which is used if you have a Keystone system already set up. Click on that really quick — yeah, Keystone. You can enter information about your Keystone setup, and that way you can use that Keystone system to authenticate, carry the token it gives you, and hand it to the Swift cluster; the authorization happens when this is enabled. Yes, those are two complementary pieces — you need to enable both of them, actually. They happen to be implemented separately on the back end, and that's why they're exposed that way, but they're two halves of the same coin. Correct. Then there's TempAuth, which ships with Swift. It's useful for development work; however, there are a few downsides worth pointing out. One is that the passwords, the keys, are stored in plain text. And because it's configured in the proxy server's configuration file, you need to issue a proxy restart for new accounts to take hold. Those are the two caveats of using TempAuth. The pro is that if you don't have a lot of dynamism in how many accounts are on the system, it's really fast. How did we decide? We picked a set of defaults that we knew our customers were going to be successful with.
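The temp URL middleware enabled above works by signing a link with an HMAC-SHA1 over the request method, an expiry timestamp, and the object path, keyed with the account's temp URL key. A minimal sketch of generating such a link — the host, account name, and key below are placeholders:

```python
# Sketch of generating a Swift temp URL: the signature is an HMAC-SHA1 of
# "METHOD\nexpires\npath" using the account's temp URL key, appended to
# the object URL as query parameters.
import hmac
from hashlib import sha1
from time import time

def make_temp_url(host, path, key, method="GET", ttl=3600):
    expires = int(time() + ttl)
    hmac_body = f"{method}\n{expires}\n{path}".encode()
    sig = hmac.new(key.encode(), hmac_body, sha1).hexdigest()
    return f"{host}{path}?temp_url_sig={sig}&temp_url_expires={expires}"

# Hypothetical host, account, and key for illustration:
url = make_temp_url("http://203.0.113.10:8080",
                    "/v1/AUTH_demo/photos/cat.jpg", "secret-key")
assert "temp_url_sig=" in url and "temp_url_expires=" in url
```

Anyone holding that URL can fetch the object until the expiry passes, with no further authentication — which is exactly what lets a browser-side client like the web console download and (with form post) upload objects directly.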
And even here, we only recently added Keystone — because the Grizzly support for it is good, and it's ready for production now. The question is whether this is block storage too, and I probably should have baselined both this session and the last one on this: Swift is an object storage system. That's all it does, and it does it exceptionally well. That's what it's focused on — representing and storing objects really, really well, in a way that is highly available, very durable, and can support many, many transactions happening on the same system. Right, the comment is that within OpenStack there's a lot of documentation that has to do with this kind of storage and others. Generally my answer is that yes, we've got a broad suite of projects within the OpenStack ecosystem, all solving particular use cases. Swift solves the object storage use case and is different from, say, Cinder, which is focused on block storage for compute. The next thing we're going to do is tie into account creation — we put a web UI for account creation here. When you go back to the cluster manage page, you can click on — go ahead, Hugo — manage storage accounts. That's what we'll do next. Here, create a username and a password, and when we click the next step, that will get pushed out to the cluster. So add an account here; then we'll need to push a new configuration out to the cluster. If you're only changing accounts, it does a lightweight push. If there are pending changes to be made to the ring, we do a ring rebuild: push the button, and the platform is going to build a ring and push that configuration out to the node.
This takes a few minutes, because we're building a much bigger ring than we did in the previous workshop — we sized that one down so it would go fast. The part power we set by default is 18, and it's a configurable setting on a cluster, so when we're doing installs we can tune it. Two to the eighteenth is the number of partitions that will be created in this example. So people are waiting now — you've clicked push config and you've got the blue blocks on your screen, waiting for it to be pushed. If anybody has any problems at this point, please go ahead and raise your hand. Yes, you can log into the node and tail a few logs. There's the Swift log — /var/log/swift/all.log, right? — which shows what's happening in Swift, though you haven't done anything yet. And if you want to see the communication with the controller, that's the ssnoded log, ssnoded.log in /var/log, where you'll see the interactions between the controller and the cluster. If you want to log in directly to your machine, you can use the credentials on your workshop handout; but yes, the credentials you created in the controller are what you use for communicating with your Swift cluster — which we're going to do as soon as people have the configuration pushed. How long is it supposed to take? Well, we're all doing it at once. On this particular controller we have set up a thread pool; each worker has a core and takes jobs in turn. So we have some lucky people up in the front, and the pool gets worked through. I think we're running — how many, Daryl? Eight? So be patient; it'll get through, and Daryl is watching them go through right now.
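The part power mentioned above is why the build is slow: with part_power = 18 the ring tracks 2^18 partitions, and an object maps to a partition by taking the top bits of an MD5 hash of its path. A simplified sketch (real Swift also mixes a per-cluster hash suffix into the path before hashing, which is omitted here):

```python
# Simplified version of how Swift maps an object to one of 2**18 ring
# partitions: hash the object path with MD5, take the first 4 bytes as a
# 32-bit integer, and shift down so only part_power bits remain.
from hashlib import md5
import struct

PART_POWER = 18
PART_SHIFT = 32 - PART_POWER

def get_part(account, container, obj):
    path = f"/{account}/{container}/{obj}".encode()
    top4 = struct.unpack(">I", md5(path).digest()[:4])[0]
    return top4 >> PART_SHIFT

assert 2 ** PART_POWER == 262144          # number of partitions at part power 18
part = get_part("AUTH_demo", "photos", "cat.jpg")
assert 0 <= part < 2 ** PART_POWER
```

The first workshop used a much smaller part power precisely so the build would finish quickly; a production ring of 262,144 partitions takes correspondingly longer to balance and serialize.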
So what's happening: in the queue, a new ring is getting built based on the information you gave it, plus what we read from the node. We've formatted the devices and given each device a unique label using a UUID, and now a ring is being built and the configuration is being pushed out. If you had chosen add gradually — and we've got another winner over here — it would have built a partially weighted ring for those devices, and then on schedule, every hour, a new ring would be built with the weight on each device increased a little more, and that ring would get pushed out. Well, you're almost done. Correct — each node establishes its own VPN credentials. Back to this picture over here: the node doesn't need to open up any ports to talk to the controller; it initiates the session from the node out to the controller that's supposed to manage it. Then we use ZeroMQ to establish a message bus between the node and the controller, and that's what commands are sent over. So when a new ring is available, the controller says: go fetch a new ring, here's its location. The node pulls the ring down, the ring is checksummed to ensure it's consistent across all of the nodes, and then we stage it and flip over to the new ring. The way I think about Swift is that it has a few services. You have the routing, access, and authentication layers — that's the proxy server, supported by the authentication services, routing requests to where the data is located. Below that you have the storage intelligence, for lack of a better term: each node has certain behaviors, and the ring provides the roadmap for how to behave — how to replicate data around, how consistency checking happens, how replication works.
Then there's the physical hardware itself that actually stores the data. As for the controller: each node in the system really isn't aware of the other nodes — you're just telling it to behave in a particular way. No node has global knowledge of every node in the cluster; that's how a distributed system has to work in order to scale. What the controller does is take in data about the health of each node. If a drive is failing, it can pick that up — say there's an error in a log file, or SMART data that gets collected, or the disk becomes unmounted. All of those are signals that something's up with the drive, and they get picked up and converted into an alert. And when you're adding capacity or decommissioning equipment, that worldview of the distributed system lives in the controller, so that configs can be pushed out to the nodes. Everything the controller does is out of band, so it's only a tiny trickle of traffic — you could practically use an acoustic coupler to carry the data going back and forth between the controller and the node. If you think about it, the ring is a relatively small file, and the data points for monitoring are low volume — small kilobits per second of monitoring data. What we're waiting for is for that builder file — because we're using production-sized builder files — to be distilled down into a ring. For 100 people, all at once. We set up an environment with a queue of eight workers that just work through them, and it takes about four minutes each. So we had to do some back-of-the-envelope math on what to set up for the workshop. That's what's happening: we're building that ring, and when the ring is ready, it gets pushed out to the node.
How many people have it finished so far? Good — it looks like about half the room now has it finished or in the queue. We should have said it was a race. Looking at the time, though: if you do have it done already, one thing you'll be able to do is — oh look, Hugo has it done just now. Are we going to the web console? Yeah. We could play with Swift for a while, but just to quickly show you a nice little thing: remember we enabled the web console middleware? Now if you go to your IP address — the one listed on the front of your workshop handout — in a browser on your laptop, then /console, you'll be able to see a nice white-label console. Oh, and there's a link on the website, right there. It's something we wrote, yeah, but it's also fairly white-label, so you can do what you need with it. It also takes advantage of some nice JavaScript and HTML5, so you can drag and drop to upload a new object into your Swift cluster. All those features — go ahead, start with the question. This is bad; we'll contact support — here he comes. The way the client is built, it uses the features built into Swift: it's a JavaScript library that speaks the Swift API, and it uses features like form post and temp URL so that you can post directly from the browser into the Swift cluster. And upload cat pictures. Yeah, we'll try it again — demo, go back, that should be good. Yeah, we built it — let's talk after the workshop about this. We have the new drag-and-drop file upload on there. It looks like we're heading up against the time limit here — we're about five minutes over already. Into a break? Oh, into a break, okay. So we're going to thank you and end the workshop here.
If you have further questions, we'll be here as long as we can, but I think there's another workshop coming in, so we'll filter out into the hallway and answer any other questions you may have. Okay, one last point: we have, I think, five or six books up here, and we have some t-shirts by the door — not enough for everybody, so it's first come, first served. Sorry, guys sitting over here; congratulations over here. Thank you very much. Thank you very much, all right.