So I'm going to talk about fog. I've been doing this stuff online for a while, so hopefully this is useful context. I work for Engine Yard; they're kind enough to employ me to work on fog full-time, which I've been doing since the beginning of September, I believe. The amount of progress I've been able to make working full-time is amazing compared to the bits and pieces of time I strung together back when it was a side project. As has been mentioned a couple of times, we're doing a hackfest tonight. I'll definitely be there, probably for the whole time, if I can manage to stay awake that long. It should be a lot of fun, so if you have questions about this stuff, that's a great time to ask.

First off, just to be clear: "cloud" is a word that gets thrown around a lot these days. What I mean when I talk about cloud, since it's something I work with a lot, is something that's API-driven and on-demand. I think those are both key to cloud as I understand it, or at least cloud as I care about it. The core services there are compute, DNS, and storage, but there are also things on the fringe that are starting to move into this territory, things like key-value storage or log storage, where there are a few places you can get them in this kind of way, but in a lot of cases you're still on your own in those areas.

To dig in a little more: it's on-demand, so you only pay for what you use. That's different from traditional hosting, where you pay for server-months instead of server-hours or server-minutes. This is really nice, especially if you're only working on a project on weekends; you can run your servers just while you're working on it, and not pay for the week when you didn't get to touch it because of your day job. And it's flexible.
You can add and remove resources in minutes, not days or weeks or months as it would be if you had to go fill out a form somewhere and get somebody to approve and pay for these things. And it's repeatable: you can set things up so that a server spins up, some configuration happens, you verify it did the right thing, and you know with some confidence that you can do that again tomorrow if you need one more server, or the next day if you need three servers, and so on. And from that you can build more resilient systems, because if you don't assume the server will necessarily always be there, you can account for that and build systems where you can more easily add and remove resources as you need to.

So that's great, but why am I worried, right? This all sounds very promising. The problem is that you quickly run into option overload. There are a lot of different providers and a lot of different services, and it can be very hard, for you personally in any given space, to figure out: do I want AWS? Do I want Rackspace? Why would I choose one or the other? What do they get me, and what do they cost? Along with that there's the expertise side: you have to learn a whole different set of knowledge to work with each individual provider, which can be very difficult and takes a lot of upfront effort. And it gets you locked in; it's very hard to switch once you've made the effort of spending, say, five weeks learning AWS.

There are tools out there, but when I was first coming into this I found that the tools were vastly different from one another, frequently vastly different from the APIs they were actually wrapping, and from the project pages it was very difficult to tell whether they were still maintained. So I decided to start working on my own. The other thing a lot of people ask me about is standards.
Standards are another way we could possibly approach this. I think the standards process is good, but it's been slow, and in many cases the standards end up getting interpreted in such vastly different ways that it's kind of a joke to say you have a standard in the first place.

So my approach to dealing with this ended up being this thing called fog. Initially it started out pretty humble. I just started working on a couple of little services because I wanted to learn them better. It only let me do low-level stuff; it wasn't that fun to use; you still needed a lot of expert knowledge. But eventually I built on top of that, built abstractions, and it got better and better, to the point where I thought, wow, I'm really onto something.

Why I think it's onto something: first off, it's portable. If you use the higher-level abstractions, moving back and forth between providers is pretty straightforward. You get away from what the actual providers' APIs look like, because it's difficult to have one API that exactly matches all of them; but if you use the higher-level abstractions you can move back and forth, which I think is pretty powerful. And there are a lot of different abstractions that provide this flexibility and power. It's pretty established now: it recently got over a thousand followers on GitHub, it has 85,000 downloads, a bunch of contributors, and I work on it full-time. I certainly hope that indicates it's here to stay. And there's also mocking built in. In a lot of cases you can actually try out what would happen and work on building out your normal workflow without having to explicitly spend real resources that cost money, or that you have to wait for.
At some point you obviously have to do that for real, but this can get you a long way when you're testing, trying to bootstrap, and figuring out what you're doing. I also just wanted to mention that we already have a decent number of people, more all the time, using fog in both products and libraries, which is very exciting.

So first, a little interactive bit, to make sure we're all still awake after the boring definitions and on the same page. A quick show of hands: who is using the cloud as I defined it? So that's a pretty good number. Now keep your hands up if you're doing it with fog already. A much smaller number, but at least a few; it's not completely crushing, so we'll work on that.

For the rest of you, since most of you didn't raise your hand for either: frequently what I hear is, this stuff sounds great, but I don't actually have a use case for it; I don't need cloud on a daily basis. So for the purposes of this presentation, I'm going to give you a little make-believe purpose and walk through how you might go about using this stuff given that purpose. The purpose is: we're going to build an uptime site. We even have a tagline, because who wants a busted site? This is going to be our next big thing. We're going to work on it on weekends, kick it out there, and hopefully get 20,000 users pretty soon, and that's our salary, right? It's going to be great.

So first, setup is simple: you just gem install fog, or sudo gem install fog if you don't have things set up such that it works without. The first thing to do is create a connection. Compute is one of the abstractions I mentioned; it represents all the different compute providers, basically, in an abstract way.
So you pass some credentials in. This is what a credential set might look like, for Rackspace in particular: we say the provider is Rackspace, and here are our Rackspace credentials. And we want a server. So first we call create_server and pass it these two params, 1 and 49; I'll explain more about those later. We get back the body of this thing that looks like a server. Then we sit around and wait until it's ready, because we can't really do much with it until then. Then we run some SSH commands: we place our key so that we can SSH in, and we turn off the root password, because Rackspace hands you back a root password, which is kind of bogus.

So that's super easy, right? No problem. It only took me an hour, and I knew this stuff already. I mean, it's not easy, right? This is exactly why I worry. There's a lot of very provider-specific stuff to do to make this work, and I don't expect many of you to be able to pick up that much specific knowledge in a short period of time. This is your weekend project; you don't want to spend the first ten weekends figuring out how to use clouds just to get the site off the ground, right? And there are these arguments, 1 and 49, and who knows what they are, right? And it's only going to work on Rackspace in this configuration, because it's very specific to Rackspace.

This was how I started out, but I quickly realized this was a disservice, really. I had tooling, but the tooling wasn't gaining me very much. If I'm still back at square one, making these API calls more or less from scratch, I could just use an HTTP library and not really be any worse off. So from there I made a lot of improvements, and I'm going to give you a better idea of how that works.
So instead, we can take this compute connection and call servers on it, which is a collection that sort of represents all of your servers on that service, and call bootstrap with a set of server attributes. In the server attributes, we can see that 49 is actually an image_id. That's helpful: we now know that 49 is, well, Ubuntu, if that helps, and you at least know which slot it fits into, right? We say what the private and public keys are so that fog can place them for us. And effectively this does everything that the forty or so lines I showed you before did, but in just a handful of lines.

So what is this servers thing, then? It's a collection; it's kind of like an ORM for servers. It's lazily loaded, so you can call servers, or servers.all, and what you get back is what you have. You can call servers.get with a particular ID to get just that one back. You can call reload to say: I know you have a copy of what came back when I called this the first time, but I have reason to suspect it's changed over time, so please make sure it's the most recent. There's new, which creates a local version of the thing; it doesn't actually spin up the server, it just creates a local representation you can manipulate to get things in order before you kick it off. And then there's create, which does the new but also actually kicks the server off into the cloud.

So now we need to get to actually recording uptime, right? We need to ping to do this. We're going to use the ssh command on the server we created, with a little bit of code that calls ping -c 10 against the target. The target will presumably be specified by our customer; maybe they want to check mysite.com, right?
So that's what the target is going to be. ping -c 10 runs the ping 10 times and aggregates the results down at the bottom. So we just pull the standard out of the first command from what ssh returns. Then there's a comment explaining what I do next, which is kind of ugly. I could have used a regex or a parsing expression grammar or something, but instead I just did some splits, because I wanted a quick, easy example. The comment shows what that result line looks like. So I'm splitting the whole result by newlines and taking the last line, splitting that by whitespace and taking the second-to-last token, which is the a/b/c/d value, that's the min, average, max, and standard deviation, then splitting that by slashes, and you have all the data. I know that was convoluted; I apologize. But the important thing to note is that out of all of this, spinning up a server, pinging something, and getting back aggregated status results, the most complex code is actually the parsing. Everything else is pretty easy: set some attributes and go.

The next thing is cleanup. We'll take the server and destroy it, shutting it down so we don't have to worry about getting charged when we're not using it, and we'll throw the results we just pulled into a hash. Then we can try a new provider. Since we're doing uptime, we might want to run something in Amazon's US East, but also put something in Rackspace's Dallas, because we want to ping from multiple locations in case one particular backbone is down or something like that. So going back to my previous example, this is a diff of me taking that low-level script and changing it to run on Amazon instead, and you can see just how much similarity there is between the two. In the simple, naive, low-level scripts there's almost nothing that's the same.
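The split-based parsing described above is easier to see on a concrete sample; the ping output below is a made-up example of what that aggregate line's shape looks like:

```ruby
# A made-up sample of the tail of `ping -c 10 <target>` output.
result = <<~OUTPUT
  10 packets transmitted, 10 received, 0% packet loss, time 9012ms
  rtt min/avg/max/mdev = 20.123/25.456/31.789/3.210 ms
OUTPUT

# Last line, split on whitespace, second-to-last token: that's the
# "min/avg/max/mdev" value block. Split that on '/' for the numbers.
stats = result.split("\n").last.split(/\s+/)[-2].split('/')
min, avg, max, mdev = stats
```

A regex would be more robust against variations in ping's output across platforms; the splits are just the quick version from the talk.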
I mean, it's funny, it's comical, because the things that are the same are largely coincidental, like the 1: it happens to have a 1 in the same place in both scripts, but it means something completely different. So anyway, we can go from that to this, right? We use the same code we used before, but we just set a different set of credentials, and we set different attributes for the server, because there are some discrepancies between the providers. They look pretty similar, though, and we end up with a server again; we can run the same SSH stuff, and everything else just works. From there you can also do simple things like taking that existing setup and just running it in a different region on Amazon, so we can ping from even more locations and have better coverage for our customers. From there, subsequent versions are pretty much just moving down the list of other supported providers, and this is just a subset of them; you can add them as needed and lather, rinse, repeat.

So, I mean, that's great, but how did I even know what half this stuff was? How did I know what that image ID was, and so on? Well, fog comes with a binary when you install it. If you run it immediately, it prints something that looks kind of like this, which says: please create this file with your credentials in it, so that subsequent runs know what you're dealing with and can just load that up. So, assuming we've done that, we can get in, and there's this kind of sign-on: the first thing that happens is it says welcome to fog interactive, and it lets you know you're using the default credentials, because you can specify multiple credential sets, and that you now have access to AWS because those credentials are available. You can call providers to see that same list if you need to.
And then on Rackspace itself you can call collections. This will tell you about things like servers, but it gives you the whole list of what's available on Rackspace, so it's somewhat longer. You can see it also has directories and files, which map to Rackspace storage rather than Rackspace compute. These top-level things span the abstractions, basically everything that Rackspace provides. If you need a specific one, you can say I just want the compute service, and that looks like this: you get back just that one service. And you can also ask a service what requests it supports. This gets down to the low-level nasty stuff I showed you at the beginning. It is low-level and nasty, but occasionally you want to do something on a provider that's very specialized to that provider, which the abstraction probably doesn't cover but you really need. All of that is available, and this gives you back the list of everything that's been implemented.

So, to go top-down through all these nouns I keep using: providers are like Amazon, Rackspace, or Zerigo, for instance. They're the top level; a provider is a company that provides services, but not the services themselves. The services are things like EC2, which is Amazon's compute, or Rackspace servers, which is Rackspace's compute, or S3, which is Amazon's storage. Right now the main top-level service types are compute, DNS, and storage, and there may be additional ones as demand comes up. So then there are collections.
Collections are things like flavors, images, and servers; those contain models, which are the individual instances of those; and then there are requests, and the collections and models are all built on top of the requests, the same ones you'd use to do the low-level stuff.

So just briefly, since I kind of blazed over that before: at the request layer you can call, say, list_servers, and you get back an Excon response, which basically has a body, headers, and a status, so it's pretty simple to get what we need from that.

The next thing is a sanity check: we'll go back through and basically say, give me all the servers that are currently ready, which is equivalent to checking if it's running, or checking if it's active, or whatever; it differs from service to service, but it means the machine is up, right? So we check for all of our running servers on both of these services to make sure we didn't forget any, so we don't end up with a huge bill. I have literally left 20 Amazon servers running overnight before without realizing it, and when I woke up the next day I was almost as sad as my credit card was. Thankfully they were smalls, but it was still like 70 bucks for 12 hours, so: lesson learned.

The next thing was finding those images; how did I know that 49 is Ubuntu? There's a helpful thing on the collection called table. If you call table without any arguments, it gives you a nice table in the console, kind of like what you'd expect from MySQL. But you can also specify columns; in this case I'm saying I only want the id and name columns, because otherwise it'll wrap and look ugly and I won't be able to read it anyway, and we get something like this, listing all the images. For the Amazon images I actually just cheated: there's alestic.com, which is sort of the unofficial official list, and we just used that, because the actual
information about the images is so enormous that it's very difficult to find anything in the first place.

The thing about exploring this way, though, is that it can be kind of slow and kind of expensive, which segues into the mocking stuff I was talking about. It's pretty simple to use: you can either run the binary with FOG_MOCK=true in the environment, which puts it into mock mode, or you can require fog and call Fog.mock! before you start making calls. It's meant to be a simulation, so in most cases things will just work; there may be some edge cases, but for the most part it works really well for most people. And if a mock isn't implemented, rather than sort of half-working, it's set up so that it raises explicit errors, so you know immediately that you should either add a mock for it, or file a bug that a mock is needed, and so on; hopefully you don't run off track because the mock has some weird behavior that doesn't match reality. I actually run the tests against both real and mock modes; there are some exceptions around timing and such, but for the most part the specs run against both. So if you find an error, we add a new test to make sure that edge case doesn't recur.

So, that aside, we're back to business, right? You have all this data from your pinging, and now you want to do something with it, because otherwise your customers are going to be pretty ticked off. So let's talk about aggregating these results into cloud storage. This is probably somewhat familiar: you connect to a storage provider; here's a credential set for connecting to S3. Within the storage providers, the top level is directories; this maps to buckets on Amazon, and to containers on Rackspace CloudFiles, and similar things elsewhere. So we're going to take a directory, give it a name, and we're going
to say that it's public, and then we're going to store a file into it. We take that directory and create a file, giving it a body. We use File.open, which will stream the content, as opposed to File.read, which pulls all of it into memory before doing anything. Then we set the key, and we set that this file is public as well; unfortunately we have to do it at both levels, because some providers have discrepancies there: sometimes you can set it at the bucket level, sometimes you can only set it at the object level, sometimes above. In any case, since it's public, we can call public_url, and for the services that support it we get back a URL we can hand directly to our customer, so they can get directly to their data without it ever hitting our servers, which is really nice.

From there, geo-storage is similar to geo-pinging: we can just use a different credential set and have the same code work for Rackspace storage. And finally we do our cleanup: we grab those directories, destroy the files inside them, and destroy the directories themselves. And again, there are more providers; it's a shorter list for storage, since not as many people are in that space, but it's pretty much the same recipe.

So our final phase, then, is profit. We have our thing, it works, but we want a freemium model: if you have an open source site or something, maybe you can use the pinging for free, but we want something we can charge people for. And what we'll charge for: we'll use DNS to provide a special subdomain for our customers, where they can send their own customers to see what their uptime looks like. This looks pretty similar: we have a credential set, we're using Zerigo this time, and we create a zone with the
domain name and the email address of the administrator, and then inside of that we create a record: it has the IP, what our subdomain is going to be, and that it's an A-type record. Then our customer can be wired up with this DNS on their end, and it should just work. And then we clean up, similar to the other ones: you get the zones, destroy the records, destroy the zones, et cetera. In a lot of cases these services require you to destroy what's inside of something before you can destroy the parent; that's why I destroyed all the files first, and why I destroyed all the records first. So then, geo-freemium: this should work across the different DNS providers, and that's just a subset of them; it just lathers, rinses, and repeats. One of the key things I wanted to get across is that across all of these services, once you learn one of them, it's pretty simple to move to another and see how the pieces fit together; there's a lot of shared stuff.

So congratulations: now you just need to copy and paste and push and deploy, and you can all start your uptime businesses, and maybe compete with each other, if we end up with that many of them, as long as there are enough customers to go around. And then you just need to find ways to spend the profit I've just handed you. I like coffee and bourbon and games, for instance; just throwing that out there.

So this gets to the last part. I talked about why I was worried: it was difficult for me coming into this, and it's probably difficult for many of you coming into this. I mean, I was suffering through trying to figure this out, so I ended up with this thing, and I distributed it, and I hope that many of you will use it. It was suffering being coded in Ruby; actually, I kid. It did feel a lot like suffering at times, but it's really empowering. I get this comment a lot from people, that it's
easier now, or: I didn't know how to do this stuff before, and now I can just spin up a server when I have some little toy thing, and it just works. That's really exciting, to empower other people, but also to feel empowered myself. I'm not that much of a sysadmin; managing some stuff on Slicehost through the web UI was basically the extent of my knowledge. So to be able to say I want to do some computation on a server, and then I have one and I've run some SSH commands on it, is just awesome. It's really exciting for me. This is cutting-edge stuff; cloud is, I think, the future in a lot of ways. It's taking some people some time to catch up, but I'm hoping this lowers the barrier to entry enough that the people in this room, at least, can jump on board, get a leg up on the competition, and get out there and do the next set of really exciting things.

So now I'm going to wrap up with homework. First off, if you follow me on Twitter, I announce releases there, so you can see when things come out, or if things might get deprecated, and generally keep a line on what's happening; it's pretty quiet, so it doesn't hurt too much to follow. You can also follow the GitHub repo if you want more granular detail. I also have stickers; they look like this, and I have a pile of them in my bag over there, so feel free to grab me, or find me at the hackfest or later, and ask me any questions you have. The hackfest would be a good time for it, since there's not really a lot of time right after the talk. And I have a few card games with me, if anybody wants to just hang out and play instead of talking tech; I'd love to.

So, the normal homework: you can report issues on the repo. The issues that are already there are actually graded, so if it doesn't have a grade next to it, you should probably skip it; if
it has a grade, I've reviewed it and added commentary on what you could do to make it easier. There's an IRC channel, there's a Google group, and you can write blog posts or give lightning talks; I reward those with blue t-shirts. The harder homework is to work on fog itself; there's information on how you can come and contribute. And the expert homework is to help maintain the providers and services you depend on, become a collaborator, and actually get commit rights; then you get a black shirt. I'm the only one that has one right now; I'd love to give somebody else one, but I haven't had anybody step up to the level where I felt comfortable doing that yet.

So I just want to say thanks. The examples and the code from the talk are up there, the slides are there, and there's the repo; you can bug me on Twitter or GitHub. So thank you very much.