How do I sound? Am I coming in hot? You're looking hot, though, between the tie-dye and the sunglasses going on over there. You know, this shirt, actually: every year we go on vacation to Myrtle Beach, right? And I don't know when they started doing it, but it became a bit of a tradition that they would tie-dye shirts while we were there. And then once Emily figured out how to do the, what do you call it, like the vinyl transfer stuff, she started putting logos on them and that kind of thing. And so this is this year's Myrtle Beach t-shirt. It's a little bit strong for my taste. However, my daughter is in the office with me today, and she's wearing her shirt and she wanted to match. And so she asked if I would wear my shirt today, and I thought, you know, that makes a lot of sense. It does. Yeah. Not only is it very fatherly, but there aren't too many family vacations that have their own custom t-shirt. No. That's when you know you've arrived. That is pretty impressive. If I can build and install my own pacemaker, I can make a vacation t-shirt for my family. Absolutely. Absolutely. They voted on the logo and everything. And you know, it's appropriate, because I was saying I feel a little bit like I've been on vacation. It's been a nice relaxing weekend. Absolutely.

Did anything happen this weekend? I was kind of checked out. I got a new dog, the whole thing. I didn't even check in. Did you do anything this weekend? You know, I'm taking one of those time management approaches where I refuse to check Slack over the weekend. It's worked really, really well. Our numbers may go down on Zendesk just a little bit, but that's the way it's going. A little bit. Well, luckily you and I were on duty this weekend, because the last transfer, and we'll get to that, was not clean. From ReliableSite. Right, from ReliableSite. Yeah. It was a pain in the ass. No kidding.

And I guess we should give a little bit of backstory here. So we started with ReliableSite, right? That was the first company we went with when we started the company in 2013 and bought a single server. And it was just one dedicated server. The reasoning for that was several things; one of them is that it was very cost effective. I'm trying to remember, it was probably like 150 bucks a month or something like that. I don't remember what it ended up costing, but you could buy a dedicated server, it had a decent amount of storage space, and it met the needs at the time in 2013. At the time, a lot of the virtual offerings really struggled in terms of storage space, I would say. I don't know that DigitalOcean was around. I think, yeah, they were around; they had just started that year. They had SSD drives, but really small storage. Well, that's the thing. Yeah. So them and Linode and all these other places that were doing virtual private servers, and virtual was obviously appealing even back then. But the issue being, if you got 40 gigs of hard drive space, that doesn't last very long. And we even started... Do you remember how our original plans were only one gigabyte? We didn't have a faculty plan. We didn't have an org plan. All you got was one gig of space. No, we were offering free hosting with the domain.
So we could... Yeah, but even then it was like, okay, storage is kind of limited. There were a lot of other cheap hosting providers saying, hey, unlimited storage, and we were just sort of like, no, we're not going to go that far, because we recognize that there are some real limits on this stuff. And so we had that one dedicated server. That expanded, probably in early spring, to our second server, and things just grew from there. But I would say for at least the next two, two and a half years, we were buying dedicated servers every time we needed one. Now, we weren't growing quite as fast as we do today. We set up more servers today on a week-by-week basis; back then it was sort of like, okay, new semester, do we need one new server? Whereas now, from one week to the next, we might need a new server for any specific project or things like that.

Well, it's also... yeah, go ahead. No, but it's also the idea that we're deploying a lot of servers now, but we had this relationship with ReliableSite pretty consistently for two and a half years. I think the last school that we did on ReliableSite was UMW. And then we just moved them, but that was the last one, where it was three dedicated servers. And then we had a couple more, but basically, when you said, look, DigitalOcean now offers block storage, it opens up our opportunity to host shared hosting and school sites on DigitalOcean, we went all in. So that started around 2016.

And just this weekend, like we said, the final two servers on ReliableSite were Beat Happening and Joy Division. They had a lot of users who had been with us from almost the beginning, and they've probably been through a migration or two before. It was just a heavy load, and it took us all weekend to transfer 660 sites, I mean, accounts. It was insane how long it took. Yeah. And it's a struggle, because on one hand, as accounts come over, they're coming back online on the new server, and then they're taking in traffic and doing their thing. So the new server almost has to function as a normal hosting server in addition to carrying the load of continuing to migrate accounts over. And these were some legacy servers we were moving, and the people who've stuck around with us are not typically the kind of folks who have a single blog or something. If you've had a hosting account for several years now, you probably have a lot of cruft. I know I was a perfect example of that. My account was on Beat Happening and got moved over, and I've probably got maybe 10 installs, maybe less, but I've played with all manner of stuff. Some of it's still active, some of it's just sitting there, that kind of thing. So even my account wasn't a completely easy one to move over by any means. And I think a lot of them were that way, where you had folks either running large installs, using a lot of storage, or just with a lot of different stuff going on. Yeah, and it was gigs of stuff. Usually you'll get like a 200, 300 megabyte site, and then it's, oh, this one was like five gigs. And cPanel, it's interesting, because when the load gets high, it slows down on the transfers, and some of them just bottleneck. And so we were having to go in, kill transfers, redo them.
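For the curious, that babysitting routine is easy to picture in code. This is a rough sketch, not a real Reclaim tool: it polls the one-minute load average and lists any running cPanel account-packaging processes so you can decide what to kill and requeue. cPanel's packager script really is called pkgacct; the threshold and polling interval here are invented:

```python
import subprocess
import time

# Invented threshold -- on the weekend in question the 1-minute load
# was spiking to 30 or 40 on these boxes.
LOAD_LIMIT = 20.0

def one_minute_load() -> float:
    # /proc/loadavg looks like: "1.53 4.02 8.77 3/612 12345"
    with open("/proc/loadavg") as f:
        return float(f.read().split()[0])

def running_transfers() -> list:
    # pgrep -af matches against the full command line and prints
    # "PID cmdline" for each match.
    out = subprocess.run(["pgrep", "-af", "pkgacct"],
                         capture_output=True, text=True)
    return out.stdout.splitlines()

if __name__ == "__main__":
    while True:
        load = one_minute_load()
        if load > LOAD_LIMIT:
            print(f"load {load:.2f} over {LOAD_LIMIT}; transfers running:")
            for proc in running_transfers():
                print("  " + proc)  # candidates to kill and requeue by hand
        time.sleep(60)
```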
And that was basically the weekend. That's what took so much time: trying to monitor when stuff was going through and when it wasn't. And then the load on the server was hitting like 30 or 40. It was like, what?

But so, how did we fix this, right? And this is the beauty of why we're going down this road. One of the limitations... I mean, we've had some issues in the past, I would say, with ReliableSite in terms of the way they supported things, for sure. No doubt about that. But with any unmanaged hosting provider, there's going to be that tenuous relationship where some of the onus is on us to fix our things versus what's at the data center level for them to fix, and that kind of stuff. But the real hard limitation that we weren't going to get past with dedicated servers was that what you bought at the time was what you got. In terms of storage, in terms of CPU, in terms of memory, it was a dedicated server. If I needed to add more storage, not only would the server have to be completely offline, but we're not talking for an hour or two. We're talking a support ticket, dedicated downtime for somebody to put in more memory or to change out the hard drives. And the servers, they're just computers; they had like four hard drive bays, and they were all in use with our setup. So there was really no way for us to make upgrades over time, no way to expand disk storage or make them faster. And obviously, when you create your first server in 2013, technology moves pretty quickly. So what we found ourselves having to do was say, okay, if we want a better server, we have to buy a new server and then migrate everybody over to it. And that's annoying. If people need more disk space, it's annoying to have to set up new servers and move them over. So I think the goal always was, if possible, could we move towards a virtual system where we could expand disk storage on the fly, and where, if we needed a larger server, we could do a quick reboot with a larger server.

And that actually came in really handy this weekend, didn't it? It did. We had to jump from 16 gigabytes to 48. And I'm not surprised, because like you said, we had a lot of super-powered sites on it, people who really use it. And I think it's comfortable now. What we found with Sevado and some of the other transfers we did is that after it settles in, after some of that initial intensity, you can even maybe scale some of that back if it's not still pulling that kind of load. But it's just beautiful to be able to say, okay, let's superpower it to get through the weekend and then reassess.

But it's interesting, because you titled this show, which I think is right, From Dedicated to Virtual. And what I've loved about it is we've become pretty comfortable with doing these. We have a weekend plan, and we've gone through, I don't know how many this summer, like 10. But we've finished. We are off ReliableSite, which for me feels like a milestone for Reclaim. Absolutely. We planned this. We've been slowly moving stuff over for two years, and we just buckled down these last two. Our entire infrastructure, it's interesting, Tim, is no older than late 2016. Yeah.
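To make that 16-to-48 gig jump concrete: on DigitalOcean, a "quick reboot with a larger server" comes down to a few API calls. A hedged sketch against the v2 API; the token, droplet ID, and size slug are placeholders:

```python
import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}  # personal access token
DROPLET_ID = 123456  # hypothetical droplet

# Droplets have to be powered off before a resize.
requests.post(f"{API}/droplets/{DROPLET_ID}/actions",
              headers=HEADERS, json={"type": "power_off"})

# "disk": False resizes CPU/RAM only and leaves the disk alone -- which
# is what lets you scale back down once the migration weekend is over.
# The size slug is illustrative; use whatever plan maps to 48 GB.
requests.post(f"{API}/droplets/{DROPLET_ID}/actions",
              headers=HEADERS,
              json={"type": "resize", "disk": False, "size": "48gb"})

# Then power it back on.
requests.post(f"{API}/droplets/{DROPLET_ID}/actions",
              headers=HEADERS, json={"type": "power_on"})
```

Scaling back down after the rush is the same resize call with a smaller slug.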
So everything we have is a server that we've basically spun up ourselves. I think the oldest server we have right now was brought up in December 2016. Yeah. So basically less than two years old. And that maps pretty closely to when DigitalOcean started offering block storage. They started, I believe, in the summer of that year, and we started doing some testing to see, is this going to be viable? Is this something we could actually use? Because the block storage offering in DigitalOcean allowed us to have the nice fast virtual server but with a terabyte of disk space or something like that. And so we really could start to mirror the kind of setup we had through ReliableSite, but in a virtual instance. And so that's when we started making the move. But obviously at that point we were pretty well established and had a lot to move over. So yeah, we've made a big push, especially in the last six months, and that's been a very good thing. Yeah. We got cocky. Well, I don't know, it's just because they were old. But we're like, yeah, we're going to take two legacy ones and put them on one new one.

But there's a certain virtue there, because the latest DigitalOcean server for shared hosting is Clash, which was the name of our first ReliableSite server. So there's a kind of nice bookending of the time at ReliableSite, with everything over now on DigitalOcean. I just wanted to take a moment, if you don't mind, and read through some of the names, because we've repurposed Clash. Yeah, absolutely. And we could actually repurpose some of these. So here are some of the servers that ran on ReliableSite that we have retired. Bad Brains. Oklahoma's first. That's right. Banshees. Banshees was CSU Channel Islands. Yeah. That's right. Heartbreakers was Emory. Mm-hmm. Ramones. Ramones might be one we bring back soon. That was our second shared hosting server. Saints. Saints, shared hosting. Hot Rods. Hüsker Dü. Minutemen. Butthole Servers. God, the list. And then there are a bunch of university ones that ran on the likes of Blondie, Generation X, Sonic Youth, Fugazi. We repurposed Fugazi; we actually have a second Fugazi server now. Replacements. These are all the servers that we have actually retired now. And then we got smart, and Tim was like, look, I can't keep up with these band names for these servers, so we're going to actually name them after the school. Yeah. But how many in total did we move from dedicated on ReliableSite? In total, over the years, we purchased 36 dedicated servers.

And it's actually a good point, what you mentioned about the schools and the way we were doing that, because this is actually not our first foray into virtual servers. It started a little bit as an experiment to see whether or not it was something we could run ourselves. What we would do was buy a large dedicated server, much larger than the kind we would buy for shared hosting, and then we would install software on there to make it virtual. And we kind of were like, well, we'll give this a try. We were using a piece of software called SolusVM, S-O-L-U-S. SolusVM is management software you install on the box, and it used KVM as the back end for creating these virtual environments. And so we would buy a large server and put three or four schools on there, and they were each their own dedicated virtual instance within this larger server.
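A quick aside on the block storage mentioned a moment ago, since pairing a fast droplet with a big attached volume is the whole trick that made DigitalOcean viable for storage-heavy shared hosting. Another hedged sketch against the v2 API; token, volume name, region, and droplet ID are placeholders:

```python
import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}  # personal access token

# Create a 1 TB volume, independent of any droplet's built-in disk.
vol = requests.post(f"{API}/volumes", headers=HEADERS, json={
    "name": "shared-hosting-data",
    "region": "nyc1",
    "size_gigabytes": 1000,
}).json()["volume"]

# Attach it to an existing droplet (ID is hypothetical). It shows up in
# the droplet as a block device you format and mount like any disk, and
# it can be resized or re-attached elsewhere later.
requests.post(f"{API}/volumes/{vol['id']}/actions", headers=HEADERS, json={
    "type": "attach",
    "droplet_id": 123456,
    "region": "nyc1",
})
```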
So those instances didn't necessarily have any cross-communication between them, but they were all on the same box. And that SolusVM arrangement was sort of our in-between way to get the storage that we needed but also do something virtual, so we could, to some extent, increase the limits on things, change stuff around, expand the hard drive and so on. We probably did that with Domain of One's Own for the first maybe 12 schools that we brought on, somewhere around there. And we were able to do Domain of One's Own very cost effectively that way. But I think we quickly found that running your own virtual environment was not a business we wanted to be in. In the same way that I don't necessarily want to run my own data center and have to manage servers in that sense, driving up to Reston, Virginia late at night. As bad as this weekend was, it wasn't that. And I could see it quickly devolving into that. So I'm more than happy to offload that level of infrastructure management to another company. And once we were able to start moving to DigitalOcean, we could provision a VPS directly with them, one per school, rather than having this weird situation where we would buy a big server and split it up ourselves. And that increased exponentially the number of migrations we had to do, because in addition to those 30-some dedicated servers, we might have one server on ReliableSite but really have to move like four schools over in order to close down that one server. So it was quite a bit of work, and I'm glad to see that it's done. Yeah. I mean, it was. It was a drain.

And it's nice, too, because with DigitalOcean, it dovetails nicely with the conversation we had on Friday, which Tony Hirst generously blogged up today, this idea that in these virtual server environments like DigitalOcean, we get this peek at what he's calling a serverless environment, where basically the server, all of that stuff, is taken care of behind the scenes thanks to containers. You don't see it as a client. It would be interesting to see in another five years what our server environment will look like. DigitalOcean has obviously been amazing for us, but you said how fast this stuff moves; I wonder if there's a kind of serverless environment coming where we're working with containers and APIs, and all of that is provisioned almost outside of our ken entirely. Sure. Yeah.

But the danger there is always, you know, the joke, and I think I had a sticker on my laptop at some point: the cloud is just someone else's computer. And so what we might call serverless is absolutely not serverless. There are servers somewhere, but it's about who owns them, who's running them, and who's managing that. And if it's not you, it's someone else. And don't get me wrong, there are real benefits to that, by any means. But I was just playing around this weekend with a piece of software, and I tried the Zeit Now stuff again to run it on that, and it started up cleanly, and I was like, oh, this is really awesome. And then I was going through the instructions on how to create an admin account, and they said, well, you open up the database and you have to run this SQL command. And I didn't have access to be able to do that at all.
And I was like, oh, okay. So they had a Docker container, it started up automatically, everything was running, but without that level of access, I couldn't get any further. So I think there's a balance there, a marriage: it's not just the hardware and the infrastructure, the software is also going to have to evolve to be usable in those environments. That was something that struck me as well. Not everyone is obviously going to need to be able to do that, but with Omeka, if you need to upload a plugin or theme, you have to have FTP, so you have to have access to the file system. And in places like WordPress.com, you don't. There are some environments where it's made very easy, at the trade-off of not having the level of access you need to get under the hood, so to speak.

So it's interesting, too, because that brings up the point that while we were running our own virtual server environment, when we saw DigitalOcean and realized that they had figured out those pieces, we had no problem stepping back and saying, go for it, right? We'll run it there. It's a good relationship. They're a fairly young company, but they've been growing and scaling. So I just wonder whether we won't see the same thing with something like Now or Heroku or whatever that environment is. Just to think that you could, on a Friday, drag and drop a Docker application and it'd be up and running. It's a very interesting environment for us to aspire to. In fact, if we were doing it, we would probably in some relationship be behind the server, right? In some ways we'd be saying, we're hosting on Now with another company and we're just the go-between. But it's just interesting to me, because I am fascinated by this stuff, partially because I'm mostly ignorant of it, but the other part is because I do think these new server environments invite new notions of what a site is.

It was interesting when Tony Hirst called it serverless. The idea of serverless that he's talking about, just to be clear on terminology for people watching, is that you're no longer going and spinning up even a virtual server and then running a Docker container on that. The hosting company, Now, is taking care of all that for you. So you never even see the server; you just deploy a container and it's being hosted on a server, like you said, but in a sense it's serverless. Right. So the idea for me, I thought, is serverless, headless web development. We need to call ourselves not Reclaim, but Hostless. Hostless. That's what we need to be. I think that's it. We are hostless hosters. Just think about the marketing you could do with that: you don't need us. Yeah. But please check us out anyway. Please, please spend some money. Run your own, you know, it's all out there. But I still think, I mean, for me, Now represents a lot of this, and that's what I want to talk about. We need to have Tony on and maybe have him talk about BinderHub. Yeah. Because there's another kind of... what he was calling serverless, and explaining why it's serverless; he has a very systematic way of thinking through that, which I liked and found helpful. What does this next generation of hosting look like?
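Since "drag and drop a Docker application and it'd be up and running" is the dream being described, here is roughly what deploying a container looks like from the client side, using the Docker SDK for Python. A minimal sketch; the image and port mapping are illustrative:

```python
import docker

# Talks to whatever Docker engine the environment points at -- local,
# remote, or one a platform runs for you behind the scenes.
client = docker.from_env()

# Run a stock Ghost blog image in the background, mapping its port 2368
# to port 8080 on the host.
container = client.containers.run(
    "ghost:latest",
    detach=True,
    ports={"2368/tcp": 8080},
)
print(container.id, container.status)
```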
He was talking about cPanel, which he called first gen. The current generation is something like DigitalOcean, and the next generation might be considered something like Now, which for me is a beautiful kind of panorama of where we were, where we are, and where we might be going. And I'd love it if Reclaim Today had an ongoing thread, not all of the episodes but some of them, always returning to this idea of infrastructure and how these things scale and what that looks like for us, because I'm always interested in hearing people talk about it or write about it. It helps me conceptualize it. Just like when I hear you and Michael talk about headless web development, or Tom Woodward. I dig that.

One of the projects that I want to play around with more, it's got a weird name, is called Kubernetes. It's developed by Google, and it's related to Docker, but it's sort of like a cloud environment for Docker. What's interesting, and it's sort of that evolution, is that instead of just going from dedicated to a virtual server, now you're talking about an infrastructure where you have multiple servers that are all connected, and things can be dropped on any one of a number of servers. So instead of saying, I'm on Hüsker Dü or I'm on Minutemen or something like that, you're really in the cloud, because your stuff could be on more than one server. And what's nice with that environment, too, is that if one of those servers goes down, it'll just pick up on another one. It's being duplicated and copied and replicated across that cloud environment. So that's something I plan on playing around with more as we get into looking at Docker containers and that kind of stuff, because obviously you have to run your application in a way that's supported in those kinds of environments. But that can be a really powerful thing, because then your cloud can consist of servers over in New York and some over in San Francisco and some in Amsterdam and all over the place.

Well, that's a really good point, and it's going to wreak havoc on our naming system. But beyond that, what I really like about that, what for me is interesting, is that I remember around 2015-2016 when we started to get serious about Linode, because we also used Linode for a while. We played around with AWS, and then we decided to stick with DigitalOcean. I think that was a good decision. But the thing that struck me was when Linode had that week-long outage in their Atlanta data center. Luckily we only had a couple of things there, I don't remember exactly which, like THATCamp might have been down or... It was their Atlanta data center, and they were getting DDoSed to hell and, I guess, could not deal with it. And luckily, while Linode was where we were putting a lot of virtual servers, we had been putting a lot of them in, I believe it was, the Newark data center, so none of those were affected by it: VCU RamPages and other programs and projects that we were running. But thatcamp.org was affected, and one other one that I can't even remember. Luckily it was just a couple of servers, but it was like a week of up and down; it'd be down for like two days, and they would just continually be saying, we're trying to work on it and deal with it, and our hands were completely tied. We just had to sit there and watch the fallout happen, and it was awful.
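To anchor the Kubernetes idea from a moment ago: that duplicated-across-the-cloud, picks-up-on-another-node behavior is what a Deployment with multiple replicas declares. A minimal sketch using the official Kubernetes Python client; the app name, image, and replica count are placeholders, and it assumes a working kubeconfig:

```python
from kubernetes import client, config

# Assumes a kubeconfig (e.g. ~/.kube/config) pointing at a cluster.
config.load_kube_config()

# Three replicas of a stock WordPress image; the scheduler spreads them
# across whatever nodes exist and reschedules them if a node dies.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="blog"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "blog"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "blog"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="wordpress", image="wordpress:latest")
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment
)
```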
That's the other side of using data centers, and it's one of the things we've been thinking about. And now that you talk about Kubernetes, or Kuberneetes, or however we want to pronounce it wrong, just because of who I am, I do think that starts to bring up questions of failover. At some point a node in DigitalOcean goes down. I mean, they had issues in Amsterdam not that long ago, the Easter massacre, where basically some sysadmin's life was hell for a day on Easter Sunday. Amsterdam had a problem with their storage, I think it was with their block storage, and shit went to hell and people were out for 36-48 hours. And at this point, for us that becomes a really big question mark: how do we mitigate for that?

Yeah, and cPanel hasn't yet built in a lot of tooling to help with that at all. It's still very much framed as management software that you run on a single server, and you're good to go. If you want to do replication, you'll do it at the site level; somebody could set up their own little rsync situation, that kind of stuff. But they don't do things like replicating databases, and they don't do clustered computing at all. So I do think getting there is probably going to mean also moving away from cPanel, so it's going to be a big move in general. We're talking about changing out the management side of things, both for the end user and for us; we're talking about real infrastructural change when we talk about Docker versus just LAMP environments. So a lot of change is necessary to make that happen. Not impossible to do with cPanel by any means; we'll probably talk about it on another episode, I'm sure, because we recently did this with a huge multisite, VCU RamPages, and it's possible, but there's a lot of manual work involved and no tooling in cPanel to assist with it. So it's interesting; there are a lot of benefits in moving strategically but slowly towards this aspect of hosting and infrastructure.

Yeah, and it feels like that's what the last two years were about, and getting to DigitalOcean was a real... I mean, I definitely have felt good about the move, being done with ReliableSite and not looking back. But what's coming, and it's probably going to be a recurring theme on Reclaim Today, as it should be, is what that looks like. And being on virtual servers like we are now will make whatever our next move is, I imagine, far cleaner than it was with the dedicated servers, between IP addresses and everything else. Yeah, absolutely. I'm looking forward to it. I think this was a big milestone for what's coming. Yeah, I agree.

I was thinking, do you know the Jim Carroll song "People Who Died"? I don't. So he does this song... he wrote The Basketball Diaries, and he was like a junkie from New York. Leonardo DiCaprio played him in the movie, The Basketball Diaries, about him being a young addict. But anyway, he wrote a song called "People Who Died." You know: Jimmy sniffed glue, jumped off a building, he died, right? And then it's like, little Mary took too many red pills, she died. And it goes on: Billy got leukemia. It's not a very uplifting song, but we could do a version of that with our servers. Like, Hüsker Dü was on ReliableSite, it died. I think we should commission the Dead Milkmen to put something like that together.
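On that "little rsync situation": site-level file replication really is about this much code. A hedged sketch; the host, user, and paths are hypothetical, and note it says nothing about databases, which is exactly the gap being described:

```python
import subprocess

# Mirror one account's files to a standby box. -a preserves permissions
# and timestamps, -z compresses over the wire, --delete keeps the mirror
# exact. Databases need their own replication story; this is files only.
subprocess.run(
    [
        "rsync", "-az", "--delete",
        "/home/example/public_html/",
        "backup@standby.example.com:/home/example/public_html/",
    ],
    check=True,
)
```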
After their incredible, incredible performance at our last conference. I think people are just itching for more from them, and we want to give the people what they want. I think one of the misconceptions about the Dead Milkmen, and I'm going to be clear here, is that talent is not what ever defined punk rock. And I think that really is why... I think punk rock has moved away from its roots, and it's all about, you know, indie rock now, but it was really about talentless people performing pretty poorly, and people couldn't get that. And emptying a rooftop bar, that was what punk rock was really like in 1979. Well, hopefully that's not too much of a metaphor for how the migration went this weekend. Are you alright? I'm good. Look at me, I'm young, I feel great, man. I feel really good. You look good, man. I'm telling you, this migration was good for you. I think it kept you on your toes. It was therapeutic. They always are, in different ways. Are you going to miss them? Oh, I don't think they're gone. For some reason I imagine we'll continue to do them in some way. True.

I think, though, when I think about it, I remember Zach Davis said to me, and I know you want to end this episode, but I'm not going to let you. Zach Davis said to me, the hardest thing if you go into hosting, which, he's like, you're stupid to, haha, but if you go into hosting, the hardest part is going to be keeping your infrastructure updated. Yeah, no, it's true. And I think we've kind of nailed that. Yeah.

Recently, well, not recently, it was in the last six, eight months, something around there: we run two name servers, ns1 and ns2.reclaimhosting.com, and they actually run a stripped-down version of cPanel specifically for DNS. They've been running very reliably since the start, since 2013, with no issues at all. However, they run on CentOS 6, not CentOS 7, and cPanel is no longer issuing updates for CentOS 6; it's end of life. They started by saying, we're not going to update the DNS software anymore, for whatever reason, so it stopped getting updates completely. They still function fine, but I finally decided, okay, I'm going to need to update this. And unfortunately, there's no "just update my operating system" button like there is on Mac or Windows; when you have to move between major distributions, you're essentially reinstalling the server at that point. Luckily it was virtual already, so I could take a snapshot of it, and luckily it's duplicated, you have ns1 and ns2, so I was able to take ns1 down, rebuild it, move everything back over, and bring it back online. I did that a couple of months ago; I haven't done ns2 yet. But even stuff like that, where it's something you find out, like, oh, I've got to rework these things and basically start from scratch on another server and then migrate over again.

Yeah, the snapshot tool, where I'm about to do something and I take a snapshot of the server before it all goes wrong, has saved my ass at least three times in the last six months. And so I've become addicted to it, because I know even if something goes wrong, I can restore back to that snapshot I took. And because you can delete it once everything's done, it'll cost you pennies, and it takes no time. It's crazy. Yeah, it makes things a lot more flexible. I think Daphne is wanting us to finish up our episode, though. I don't know, I think you're blaming Daphne. But I'll get off if you want me to get off, I'll get off.
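And that take-a-snapshot-first habit is a single DigitalOcean API call. A hedged sketch, with a placeholder token, droplet ID, and snapshot name:

```python
import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}  # personal access token
DROPLET_ID = 123456  # hypothetical: ns1 in this story

# One call makes a full restore point before the risky rebuild; delete
# the snapshot when you're done and it costs pennies.
requests.post(
    f"{API}/droplets/{DROPLET_ID}/actions",
    headers=HEADERS,
    json={"type": "snapshot", "name": "ns1-pre-centos7-rebuild"},
)
```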
What's the name of the new dog? Oh, you're gonna like it. So instead of Marmaduke, it's Bavaduke. That's literally the name of the dog. Bavaduke. Yeah. His name was Duke, but I hate the name Duke, and so I changed it. And then I realized, Bava, you know what bava means in Italian? It means spit. And the dog is a drooler, drools everywhere. So when I said to the kids, you know what, this dog should be called Bavaduke, they were finally like, yeah. So the name is Bavaduke, and he's a freaking handful. I took a two-hour walk with him at 6:30 in the morning just to keep his energy down, because he's a hunting dog and he can jump. He can probably jump over a six-foot wall, like, no shit, he's got springs for back legs. So he jumps like a horse, and when he jumps, he's huge. He's a big dog; he looks like a horse jumping. Wow, that's amazing. I think I'm in over my head. I know the feeling. Alright, buddy, hey, it was good talking with you. Absolutely, big fan. Alright. And look, you have the Mooch videotape behind you, was that on purpose? It wasn't, but look at that: Mooch Goes to Hollywood, and Bavaduke goes to Italy. That's right. It's awesome. Beautiful. Alright, everybody, thanks for listening. We'll see you next time. Bye.