Hello everyone, thank you for attending my talk today, and welcome indeed to this year's virtual conference. My name is James Freeman, and today I'm going to be taking you through digital twinning for infrastructure testing; destructive infrastructure testing, in fact. You'll notice that the title on the agenda is somewhat longer than the title on my title slide, and indeed there is a reason for that: it's very much a case of I couldn't fit my title on the title slide, not without making the font tiny. We'll get to all of that in just a moment. Before we do, let me tell you a little bit about who I am. For those of you who are watching this live at the conference, I will be on the virtual chat facility, so do feel free to engage me; any questions you want to ask, please do ask as we go through the presentation. Otherwise, my social media links are down at the bottom here, and you'll see them again at the end of the slide deck. So if anybody is watching this on a replay, or thinks of a question five minutes after the presentation (which is what I normally do), please do reach out to me; I'd be so happy to hear from you and engage. Now, who am I? That great existential question. I am a consultant at a little bespoke technology provider called A24, originally headquartered in Tokyo, where this conference should have been held this year were it not for a pandemic, and based mostly in the UK, where I help businesses of all sizes, big and small, implement their open-source technology needs. On top of that, I run my own little empowerment coaching practice.
I am a Reiki master teacher, I am a father to two wonderful boys on the autistic spectrum, and my roots take me back to being an electronic engineer. I've always been around technology, probably for the last 20 years or so, but it didn't start off in open source and software; it started off very much behind a soldering iron and a CAD program, designing circuit boards and that kind of thing. So I'm never happier than when I am putting things together, inventing things, repairing things. I am also a big fan of GTD, David Allen's Getting Things Done, and you will see a flavour of that as we go through today; indeed, there is a reason, a sort of methodology, behind today's story, how it evolved and what I'm going to share with you. Last but not least, you will notice a little bit of a bias, I must admit, towards Ansible throughout the presentation. I have three books published on Ansible, so, full disclosure, my bias is definitely going to be towards Ansible whenever I have something to automate, particularly in the open-source space; please forgive me that. As I say, I hope you enjoy the presentation, and let me take you on a bit of a story. So, with homage to the classic joke about the horse walking into a bar and the bartender asking "why the long face": why the long title, so big that it wouldn't even fit on the title slide? Ultimately, I suppose, when you're pitching for a slot on a conference agenda, I wanted to be descriptive; I wanted to put out there exactly what it was that I was going to convey. And Ansible is a big thing; again, we'll touch on that a little bit later. I wanted to really put out there what it was
I was going to talk about, but also the why. The real reason behind all of this is not just infrastructure testing, but the ability to do things that normally you would think twice about doing. If you'd spent hours building a really nice, shiny, well-built physical production infrastructure, and then you went and blew it up, you would normally think twice about that. However, there is a concept I want to talk about today which I think lends itself very much to testing those use cases, those scenarios that we don't normally think about; the kind of things where people say "oh, that shouldn't happen". So let me tell you a story, let me tell you how I solved it, and all about how we can get to destructive infrastructure testing through this thing called digital twinning. Like all good technology stories, this started with a simple request, and you can decide as we go through whether I over-complicated the answer; that's a completely different point. The CEO said to me: "build me OpenStack". A really nice, tightly bounded request. However, OpenStack is one of those things where it's really easy to build a sort of all-in-one, single-node deployment, which is great for kicking the tyres, learning how it's installed (at least on a single-node basis) and what services are running. I've done it a number of times: working examples for potential clients, for one of the books that I mentioned, even for a professional certification exam. The whole model of the all-in-one node, for me, falls down when it comes to doing this for a production kind of environment, and indeed we were looking at this in a production context. When you want to actually learn a technology for a production environment, it's so important to know what the weak points are and how to deal with scenarios when they come up. So what happens if one of our switches falls off the network? What happens if this particular node dies?
How do we recover from that scenario, that kind of thing? Indeed, what happens if someone fires off a denial-of-service attack against us? So, for me (and I'm no OpenStack expert; again, full transparency on that one), what I really wanted from this exercise was to truly learn OpenStack, not just at the level of "I built an all-in-one node", but how to deploy it in a production kind of architecture, so that we as a business could learn from it and grow and move forward. Having thought all this through quite carefully and decided what I wanted to do, I was presented with my constraints, which were: here's a single blade to build it on, and it has mechanical storage. A little spoiler for the end of the presentation: don't use mechanical storage. But that's a separate point. So, digital twinning. First of all, if you've not come across this concept before: according to the law of the internet, at least, it's something that evolved at NASA, and it started out with the very first space capsules. If you're building and designing a space capsule and you want to know if it's going to work, whether the astronauts are going to be able to sit in it, whether they're going to be able to interface with the control panel, reach the buttons, all those fundamental basics of designing what would be, I suppose, a user interface of sorts, you need a mock-up. So NASA built actual life-size models of the space capsules before they built the real thing, to make sure that they got the design right. Now, this obviously was very early on, before all the technology and computers that we take for granted today, and it's something that evolved into computer modelling and computer simulation. So this concept of building models, of the sort of physical twin, morphed into the concept of the digital twin: a digital representation of a real thing. Now, obviously I've mentioned it with respect to space travel and that field, but really it's a concept that
can be used anywhere. It's a high-level concept that says: we're going to make a model, an accurate representation, of this system, this design, this whatever it is. So, having discovered that this very thing existed, and wanting to learn OpenStack properly from an architectural point of view, not just from a single all-in-one-node point of view, I decided to set about building my own digital twin. Bear in mind I only had one blade, but I wanted to build something that was an accurate representation of a production OpenStack deployment, even if it was a small one. At the very least, I wanted to be able to say it had high availability: I can take nodes out of service and it keeps going. Now, the next piece of the title, with Ansible and cloud-init in it. I've built OpenStack a few times, as I mentioned, in the single all-in-one-node context, and I've learned the hard way that I'm not an expert and I'm probably not going to get it right first time. When you don't know a technology and you're getting going for the first time, particularly when time is of the essence as well, there are times when it's actually easier to rebuild; to go "I got that wrong, I messed that up", for whatever reason. Maybe I didn't understand the configuration; maybe I was just testing something and wanted to look at it from that angle and then try something out. Ultimately, it can be easier to just rebuild parts of the infrastructure, or indeed the whole thing, than it can be to pick through configurations to fix things, to reconfigure bits and pieces. So what I'm really thinking of here, and I put it on the slide as "Docker-like behaviour", is this concept where we don't treat images and virtual architectures like pets; we treat them like cattle, to use the classic analogy. When something doesn't serve us, we simply build a new one, because it's quick and because it's easy. But although I think this was a fair hypothesis,
I knew that I didn't want to spend my time doing manual tasks like, for example, building Linux servers from an ISO: booting from the ISO, clicking through the installer, setting the initial IP address, setting a username and password, bonding, and so on and so forth. Particularly if you consider that I was going to build a sort of virtual architecture with perhaps ten nodes in it, and that this was something I was anticipating doing more than once. I did not want to end up doing all those basic steps over and over again, because it would be a massive waste of time that could be spent learning the technology and doing something more constructive. So it was immediately apparent to me that, okay, I'm going to need some sort of automation stack, some sort of software that's going to automate the deployment process. First of all, then: why Ansible? I've been fairly upfront about this: yes, I have a huge amount of familiarity with it. I was also looking around the internet, doing a bit of research for this and other presentations I've done recently (there's a link at the bottom of the slide that you can go and check out if you wish), and Ansible has really boomed in popularity over the last few years. It feels like it's come almost out of nowhere, and it's accelerated past so many of the Chef and Puppet installations, to give just those two examples, that were really quite well established in the market.
I work quite a lot with Ansible in the field, so not just as an abstract concept for the books but in real-world scenarios, and I think two of the reasons it's gained such traction are these. First of all, it's agentless; that's one of the really big factors in that decision. From a corporate point of view, you don't have to roll out a new agent amongst the hundred other agents that you're probably already managing for endpoint protection and so on. It uses the native transport, SSH, which is so important in scenarios like Linux, because you've got SSH already embedded, and in switch configuration, because if you're configuring network devices, certainly in 2020 they've almost certainly got an SSH interface, which is just brilliant; it lends itself to Ansible so very well. Beyond that, Ansible is idempotent, that wonderful word you hear around automation circles. That basically means that if I write a playbook (and, admittedly, write it well, because you can write a bad playbook just as you can with any other form of code), then I should be able to run it once, twice, a hundred times against the same infrastructure and always end up with the same state at the end of it. So it really doesn't matter if you run it over the top of a previous run; in fact, you should be able to break one thing and then rerun the playbook, and it puts it back how it was. That is the whole concept of being idempotent: it's basically saying my servers have a nice steady state, Ansible is going to put them in that state, and if they ever leave that state, you can put them back into it with Ansible. One of the other things I really love about Ansible is that the code is often described as, or even is, self-documenting, and that's a term that I've picked up.
I believe that to be very much true. I think that Ansible code is quite easy to read. There are a few trickier bits of it, but I think that to go from zero with Ansible to meaningful automation is a relatively rapid journey, and if someone else gives you a playbook or a role or something like that, it's normally quite easy to pick up. Even if you don't really know the language, you can go: okay, I kind of understand what that's doing; at least I've got an idea, and I can go away, do a bit of research, do a bit of searching on the internet, and figure it all out. Now, of course, when you're automating any set of tasks, sooner or later you're going to come across secrets: passwords, that kind of thing. Deploying OpenStack is no exception to that. OpenStack has secrets and passwords that are necessary for configuring it, for initial login, for services to communicate with each other, and so on. Again, Ansible has built-in support for that, in the form of Ansible Vault. Ansible Vault essentially encrypts sensitive data like passwords at rest, but you can use them in playbooks as if they were unencrypted: a brilliant addition to any automation technology. To me, Ansible was just a really natural fit for this. Beyond that, there is the wonderful OpenStack-Ansible project. So not only was I going to do my basic bring-up and configuration as far as possible with Ansible; there is, as I mentioned, a really wonderful project called openstack-ansible, available on GitHub. It is well staffed; I spent quite a bit of time on their IRC channel, and they helped me out loads whilst I was working on this project. So, just logically, in my head it was very much a case of:
Well, if we're in a state where OpenStack itself is deployed with Ansible, then there's no point in me bringing up the rest of the environment with some other automation technology. I want to reduce this to a nice simple level where I'm using as few technologies as possible, but really picking the right ones. So Ansible, for me, was a very natural fit here. Now, why cloud-init? How has cloud-init snuck into this arena? Well, Ansible is really great once you've brought up your infrastructure: your VMs have booted up, they've got that initial username and password (or it could be an SSH key) for login, they've got that initial authentication mechanism, they've got that initial IP address. And, a little teaser here as well: you also need Python installed. I mentioned that Ansible is agentless, and it is indeed; it does, however, expect Python to be installed on the virtual machines, or indeed physical machines, that it's automating. Most modern Linux distributions do feature Python, which is why this is a fairly reasonable assumption for Ansible to make. But some of the really minimal cloud images that you might come across for Linux from some of the vendors don't have Python built in; it's not included, simply to keep the image really light and really compact. That's obviously great from a space-saving point of view, but it's a real pain if you want to automate with Ansible, because your first step, in the worst-case scenario, is potentially to manually deploy Python across all your nodes, and that's really something you don't want to have to do. Now, as I mentioned a couple of slides ago, to bring up blank Linux images and configure them with usernames, passwords, IPs, even to put Python on them: none of it is rocket science, none of it is difficult. But it is tedious if you have to do it at scale, and if you have to do it over and over again. And remember my hypothesis.
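To make the idempotency idea and the Python bootstrap concrete, here's a minimal sketch of the kind of playbook being described; the group name, package choices and paths are illustrative assumptions, not taken from the real project. The first play uses Ansible's raw module, which needs no Python on the target; the second is an ordinary idempotent play.

```yaml
---
# Illustrative sketch only: "openstack_nodes" and the package choices are
# assumptions for the example, not the real project's configuration.
- name: Bootstrap minimal cloud images that may lack Python
  hosts: openstack_nodes
  gather_facts: false           # fact gathering itself requires Python
  become: true
  tasks:
    - name: Install Python with the raw module (no Python needed on target)
      raw: test -e /usr/bin/python3 || (apt-get update && apt-get install -y python3)
      changed_when: false

- name: Idempotent base configuration
  hosts: openstack_nodes
  become: true
  tasks:
    # Re-running this play reports "ok" rather than "changed": Ansible only
    # acts when the host is not already in the desired state.
    - name: Ensure chrony is present
      apt:
        name: chrony
        state: present
    - name: Ensure chrony is enabled and running
      service:
        name: chrony
        state: started
        enabled: true
```

A second run of `ansible-playbook -i inventory site.yml` against unchanged hosts should leave every task at "ok", which is the idempotency property in action; any secrets such a playbook needed would live in variables encrypted with ansible-vault, as mentioned above.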
I'm probably not going to get this right, not even on the tenth try, never mind the first one. The last thing you want is to be doing this repetitively, at scale. Cloud-init is something that is baked in to most of the Linux cloud images you will find out there. In fact, if you have experienced Linux on any of the major cloud providers (not picking on any particular names, but you know the big ones off the top of my head, like Azure and AWS, for example), the initial bring-up of those machines is almost always configured with cloud-init. Cloud-init is used within OpenStack itself to configure the VMs; again, to configure the initial login and the initial IP address. It means that you can have really clean VM images with no metadata, and they download everything they need for cloud-init from a known URL or provider source. Now, even when you install Ubuntu Server from the ISO, so you might install it on a physical piece of tin, if you have a poke about in the system afterwards and look at how it's configured, how it's set the IP address and all that kind of thing, it's actually configured with cloud-init. What Canonical have done with their Ubuntu distribution is they've said: cloud-init is kind of here to stay, so let's put it in across the board, even when you're not using the cloud, because that way our builds are consistent. I think that's a really great idea. And cloud-init, as well as working with cloud platforms, as the name implies, also works really well with local platforms, and one of the ways you can get data into cloud-init is from an ISO image. Now, ISOs as physical optical discs are obviously going the way of the dodo.
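As a sketch of what that ISO-delivered data can look like, here is an illustrative cloud-init user-data file for the NoCloud data source; the hostname, username, key and package list are all made-up example values, not the real project's configuration.

```yaml
#cloud-config
# Illustrative NoCloud user-data: every value here (hostname, user, key,
# packages) is an example, not taken from the real project.
hostname: infra-node-01
users:
  - name: deploy
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...example-key-only
# Give Ansible an interpreter to talk to on first boot
packages:
  - python3
runcmd:
  - echo "first boot complete" > /var/log/first-boot.log
```

Paired with a small meta-data file, this gets written to an ISO whose volume label is cidata (for example, `genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data`); attach that ISO to the VM and cloud-init picks it up on first boot.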
In fact, I think they've kind of already done that in 2020, but attaching an ISO image to a virtual machine is something that's still commonplace, easy to do, and well supported. So this is a really great way, when you're working with virtual machines, to get cloud-init data in, even if you're not using a public or private cloud platform. And cloud-init: I've seen it do a lot of the basics. Here's a static IP address, here are your DNS servers, here is the initial username and password of someone who's going to log in. What I learned as part of this, and what I just hadn't appreciated about cloud-init as a technology, is that it can actually do the full bring-up: it can configure the bonding, it can configure SSH keys, it can run arbitrary commands. So if you've got a cloud image without Python installed, it can fix that: if you're using Ubuntu, which I ended up using, you can do apt-get install python as part of the initial bring-up, as well as all the networking and everything else. So cloud-init, for me, works really well hand in hand with Ansible. There's obviously some overlap between the tools, because you could run initial scripts in cloud-init that do all the bring-up that you want to do, but what I did was draw a line in the sand, as one does, and say: I want cloud-init to be responsible for just enough configuration that the VM stands up by itself, and then I'm going to let Ansible take over and do all the heavy lifting. There's no right or wrong in doing more with cloud-init; it's just a case of drawing your own line in the sand and working out where that's going to be for you, where you think it's going to work best. Now, hardware. I had one blade for this task, and it was fairly apparent to me that, in spite of my burning desire to build something that looked like a highly available production OpenStack architecture, I wasn't going to get a handful of switches and a handful of blades or pizza boxes or whatever to build it on. And so it was
a case of: okay, fine, I think it's pretty plain to see that I'm going to need a hypervisor to achieve this. I work in the open-source space, so it needs to be free; free as in open source, free as in GPL or something appropriately licensed. Fast is obviously important, because if you imagine I was going to try and put ten nodes on one box, each doing a fair amount of work, it's got to be fast; I don't want a heavyweight, cumbersome emulation layer. It needs to be flexible. And, bearing in mind that OpenStack is obviously its own virtualization solution outright, it needs to support nested virtualization: I need to be able to pass those CPU flags through to the VMs so that they can do nested virtualization. We all know that nested virtualization is not going to be fast, but at least it would allow me to get the functionality up and running. And, spoiler: the CPU support for nested virtualization was there, so that was one battle I didn't have to fight. Now, Linux kernel virtualization just seemed like the absolute perfect fit for me. It's just there on just about any modern Linux that you install; it's free, it's lightweight, it works, and I know it supports nested virtualization because I've done it. So, a bit like Ansible and cloud-init fell into place as natural choices, libvirt and the Linux kernel's virtualization (KVM) just fell into place as the natural choice for this to work with. So, what about the switching? We've talked at length about bringing up infrastructure, about multiple nodes, about really bringing up the whole thing; what about the switching? For me, I needed something that I could virtualize, something that I could run on KVM; something that was modern, something that was fast, but also, importantly, something that was free. I'm sure most people know you can run things like Cisco switches emulated on tools out there; however, there is a legal question mark over downloading those ROM images and running them.
So I didn't want to fall into that grey area; I wanted to choose something that was genuinely free and legitimate to use in this context. And the answer came to me in the form of Cumulus Networks, who, since I started all this work, have been acquired by NVIDIA. They produce a Linux distribution called Cumulus Linux that runs on white-box switching hardware; we're talking something like 130-plus platforms at this time. When you log into it, it looks a lot like Debian, and you can use all your Linux tools to manage it, yet it's a fully featured switch management platform, which is absolutely fantastic. And they very kindly release a toolset called Cumulus VX, which you can run in just about any hypervisor. So that was fantastic: I could download Cumulus VX and configure a modern, highly available switching architecture within my chosen hypervisor, and have not just redundancy at the OpenStack layer but redundancy at the switching layer as well. Now, this is all great: lots of decisions made, lots of planning put in place. The thing is, I am very visual in the way that I learn and the way I process things. I knew that if I tried to build something of the order of ten Linux nodes with five switches (this is leaf-and-spine switching, with an out-of-band management switch), my brain was going to melt if I tried to wire that up using tap adapters, bridging, linking ports within XML files for libvirt, that kind of thing. Bear in mind that I had decided at this point that I was going to follow a reference design from the OpenStack-Ansible project; this is where the ten nodes come from. So I was going to be using VXLAN, and I was going to have two physical networks, because obviously you don't want to be using bandwidth for storage and then have people competing for that bandwidth for their actual functional applications, with VLANs on top of those physical networks. It was just that my brain was
absolutely going to melt if I tried to implement this all in flat text files. So I wanted to commit the ultimate heresy of the open-source space and have a GUI for this. Cumulus Networks, if anyone's interested, have on their GitHub site some great tools for developing architectures and for wiring up large networks; I think it's based on Vagrant, if memory serves. Do go and investigate that if you are so inclined and you do want to do this in text mode. But for me, I wanted a GUI, and I'm not ashamed to admit that here today. Now, my mention of heresy came about in the form of GNS3. I had heard about this a number of times in my career, dabbled with it a number of times; I'd sort of come across it in the context of wondering whether it was worth me getting a CCNA, learning some more about switching, and so on. But it turns out that when I pulled out GNS3, because I liked the GUI on it, and started to poke about, what I realized was that under the hood GNS3 is quite interesting in the way it works. It uses Linux kernel virtualization. It's designed to support cloud images, so you can download your Ubuntu cloud image or your Cumulus VX image and run it almost straight off in GNS3. You can mount ISO images easily, which means we can get our cloud-init data in from that ISO. And you can do all the stuff that you'd expect to be able to do on the command line with libvirt: you can manipulate the backing images, which are standard qcow2-based images, so you can use libguestfs, and you can use qemu-img to manipulate them, inject files, that kind of thing. You can add any command-line switches to the qemu binary for running it, which is essentially how you can get your nested virtualization turned on. It wasn't designed this way, but it's a really great interface to Linux's built-in kernel hypervisor. And it's free.
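The qcow2 manipulation mentioned above can itself be automated. The sketch below, with made-up paths and node names, shows one way this might look: an Ansible task that stamps out a copy-on-write overlay per node from a single base cloud image using qemu-img, so every VM boots from a thin clone rather than a full copy.

```yaml
---
# Illustrative sketch: the image path and node names are assumptions,
# not the real project's layout.
- name: Create copy-on-write overlays from a single base cloud image
  hosts: localhost
  connection: local
  vars:
    base_image: /var/lib/images/ubuntu-20.04-cloudimg-amd64.img
    nodes: [infra1, infra2, infra3, compute1, compute2]
  tasks:
    - name: Create a qcow2 overlay per node, backed by the base image
      command: >
        qemu-img create -f qcow2
        -b {{ base_image }} -F qcow2
        /var/lib/images/{{ item }}.qcow2
      args:
        creates: "/var/lib/images/{{ item }}.qcow2"  # skip if already present
      loop: "{{ nodes }}"
```

The `creates` argument keeps the command idempotent, and `-F` declares the backing file's format, which newer qemu-img releases expect to be stated explicitly.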
It's open source, and it has a GUI. Now, this here was the end result, for anybody who wants to know what it all looked like, albeit powered off in this screenshot, all wired up. Unfortunately, getting this in and doing a live demo isn't possible in the time that we've got, but this is what it looks like. I think you'll agree it would probably benefit a little bit from some orthogonal lines, something like that, but the great thing about this for me was each of those network links. You can drag any of the nodes around to make it easier to see; I've kind of squished it up a little bit so it would all fit on the one screenshot there. But you can right-click on any of those links; you can take them away; you can sniff the traffic on them to see what's going on. So you get this really great tool to play with the network, to see what's going on, and to manipulate it if the need arises. And each node you can right-click on, power on, power off, or get into its terminal. It's a really great way to remove the headache, the brain-ache, of trying to manage an architecture like this when you've virtualized it all on one piece of tin. Now, the end result was, bypassing a lot of the story and a lot of swearing and everything else, that it all worked, and it really did produce a real working representation of something that you might put together in the real world for OpenStack. I think that was great, and the great thing for me was that it was all deployed with Ansible and cloud-init, so the whole point was that this is something you could do again and again. Now, drilling down into that: obviously that's a fairly quick statement, so what does that actually mean?
Well, the GNS3 environment is completely isolated from the rest of your network. The playbooks and the cloud-init scripts that were written could be used to bring up a real-world installation architecture of OpenStack on physical tin. You could even replicate the MAC addresses, if you wanted to, and the IP addresses that you would use in production, in GNS3; because the networks are isolated, they're not going to overlap. So you could test every aspect of this: it is a complete digital twin. And the beauty of it, as I say, because you can twin absolutely everything, is that you can literally develop and test your Ansible and cloud-init scripts and playbooks and whatnot in GNS3, and then use exactly the same ones on a rack full of hardware and, hopefully, get the same results. So it's not just useful for testing and learning about the architecture; it's useful for testing and learning about the automation process too. Now, this process can be applied to just about any technology. This was quite an ambitious starting point, with OpenStack, but I've since used it to build any number of demo environments for clients, proofs of concept, training environments, that kind of thing, and I've found it really valuable. I've tweaked and enhanced the scripts, the cloud-init build environment, and the things that I used on this initially. It makes a really great training tool, and because it's got that visual appeal, it's particularly good when I'm doing pre-sales work as a consultant.
I can use it to show people, visually, the kind of network that we're looking at building for a solution, and then I can drill down into it: actually run it, get onto the console of boxes or onto web interfaces, and show them how it all works. So this has massive application for simulating real-world architectures, because very little in 2020 lives on one server or one box; almost everything that we're going to deploy, particularly in a production context, needs to be highly available. It needs more than one node; it needs some form of replication or HA or something. But what else can we do? This by itself, for me, has been massively valuable, but what about if we actually get down into the process of breaking something? Let's do something fun; let's do something destructive. Now, here are some ideas that I came up with. I've had time to try a few of these, and the list isn't exhaustive, but this is to give you an idea of the kind of thing that you could do. Bear in mind that if you'd actually built this on physical hardware, configured every node by hand, got in there, configured the bonding, configured the switches, spent time installing everything, you would be really, really upset if someone came along and said: I want to try this attack on your network; it's probably going to break it, but can we try it? It would be like: no, it's going to take me hours to rebuild it. But the thing with the GNS3-based digital twin is that you can pull individual network links: just take them down and see what happens. You can pull entire switches: you can literally pull the plug on a switch and see if traffic still routes. I'd put in place a little spine-and-leaf network architecture there, which I took from a reference design I found in Cumulus Linux's application notes. And that's great.
You can take one switch out from each tier, and the traffic still routes. You'd almost never do this otherwise: you wouldn't drive to the data centre, get the pass, get in, get through all the security, and start pulling plugs on things to see what would happen. But you can do that in this environment. You can turn nodes off. You can launch denial-of-service attacks, brute-force attacks; imagine getting in there with a Kali Linux VM and doing all sorts of nefarious stuff to it. The other great thing is business continuity. You could literally back this environment up, so you can test whatever your process is to back up, let's say in the OpenStack world, the MariaDB database that holds all the config data, or to back up all the user data that Glance and Cinder and those kinds of services are referencing. You could back them up, destroy the environment, and then test your restore processes, all easily in a completely isolated environment, without affecting anyone else and without any risk to customer data or anything like that. So it's really valuable for testing out how you're going to deal with situations that we spend a lot of time putting things together in the hope will never happen. You know: a RAID controller blows up and, through some weird modus operandi, completely destroys the whole array. That just shouldn't happen, right? But in the very edge cases, it does. And how do you deal with that? Do you even know if your backups work?
Have you tested a restore? As people have said to me, your backups are no good until you've had a successful restore from them. The great thing is you can do all of this destructive work and still back the whole thing up: at the end of the day it's a JSON file that describes the GNS3 environment plus a bunch of qcow2 image files, so you could create a tarball of it. And if you don't even want to go to those lengths, GNS3 actually supports snapshots, with functionality very much like you'd find in any desktop hypervisor. So you can create a snapshot, and you get to blow it up more than once. It's not like the building, where once it's come down, that's it, and now I've got to spend weeks or months, metaphorically or otherwise, rebuilding it. You can literally do something evil, blow it up, and then within a matter of minutes put it back exactly how it was at the beginning and do it all over again. Maybe I'm getting too much into this, maybe it's too much fun for me, but I think that has huge possibilities for training, for business continuity planning, for destructive testing, and for security testing.

Now, a quick worked example as we come towards the end of the presentation. We've got three OSD nodes in a Ceph cluster as part of this OpenStack-Ansible setup, and each of them has one dedicated data disk. It's a very, very simple setup here; you'd obviously have something a bit more substantial in production, but this is what I created. We could pull the plug on a node as a starting point. Because we've got three nodes, my hope would be that, if I've configured it right, everything keeps running: the Ceph cluster can deal with the loss of one node, and the data is still accessible, albeit perhaps while being rebuilt on the fly. But it lets us actually test this; you would never do it while your customers were using an environment.

Now, assuming you get that far and it all works as you wanted it to, you could then actually go in and do something really bad to the data disk. For example, we know it's a flat file, a qcow2 image, so you could use dd to copy from /dev/random over that qcow2 image. Again, you'd never do something like that in a production environment, and probably not even in a conventional testing environment, just because of how long it would take to put things back together if your recovery process didn't work. But here you could completely destroy that disk, bring the node back into service, and see what happens, see how the cluster behaves. Does it start serving garbage, or does it keep serving correctly? Then, assuming it does what you hoped it would, document the procedures, test your recovery and rebuild processes, and get the node back into service. So it provides a complete disaster recovery training sandbox environment, which I just think is absolutely fantastic. As I say, I've worked with companies big and small for, I'd say, almost 20 years now in this field, and most of them have never had a test environment at quite the same scale as production to do this sort of destructive work on. Now, obviously, there are some limitations to this.
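That disk-destruction step really is just dd pointed at a flat file. Here's a sketch against a throwaway stand-in file; never point this at a disk or image you care about, and note the file name is illustrative, where the real target in my lab would be one OSD node's qcow2 data disk:

```shell
# Create a throwaway stand-in for an OSD node's qcow2 data disk.
set -e
truncate -s 1M osd-data-disk.qcow2
before=$(sha256sum osd-data-disk.qcow2 | cut -d' ' -f1)

# Scribble random data over it -- the virtual equivalent of a drive that has
# failed catastrophically. conv=notrunc overwrites in place without truncating.
dd if=/dev/urandom of=osd-data-disk.qcow2 bs=1M count=1 conv=notrunc status=none
after=$(sha256sum osd-data-disk.qcow2 | cut -d' ' -f1)
[ "$before" != "$after" ] && echo "disk contents destroyed"
```

In the twin you'd then boot the node back up with that ruined disk attached and watch how the cluster copes.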
I'm sure people will have spotted some of these already. It is being virtualized, so it's not going to be as fast as real hardware, particularly when you come to nested virtualization. And there is some hardware-specific stuff that you can't do. Being able to virtualize the switch layer has been fantastic, but if you're using anything other than Ethernet, say Fibre Channel or something low-latency, or if you've got specific RAID configurations of drives that you want to test out, that kind of thing you're not going to be able to do effectively in the virtualized digital twin. And you're not going to do this on a standard laptop either; you'd bring the poor thing to its knees. You do need to throw tin at this, for sure, to get it off the ground and working well for you.

Now, a few things, as we come to the end of the presentation, that I learned from the experience. It goes without saying, and it's why we're having this conference, but the Linux community is absolutely amazing. I had so much great support on IRC channels, particularly from everyone at the OpenStack-Ansible project; they were so helpful in getting this up and running. Do throw hardware at this: in the end I purchased a used workstation with two Xeon chips in it and put a big SSD in it, and don't attempt this on mechanical storage. Do throw RAM at it, do throw CPU cores at it; it will help enormously. And do use virtio for the disks and the networking. I started off using emulated e1000 NICs and emulated IDE for the disks, and it works, it's functional, it's absolutely fine, until you start to push it, and then you realize how slow that emulated layer is. And as an environment like the one I built is all Linux-based, there's no reason not to use virtio; it's all supported natively.

The other thing I learned the hard way: I started this on a high-powered Windows workstation, because it was all I had to hand initially, before I bought the machine I actually built it on. Don't do this on anything other than Linux. If you do it on, say, Windows or a Mac, GNS3 runs its whole virtualization layer in a VM on that OS, so you're already nesting virtualization before you get anywhere; if you then want nested virtualization, you're doing nested inside nested. Just don't do it; it's going to be a world of pain. Please don't go there.

Moving forwards, I've built a whole suite of lab environments in this manner: Red Hat Virtualization, Katello, Ansible Tower, you name it. I've built nice, tightly defined infrastructures that I can test and figure out configurations on. And the great thing, and this is the link to GTD at the beginning of the presentation, is that it documents all the build steps and all the configuration, because it's all there in the cloud-init and the Ansible files. I don't have to remember how to do all this stuff; I can look at my playbooks and refer back to how I got it all working originally. This has been used, as I mentioned earlier, extensively for demos to customers and clients, and I use it for my own needs: to help myself learn new technologies and get to grips with that kind of thing. My hope is that by sharing this with everyone today, perhaps we can all learn a bit more about technologies, particularly the distributed, highly available ones that are so common these days, using these kinds of techniques.

Now, as I mentioned at the beginning, I'm very happy to take questions, either now, at any point in this conference, or after it's over. Please do reach out to me on social media.
I'd love to discuss this further with you. And with that, I just want to say: here again are my contact details. If you want the code that was used to bring up the environment you saw in the screenshot earlier, it is publicly available on my GitHub account, so please do go and have a look. It's been refined a little bit since the version that's out there, but please do have a look at it, and if you hit any issues with it, or want to talk more about it or the concepts, please do get in touch. Thank you very much for your time today; I hope this has been valuable, and I hope you enjoy the rest of the conference. Thank you.