All right, good afternoon everyone. I'm Nicolas, this is Julien, just so you don't mix us up. I'm a technical trainer based in Paris, delivering training for AWS all over EMEA, and Julien is our beloved technical evangelist for the French region, though he's not really in France, he's all over the world. Today we're going to talk about running BSD on AWS, mostly FreeBSD and a little bit of OpenBSD, and we thought we'd have some fun with those two OSes.

A little bit about us first. I discovered OpenBSD in early 2000, and the SPARCstation you can see here is the first machine I installed OpenBSD on, so that dates back a long way. Shortly after that, being a bit alone as a French speaker learning OpenBSD, I started the OpenBSD France community, which I ended early this year for lack of time, too many things to do. And I've been learning Unixes in general, starting from that OpenBSD background.

Hi, I'm Julien, and I'm an older guy, so telling you when I started with open source makes me feel bad. You can guess from the age of my CD-ROM collection over there; I guess most of you weren't even born, which is a terrible thought. Back in '96, I think, I translated this book, which is probably somewhere in your library: the French version of Kirk McKusick's book on BSD. I know he was here a few days ago, and it's a tragedy that I couldn't get to meet him because I was traveling. So, hi Kirk... no way, you're here? It's the first time we meet. Wow. I can't do this presentation anymore, I feel so worthless. You should be filming this. I didn't bring the book, but that's okay, you made my day. He's a legend, a proper legend. All right, so now we should be really good, right?
Okay, so here's the agenda. We're going to talk about the AWS infrastructure for a bit, just to show you what kind of architecture we have, and then Nicolas will talk about instances, virtual machines and operating systems, and we'll start some benchmarks, because it's important to see how fast things run. Then we'll spend a bit of time, sorry, are you here too, man? I know everybody here. Okay, so now I need to be doubly good. We're going to look at how to build Amazon Machine Images with BSD, which is an interesting process, and we'll talk about automation quite a bit. Then we'll look at the results of our benchmarks, which hopefully will be complete by then, so don't build too fast. And we decided we'd love to take your questions during the session, to make it more interactive. So if there's anything that doesn't make sense, anything you disagree with, if you want to throw stuff at us, that's okay, we can take it, we've seen worse.

A word about infrastructure, then. As you probably know, our infrastructure is spread across 16 regions around the world. These regions are broken into Availability Zones, which are infrastructure partitions that are close enough to allow for replication and so on, but far enough apart that if one of them actually explodes, or if a volcano erupts in Dublin, the other ones should keep running. And we have a whole lot of edge locations, and I'm sure this number is already out of date. These are the locations for the CloudFront service, our content delivery network, which spans the globe now.

So a region is a number of data centers, and when I say number, the number is larger than one. I know some, let's call them competitors, like to call one data center a region. For us it's much more, because we think redundancy and high availability are really, really important. So we have multiple Availability Zones.
We'll talk about that in a second. They're fully connected with a very low latency network, usually less than one millisecond, which allows us to replicate data, storage, databases and so on, even synchronously. So that's one of the regions, and you probably know we're going to open a region in France this year. Yeah, this year, so between now and December 31st at 11:59:59 p.m. People keep asking me for the exact day on Twitter all the time, and all I can say is: this year. If you want to know more about this stuff, there's a brilliant presentation from re:Invent by James Hamilton, one of our infrastructure gurus. I cannot recommend it enough; it's legendary too.

So inside a region we have multiple AZs. Let's talk about the French AZs that are coming: each AZ will be at least one data center, but in fact it's always more. The largest AZs, in the older, higher-scale regions in the US, could have six or seven data centers. So each AZ is a number of data centers, and again they're close enough that you can do synchronous replication, but distant enough that if one is broken, or has a power failure, a fire, a disaster like that, the others keep running. And in terms of network latency they're very close, much less than a millisecond.

So this shows you that we have multiple levels of redundancy: we could lose a data center and not lose an AZ, we could lose an AZ and not lose a region, and we could lose a region, God forbid, it's never happened, and if you had a multi-region design you'd probably be okay. I guess we'll find out when we lose a region. Not soon, I hope. And inside the data center, guess what, we have racks and servers. Not really original, except it's all custom stuff.
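To put rough numbers on that redundancy story, here is a back-of-the-envelope sketch. The failure probabilities are made up purely for illustration; they are not AWS figures or SLAs, just the standard independence argument for why spreading across zones helps:

```python
# Back-of-the-envelope availability math. If each AZ fails independently
# with probability p over some period, an app deployed across n AZs is
# down only when all n fail at once. Illustrative numbers, not AWS SLAs.

def downtime_probability(p: float, n: int) -> float:
    """Probability that all n independent AZs are down at the same time."""
    return p ** n

one_az = downtime_probability(0.001, 1)    # 0.1% chance of being down
three_az = downtime_probability(0.001, 3)  # roughly one in a billion

print(one_az, three_az)
```

The independence assumption is exactly what the close-but-not-too-close data center placement is trying to buy: shared power or shared ground would correlate the failures and break this math.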
And if you want to know more about that, please look at James Hamilton's keynote; he goes into some of the detail on the servers, the routing, the network equipment and so on. It's custom everything, one hundred percent, because we think this is where we have an advantage, from a technology point of view and from a price point of view as well. And we decided not to have very large data centers. We could of course put much more than 50,000 servers inside a data center, but if they get too big, then when one dies, the blast radius, the impact on the rest of the system, is terrible. So we stick to, I would say, mid-sized data centers, and we build many of them right next to one another. Okay, makes sense. All right, Mr. David, your turn.

All right, so let's talk about instances, those virtual machines in the cloud, and OSes. I'll start by saying that we call them EC2, and there will never be an EC3 or an EC1. EC2 stands for Elastic Compute Cloud. Those virtual machines are based on the Xen hypervisor for now, and in the future some things are going to evolve. We also signed a partnership with VMware last year, I believe, and things have grown really fast: that hypervisor is now available in one of the US regions, us-east-1, around Washington.

To run those machines you need hardware, of course, but you also need software, and this is where it gets very interesting. You have multiple places to grab something called an AMI, an Amazon Machine Image: an OS template that contains your OS plus some more stuff. By default, AWS gives you access to AMIs for Windows, for various Linuxes, and for some BSDs, like FreeBSD for now, and we're hoping to have OpenBSD soon, very soon. Those AMIs are pretty basic: just the OS, latest kernel, latest patches and all of that. But then you might want something a little bit more custom.
You might want to add your own layer of security tools, a custom kernel and all that stuff, so you may want to create your own AMI. This is, by the way, one of the things we'll be attempting to do on OpenBSD, using some of the work Antoine Jacoutot has done in the past. So you can create your own AMI, put all your stuff in there, and eventually share it with the entire world, which becomes really interesting when you want to distribute your software. In the past it was floppies, then CDs, DVDs, downloads; now you can ship a server image with the application already pre-installed, so no one can tell you you've done it wrong. It's the software editor, the person publishing the AMI, who makes sure it has the right configuration and the right tools, so that when you boot your instance, everything runs fine.

That said, you might want to make some money with this AMI at some point, and for that we have a place called the Marketplace. The Marketplace allows you to sell, quote-unquote, by the hour, the license to your software installed on the AMI. You can either pay a little bit per hour, or pay per agreement; a one-year term is the longest I've seen so far. So that's it, four places to get your AMI.

Inside this AMI you have the software and the OS, and you want to instantiate that on some hardware. We have quite a few combinations of hardware; you'll see that in the next slides. Until now, you had to pay by the hour for those resources: the compute, the storage, and some of the network. Starting October 2nd, you will pay on a per-second basis, which is very interesting, because most of the stuff you run doesn't require an entire hour of time.
Maybe you need five minutes, and you don't want to pay for an entire hour. So you'll pay for one minute of boot time, and then the rest is the running time of your instance. This is coming up October 2nd, and it's only for Linux instances; the Windows licensing probably had something to do with that.

So you might be tempted to say, well, this VM is cheaper at some other provider's place, and it's not really that simple; be careful about comparing apples and oranges. As we said, we have a very broad geographical footprint, and we also have a whole ecosystem of services running alongside those EC2 instances, about a hundred services now, going from very basic stuff, compute, storage, network, all the way up to IoT, machine learning, Elasticsearch, all that cool stuff. And inside a region, as Julien pointed out, we have multiple Availability Zones, allowing you to build highly available workloads and synchronize your data across those different Availability Zones.

So let's look at those instance types. The naming scheme is pretty easy: the family, the generation, and then the size, just like t-shirts. We made it pretty simple, except that sometimes the size goes really, really big, like this one; at that size you may want to buy sheets instead of a shirt. Toga party, something like that.

We have GPUs in some of the instance families, the G3 family and the P2 family. The P2 is quite exceptional: you get 16 GPUs, NVIDIA Quadros if I remember correctly, with about 12 gigs of RAM on each of those GPUs, so that leaves you quite a few possibilities to compute a lot of stuff. If you're looking at memory-optimized instances, there's the R family, and the X family, the biggest one we have so far: 128 cores and four terabytes of RAM, with a 25-gigabit network interface, again proprietary stuff we built below this layer. And we're extending it soon to 16 terabytes of RAM, so if you're running in-memory databases or caches, this is probably one of the sweetest instances to run your stuff on.

The smallest instance is one core and half a gig of RAM, and as I said, the biggest is 128 cores and four terabytes of RAM. Within those families of instances, you'll have to choose something that fits your needs, and ideally, one of the things we'll try to highlight with Julien across this presentation is that size doesn't matter most of the time. Well, I'm just leaning on the urban legend here: size doesn't matter, right?

So, from the biggest instance sizes all the way to the broadest family, the T2 family, you can see the CPUs here are all Intel Xeon, and on those Intel CPUs you can tap into a lot of things: instruction sets, P-state and C-state control, a lot of things are available to you to tweak your application.

Today, Julien will be running a few things, which he'll tell you about, on the I3 family, the C4 family and the X1 family. Let's dive a little deeper into the X1: it uses Intel E7 Haswell processors, and the 25-gigabit network interface is quite sweet. The C4 instance, the second family Julien will be using, has Haswell processors at 2.9 GHz instead of the usual 2.3 maximum. That means we have custom CPUs: Intel is one of the partners we work with, we buy a lot of CPUs from them, and I mean a lot, so at some point we wanted a market differentiator. We wanted the same CPU everybody else can have, but at a higher frequency, and that's about 30% more performance than regular CPUs, and it's only available on AWS.

[Audience question] Yep. It's four sockets of 32.
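The naming scheme Nicolas described, family plus generation plus size, can be pulled apart mechanically. A small sketch, assuming the simple `family + generation . size` shape of the names used in this talk, like `c4.8xlarge`:

```python
import re

# An instance type name of the era discussed here is family letters plus
# generation digits, a dot, then the size:
# "c4.8xlarge" -> family "c", generation "4", size "8xlarge".
TYPE_RE = re.compile(r"^([a-z]+)([0-9]+)\.([0-9a-z]+)$")

def parse_instance_type(name):
    """Split an instance type name into (family, generation, size)."""
    m = TYPE_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized instance type: {name}")
    return m.groups()

print(parse_instance_type("c4.8xlarge"))   # ('c', '4', '8xlarge')
print(parse_instance_type("x1.32xlarge"))  # ('x', '1', '32xlarge')
```

Note that later instance families added suffix letters after the generation (names like `c5n.18xlarge`), so a real parser would need a looser pattern than this sketch.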
[Audience question] Yes, you do. Yeah, actually, the X1 is a multiprocessor, NUMA architecture. Each socket has dedicated memory, and that's how you get to four terabytes and amounts like that. And if the OS supports it, we can migrate frequently accessed pages to the closest CPU: if CPU 1 is accessing memory attached to CPU 0 quite a lot, those pages can be migrated closer, but that requires OS support. Thank you, Julien.

The last family I want to talk about is, oops, sorry, the I3 family. I3 stands for I/O, and we mean a ton of I/O with this family of instances. This is the newest generation of instances, using NVMe storage, so quite fast: you can get up to 3.3 million IOPS. 3.3 million IOPS, this is fast, like, really. And same thing, the 25-gigabit ENI, the Elastic Network Interface, is available on this instance too, with half a terabyte of RAM and 64 cores. Again a custom CPU on this one as well; lots of promising performance from this machine.

So, we have the compute and the memory; now we need to talk about the storage. That's what comes next, and it's where we'll store and instantiate our AMI as well. We have two main families of storage for EBS volumes, EBS being the Elastic Block Store: classical magnetic hard drives and SSD drives, and we do have a bit of an advantage there. For the magnetic hard drives, we're looking more at throughput: 250 megabytes per second on the cold drives, and 500 megabytes per second on the throughput-optimized ones. Versus the SSD drives.
There we have two families: General Purpose and Provisioned IOPS. General Purpose can burst up to 10,000 IOPS, and you can stripe two, three, four of them together to go all the way up to 40,000 IOPS. Or, for your databases that require a constant level of I/O, there are Provisioned IOPS volumes, which deliver a steady 20,000 IOPS, and again you can stripe multiple of them to get up to 40,000 IOPS. And this can safely be RAID 0, because behind each and every block we present for those types of storage there are multiple physical blocks, so that if we lose a hard drive, you don't lose your data.

One of the questions I get asked often is: how do you guys destroy the hard drives? Do you have a special process, or do you throw them in the back of the data center and move somewhere else? Well, we do have an actual process, from the US Department of Defense. It's a three-step process, pretty easy: first, demagnetize; second, drill holes at regular intervals, just in case; and third, again just in case, we shred them. Then you can grow new hard drives, or eventually separate the metal parts from the rest.

This EBS storage is something you have to pay for, but there's another option that is free. Remember the 3.3 million IOPS on the I3 family? Those come from the drives physically attached to the hypervisor, the instance store. For the I3 family, that's again 3.3 million IOPS and up to 15.2 terabytes of storage, but those drives are free. Good point, they're really performant, excellent point. But the storage is not persistent. How come?
There's no such thing as a free lunch; that's the saying I grew up with, so there must be a catch. Well, when you start your instance, it lands on a hypervisor, and if you stop and then start it again, you have more chances of winning the lottery than landing on the same hypervisor. For security and, of course, privacy reasons, we won't copy the data on those local drives from one hypervisor to another, so you will lose that data. However, if you can cope with that, knowing this storage can be used for temporary files, file transformation, maybe video, maybe other cool stuff I don't know about, it is really, really fast. One of our customers, Netflix, you may have heard of them, is using more than a hundred thousand instances on AWS, and they're moving away from EBS drives because of the cost, and because of the performance of instance storage compared to EBS. Instance storage, again, is really fast, you'll see that in a minute, and it's free; you just have to work within those constraints. So this is Julien's part.

Okay, thanks for all this information on instances. They have very different specs, and we want to know how fast they are on real-life workloads. Benchmarking is awesome; synthetic benchmarks can be useful up to a point, but at the end of the day you want to run a real workload and see what happens. So that's what we're going to do. I've picked the largest C4, the largest X1 and the largest I3, here are the specs again and the setup, and we're going to build the world on FreeBSD and see what's what. I'm using 11.1-RELEASE, which is the AMI available right now in the Marketplace.
And I think buildworld is faster now with the latest versions, but the AMI is still 11.1. I wanted to run a test anybody can replay in five minutes or so. The C4, as you can see, has a bit of memory and a few cores, with network storage, but I'm using Provisioned IOPS, so I should have a reliable I/O level there, and I'm using UFS as the file system. The X1 has a ton of memory, a ton of cores, and an instance store, so I've got about four terabytes of local SSD to build on. And my I3 has quite a bit of memory too, a few cores, and that new generation of SSDs called NVMe. Since I have eight volumes, I figured I might give ZFS a try. I'm madly in love with ZFS, so that's my excuse for trying it. So we're going to build and see what happens. I also ran those numbers with a RAM disk, and we'll see what happens there, it's interesting as well, and we'll talk about the hourly price of each of those instances.

Just a few more details before we actually do this. Once again, I'm building on those three instance types. For the X1, remember, I've got two local SSDs, so I'm using one for /usr/src and one for /usr/obj, and I'm going to use all available cores. On the C4, I'm putting both directories on the same network volume with the 10K provisioned IOPS, the simplest way to do it, and again I'm using all cores. And for the I3, I'm creating two ZFS pools, for src and obj, and using all cores. So let's do the setup real quick, launch the benchmarks, and then Nicolas will go and explain how to build the AMIs.

All right, hold on one second, Julien, just for the thing to refresh. I want to see your terminal. Show me your terminal. No, no, not the region. There we go. I'm running my three instances there, and I'm going to SSH to each of them, which should already be the case, right?
So here's my C4, here's my X1, here's my I3. Okay, in the interest of time, I've got all the commands ready, but I don't think you're going to learn anything here; most likely you're going to fix my commands. For the C4, I'm just doing this: extracting the sources, then going and building world on 36 cores. So we can just go and do this, let me just make sure this is the right instance. Yeah, that should be quite fast.

On the X1, I can see my two instance store volumes here, xbd1 and xbd2, okay? So I'm just going to format them and mount them. That should be fast. That's the X1, right? Okay. And yeah, go, C4. And now I can do pretty much the same thing: extract the sources and build. And maybe you want to see that thing actually happening, right? Of course. Okay, so the C4 is starting to build.

Then on the I3, same thing. I can see my volumes here, my eight NVMe volumes, here they are, and I'm going to quickly build my pools. All right, so the X1 is building too. And same thing here: I can see my two pools, we're ready to go, extract the sources and build. So this is going to run for some minutes, and I just want to make sure this one is actually starting before handing the mic back. Yeah, they're on their way, and this one is starting too. Okay, so all three instances are building.

Let me show you those specs once again, so that you can decide which one you think is going to be fastest. And don't lie to yourself: pick one and don't change your mind. Okay, who's going for the C4? Yeah, come on, just go ahead. I mean, it's going to be closer than you think anyway.
Okay, who goes for the X1? Who goes for the I3? So you guys don't believe too much in the C4, and the rest is pretty much split between X1 and I3. But one brave soul said the C4 would win. There's always a brave soul, and we need brave souls in this silly world. All right, so it's going. Let's switch back to the slides, and we can check the results at the end. And Nicolas is going into the process of building BSD, OpenBSD, AMIs. I'm a FreeBSD guy, as you understood, but you know, we're brothers, right? So, your turn.

All right, thank you, Julien. So, while Julien's buildworld is going on on all those instances, let's talk about building BSD AMIs. I think Laurent is in the back of the room right there, raise your hand. He gave a very nice presentation yesterday on how to build stuff with Consul and OpenBSD, and part of the stuff he uses could probably be taken from the Marketplace; there's a lot of things available there. But what I want to talk about today is using a few tools other than Consul, or maybe in complement to Consul: Packer, for example, which is from the same company; maybe some other tools like the CLI, the command line interface, or the shell CLI, which I really, really like about AWS; or other tools for other OSes, to build your own AMI.

There's one template that we've shared to build your own CI/CD pipeline, and the idea behind what I want to show you is how we can take some of the stuff we do manually on OpenBSD, bring it up to CI/CD, speed some things up, maybe check some more things, and maybe use managed services.
So for those of you who don't know what manage services are It's just same thing as what you're doing with the hands, but with no hands, right? It's usually cheaper. It's usually as secure if not more secure and It's in a Payment model in pay as you go So if you consume it just like turning on the light in a room You pay for the lights if you turn off the lights You don't pay for the light anymore and that's the idea behind most of the managed services on AWS So we're going to build an open BSD AMI factory We're going to have a host which already runs open BSD and has about 12 gigs available So some room for the AMI that we're going to create plus about four gigs of temporary files, right? Sounds about right, Laurent Yeah We're also going to use the create AMI script from Antoine who basically Brings everything together on a local file system and then with a little bit of magic Pushes it to a storage service on AWS called S3 again There will not be any S4 S3 stands for simple storage service. 
It's one of the oldest AWS services, I believe it's 12 or 13 years old, and in the Ireland region it's one of the coolest services I use. It's about 2.2 cents per gig, and it can also trigger notifications upon the arrival of a file. So if something happens, poof, I can use this notification to run some code, maybe modify my infrastructure, again just as Laurent showed you in his presentation.

So here's what I want to do: I commit my code, and that eventually triggers a service called Lambda. Lambda is a managed container service that runs your code, in Java, JavaScript, Python or C#, upon notification. A run lasts between a hundred milliseconds and five minutes, and the first million executions of your code are free, forever; the second million costs 20 cents. Pretty cheap, right? Here I just need it to run for maybe a split second, to notify my OpenBSD host to create a new AMI that includes the code I committed, and then eventually notify me saying, hey, a new AMI is ready.

Maybe you can use that with something called CodePipeline. If you know Jenkins, it's about the same thing, but managed, and very much oriented toward the AWS developer services and APIs. Once we have this notification, CodePipeline can trigger another service, sorry, many services, starting with CloudFormation. CloudFormation is one of my favorite services, again just like Lambda and S3. CloudFormation basically allows you to describe your infrastructure using either JSON or YAML, whichever you prefer. Just like tabs versus spaces, something like that; we won't do that today,
I promise. So CloudFormation will take all this information, scan it, identify the resources that need to be created first, like network and security stuff, and then the resources it can create in parallel, and so you can create your entire infrastructure really fast. This can be used for many things: I want to deploy my application in UAT, I want to deploy my DR, all of that. And every time you create something, it creates a stack, which can later be updated, with different behaviors. The main advantage of CloudFormation is that it's idempotent: it creates the very same thing every time, something we humans are usually not as good at as services are.

So once we have this done, CloudFormation deploys the application in UAT, and then you can run some tests. Some of the things you can run: security and compliance tests, and we have a service for that called Inspector. If you know Nessus, this is a managed kind of Nessus. It ships with different books of tests that have been created, and some of them are quite interesting, PCI DSS compliance for example. Then maybe you want to do load tests; I really like the name of this tool, it's called Bees with Machine Guns, a tool from News Corp, if you know them, a pretty cool company. Maybe you want to test load and security at the same time, to know whether your application behaves the same way under full load, or under more than the expected load. And then maybe some functional tests: is it still working, do I need manual intervention, or can I do it automatically?
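To make the CloudFormation part concrete: a template is just a JSON or YAML document with a `Resources` section. Here is a minimal, hedged sketch of the shape, built as a Python dict; the AMI ID and key pair name are placeholders, not real resources, and we only produce the template body, we don't actually create a stack:

```python
import json

# Minimal CloudFormation template: one EC2 instance. ImageId and KeyName
# are placeholders; creating the stack would go through the console, the
# CLI, or an SDK call, which we deliberately don't make here.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Hypothetical stack launching one BSD instance",
    "Resources": {
        "BsdInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-00000000",   # placeholder AMI ID
                "InstanceType": "t2.micro",
                "KeyName": "my-key",         # placeholder key pair name
            },
        },
    },
}

body = json.dumps(template, indent=2)
print(body)
```

The dependency scanning Nicolas mentions comes from references between resources: when one resource's properties reference another, CloudFormation knows to create the referenced one first, and everything without dependencies can go in parallel.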
So with that, you have some results, and you can either feed them back to the developer, hey, you missed something here, or maybe the ratio of comments to code isn't good enough; or eventually things go well and you move on from UAT to production. That's the goal, and then you can use blue-green deployment methods, for example: blue is the existing environment, green is the new one, and you switch from one to the other without interrupting the customer's experience. That's one of the goals: I want my stuff to keep working.

So, that said, I think it's this slide, yes, here's where I start doing some demos. Did I sacrifice enough chickens today so that my demos run well? Okay, so let's move over here and SSH to my OpenBSD host. As you can see, this is 6.1. Sorry, ssh -A. There's a little bit of stuff going on here, I haven't cleaned up since this morning, an hour ago actually. Just-in-time delivery, absolutely.

I could do all of this automatically, but I want you guys to see how things work. Do I have this open in Atom already? Sorry about this. No, this is yours. There's a lot of lovely information we want you to see. There we go. This is the stuff I'll be loading: I'll export some variables, then set a mirror. Whoops, I'm showing my keys, I don't like that. There we go. I'm going to set a mirror for Ireland, as my machine is running in the Ireland region, then add some cool stuff, clone my repository, make some modifications somewhere here, and then generate the AMI. So let's cut and paste. Please don't take a photo of my keys, that would help. Okay, let's remove those keys real fast. So let's run this.
I'm cloning some stuff, getting the script, and creating the AMI. You've probably seen this run before: it creates the storage, creates everything I asked for, and once this is done, I'll notify the rest of my pipeline via a service called SNS, the Simple Notification Service. It can send many types of notifications; the one I'm going to use signals the end of a task to Lambda, so that Lambda can run the rest.

So as this is building, and this is something none of you have seen before, right, except my keys, there's one thing I want to draw your attention to: this process works, it's a great process, however it takes a while. It takes a while because some of the tools we're using are not up to date, and some of the drivers we're using are maybe not giving their best. This is one of the areas where we'd like to ask you for some help, to help us make it better on AWS; there's a lot that can be done, and we have a slide later on with some of the things we need help on for FreeBSD as well. And yeah, there are a lot of people running NetBSD on AWS in Australia, and in the US, in Europe, in Canada as well, and we're starting to see a lot of OpenBSD, thanks to Antoine and Reyk, yes, that helped us.

All right, so this is being built; let me go back to the slides real quick. There we go, this is it. We're at this stage right here: we're pushing the notification here and building this. Once this is done, the notification goes here, CodePipeline triggers CloudFormation and deploys the application. CloudFormation is quite easy, actually, once you get used to it, but it's only for AWS, one of the services dedicated to AWS. However, if you want something more platform-agnostic,
You guys have probably heard of a tool called Terraform, right? And this is pretty cool, because you use one DSL and then you plug whatever you want behind it and run about the same thing again, apples and oranges, in those different environments. I've seen a lot of customers doing this with VMware, because, you know, it was already there, and you bought those very costly licenses, so maybe you need to use them at some point. And so yeah, by API interactions, by CloudFormation, you can do stuff inside of AWS or outside of AWS.

I could be talking about Chef, maybe, or Puppet or Ansible or Salt, where once your application is deployed, you have different configurations between UAT and production. So maybe you put everything that is most stable, quote-unquote, the most non-moving parts, into your AMI, and the moving parts you can add later on, once your AMI has been baked and used to deploy and create new instances in your environments. And maybe you modify the configuration then, and those configuration management tools are really good at doing this at a large scale. Because let's face it, I'm doing it with a few instances here, but maybe you could do it for a hundred, a thousand, a hundred thousand instances — maybe at the scale of some of our largest customers, right? Yep. Yep, you can do that, absolutely.
That's a great question. Again, most of the stuff that I'm doing is just a one-shot thing, to show you how it would run the first time, but later on you will have to maintain more AMIs, maybe more versions of your application. And think about it: one AMI per version of your application. The boot time of a machine is what, one to three minutes for most of the Unixes? Quite fast, so you can boot that and deploy it with the matching template — one version of your application equals one template. So you have a template and an AMI, and you can deploy that very easily. And each and every one of the AMIs that you create has a unique ID — it's ami-dash-something-something-something — hard to remember most of the time. So you'd have to build some tooling, maybe with automation, with the CLI, with some scripts, to have some kind of management of those AMIs. And also, good point, you pay for the storage. So as you create more and more AMIs, you may want to have some automated way of recreating those AMIs very quickly, and, you know, make the decision for cost as well, because storage or image generation time is going to take more or less money. So you have to pick something that is tailored to your needs. Okay? Does that make sense?

All right. So while this image is building — it's taking a little bit of time, a little bit too much time, actually; this is why I was asking for some help. Right now it's about 20 to 25 minutes to build an OpenBSD AMI. It's still reasonable, and with automation it's pretty good, but we can make it faster. We can make it a lot faster, and for that, drivers and tools are your best friends. Back to what I was saying earlier: once you've committed, you bake the AMI, you notify your teams, and CodePipeline deploys in the UAT environment using this new AMI. You test your application, and once everything is satisfactory, you move on to the next stage. Yep. This is what we're doing. This is what we're doing.
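The AMI-bookkeeping problem mentioned above can be handled with a few lines of scripting. A toy sketch: keep a "version ami-id" manifest and pick the entry with the highest version. The pairs below are made up; real ones would come from `aws ec2 describe-images` or your pipeline's build records.

```shell
#!/bin/sh
# Toy AMI manifest: one "version ami-id" line per build; pick the AMI
# belonging to the highest version. The pairs below are made up --
# real ones would come from 'aws ec2 describe-images' or pipeline logs.
latest=$(sort -k1,1V <<'EOF' | tail -n 1 | awk '{print $2}'
1.0.0 ami-0aaa111
1.2.0 ami-0ccc333
1.1.0 ami-0bbb222
EOF
)
echo "latest AMI: $latest"
```

`sort -V` does a version-aware sort (so 1.10.0 would sort after 1.9.0, which a plain lexicographic sort gets wrong); it is available in GNU sort and recent BSD sorts.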
Maybe Laurent's technique was a little bit faster than mine. Oh, okay. From your experience, how long does it take to build an AMI from Vagrant with Packer? Yeah, sorry. So with different tools, see, we can split the build time in half, and I'm sure we can go a lot faster — I'm really sure. So different techniques, maybe different results. And you don't have to pick one or the other. I mean, most of the time you want to be on the stable OS and you just want to rebuild the AMI and add extra stuff, and when there's a new version coming out, then yeah, maybe you want to rebuild completely from scratch. So it's a combination, right? It's a combination, and most of the time you're just going to be deploying your app anyway on the latest AMI that you have, so that's going to be really fast. So it could be: just deploy the app; or redo what you were doing, starting from a stable OS, and rebuild my AMI; or rebuild it completely. All three make sense at some point in your development process.

All right, so the takeaways from this: DevOps is for AMIs, but it's also maybe for containers — you've probably seen the process resembling some other stuff, like maybe Docker and things like that. Try to use services instead of servers, right, so as to free up more time to experiment with more things, more services. This is clearly going towards DevOps, I know, but this is the most agile way to use AWS. Security, again, is something very important, when you're granting those services access to different parts.
We have a service called IAM that I didn't show — sorry, lack of time — Identity and Access Management, which handles users, groups, policies, and roles, which can be assumed by different services or different resources to talk to each other. And along the way, when you're going to build the CI/CD pipeline, you're going to have to use roles, and make sure that you use the least amount of privileges for the right amount of actions.

Okay, and then last but not least, one of the advantages is clearly to pay by the usage of what you need. One of the services that I didn't show you is called CodeBuild, which can build your code; you pay by the time of execution and the number of builds. So quite interesting as well: instead of maintaining everything yourself, managed services can help you with that. With that, I'll give you the remote.

Before looking at the benchmark results, this is how you can help. So if you're involved in the FreeBSD — sorry, the OpenBSD community, then helping us improve the speed of those scripts is definitely top of the list, so please get in touch. And if you made the right choice in your life and you're actually using FreeBSD, then — oh, it is the FreeBSD room here, I told you. Yeah. Yeah, Colin is our hero. So yeah, thank you so much for the hard work on building those AMIs. He's part of the FreeBSD core team, as you know, and he needs your help on testing FreeBSD on AWS, and your help on writing documentation, which is always so important and sometimes, you know, the most difficult part of open source. So please help out, and any help that you can provide on, you know, having one-click instant everything for FreeBSD would be very nice. Today we have the AMIs, but we would love to have proper packages, proper AMIs ready to go, you know, with WordPress and whatever people like to run on FreeBSD.
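To make the least-privilege point from a moment ago concrete: an IAM role carries a policy document granting only the actions a pipeline stage actually needs. A minimal sketch — the bucket name is a placeholder, and a real pipeline role would need a few more statements:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadPipelineArtifactsOnly",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-pipeline-artifacts/*"
    }
  ]
}
```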
Okay, so there's plenty of ways. Yeah, everybody loves WordPress. So anything that you can do there would be much appreciated, right? So he's there, talk to him, that's the email address, flood him — he needs your help, and, you know, we want to see FreeBSD much more on AWS.

Okay, let's look at the benchmark results. All right, so let's just look at the numbers. I ran those tests again yesterday. C4 is 11 minutes 42 seconds, right? So keep that one in mind. This one is X1: 11:38, and now all the guys say, yeah, I won, I knew the X1 was faster. And i3 is under 11 minutes, right? And it's fun, because it's pretty much, you know, exactly the numbers I got yesterday. So this is what we get, right? So i3 wins.

Okay, and that goes to show a few things. NVMe storage is just insane, right? I knew, you know, I knew it was going to be fast, but it's blazingly fast. And you would think — my guess when I did this test was that X1 was going to destroy everything, right? Because building is all about CPU, you know — CPU, blood and fire and flames and skulls, and it's just, you know, the biggest, baddest CPU wins. But no. And my guess — I actually spent a few hours looking at this build process all over again — is that, and don't take offense in any way, the FreeBSD build process is just not parallel enough to actually leverage those 128 CPUs. You know, there are lots of sequential steps that are just running on a single core, and you waste a lot of time doing that. I'm not sure it can be helped, but you can actually see that most of the time you just don't see parallelism on a lot of steps, and I think, you know, that's where you would win — yeah, that's where you would gain a lot of speed. Yeah, but that's exactly — that's exactly my point.
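The parallelism point can be illustrated with a toy shell example: independent steps fan out as background jobs and are joined with `wait`, which is roughly what `make -jN` does for a build. Sequential steps, by contrast, leave the extra cores idle no matter how big the instance is.

```shell
#!/bin/sh
# Toy illustration of build parallelism: four independent "compile steps"
# run as background jobs and are joined with 'wait'. A build dominated by
# sequential steps cannot fan out like this, however many cores you have.
results=$(
  for step in a b c d; do
    echo "built $step" &
  done
  wait
)
echo "$results" | sort
```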
We're not using them, you know. No, no — the thing is, I don't know what the maximum number is, but, you know, we probably end up using, at any given time, maybe 40 or 50 cores in parallel, but we never go as high as 128, right? Because, you know — yeah, please.

So I actually ran these benchmarks this morning, because I knew there were going to be tests here. Actually, the X1, if you only run with 64-way parallelism, takes 10 minutes and 39 seconds. We actually have issues with kernel locking; we're just spending too much time on contention. There's work in head which improves that: with a head kernel compiling 11.1, we can do it in eight minutes and 31 seconds. So there's definitely progress happening there, but it's a scalability issue in the kernel, not just with the build process.

So that's what I referred to earlier. I mean, I'm using the 11.1 release because it's the official AMI right now, but yeah, it's going to get faster. So, you know, it's pretty interesting to see that it's not the biggest instance that wins, actually. So I ran those same tests, exact same parameters, on a RAM disk, and this is what I get. I have a minor improvement on C4, I have a small improvement on X1, and — I ran this repeatedly — I actually get slower with the RAM disk on i3. And my only conclusion here is that the filesystem is just brilliant. But that's probably my fault for today. And the last thing is: how much do these cost? Right? You know, we pay per hour, and soon we're going to be paying per second. So here are the prices, right?

Yeah, so again, it goes to show one thing: performance is very nice, but at the end of the day, even if you're going to use i3, would you be willing to pay —
Yeah, four to five times the hourly price, just to gain a few seconds? I don't think so. And that's the advice we give to customers all the time, again and again and again — and Laurent will agree with that — when they ask us: I've got this workload, what instance family, what instance size should I pick? The only reasonable answer is: please run your benchmarks. Please run the actual application and figure it out. Okay? And then you get that performance level, and you decide how much you want to pay for it. So if absolute speed matters — and maybe not for building, right, let's agree on this; it could be another application, you could say, yeah, every second counts, okay, I'm paying that premium — but I guess the right price point here, for this use case, would be C4, right? So run your benchmarks. Again, you know, synthetic benchmarks are nice, but the real testing should happen with the real workload, and then, you know, you can see what happens. Okay?

As a conclusion: we talked about BSD today, but actually, you know, this is just the OS, right? It's important, but it's just the OS, and on top of that all of our customers, all our users, run a crazy amount of open source, right? From databases to NoSQL to Hadoop to, yeah, Puppet and Chef and Jenkins and all the CI/CD tools, et cetera. And actually, all of these, one way or the other, work really well on AWS. Some of our services are even based on those pieces of technology, like Amazon RDS for relational databases, where you can pick from MySQL and PostgreSQL, MariaDB, and so on. So, you know, all of those, one way or the other, we help you run them on your OS. Okay. And let's face it, yes, sometimes it's running Linux, right? But we help BSD run better on AWS, and we don't just stop there, right?
We really want to have as many open source projects as possible running very well on AWS. So keep that in mind, and, you know, feel free to ask questions later on, or get us on Twitter, right? And pretty much that's my conclusion. So thank you very much for listening, and, you know, we're really looking forward to having more FreeBSD, and maybe a little bit of OpenBSD and NetBSD, running as well on AWS. Thanks again, and if you have questions, you know, we'll hang around, so please come up. Thank you.