Thank you, John. Good morning, everyone, and thanks for being here even though the keynote ran a bit over time. We are here from Deutsche Telekom's T-Systems to talk today about image build as a service: why does it make sense to build your own images? Before we start, a few words about ourselves.

Hi, welcome everybody. My name is Daniela Ebert. I have worked for T-Systems since 2003 in different areas; I worked as an AIX engineer before, and I am now focusing on Linux and on building images. I joined the Open Telekom Cloud team two years ago and am now fully focused on our OpenStack cloud and on building the images for it.

Welcome, my name is Kurt Garloff. I have a bit of a history in the open source world and have been doing Linux for some time. I started as a kernel engineer, working with SUSE for many years, where I did some engineering and engineering leadership. OpenStack is something I have been doing since 2011/2012; San Francisco was my first summit, and for some reason Lauren put me up on stage for a keynote, which was kind of intimidating at the time. The community has grown significantly since then, and it is great to be part of it. I run the architects and community team inside the Open Telekom Cloud project, and I was also involved in the image factory, so I will talk a bit about that.

My name is Sebastian Wenner. I am also an architect for the Open Telekom Cloud. I have been doing open source since about 2000, and for roughly the last five years I have focused on cloud technologies, helping T-Systems build up the Open Telekom Cloud and being part of the image factory team.

So, what are we talking about today? An intro: why are we doing this, and what are our requirements? Why did we build this whole thing? What does the setup look like, that is, how do all the little gears fit together to do their work?
What is the workflow from a requirement to an image being available on our cloud? And we will also give an outlook: where do we want to be by the end of this year and next year?

First, some introduction, to give you a bit of context on what we are doing and why we are doing public cloud. Deutsche Telekom, or even T-Systems, might not be the most famous public cloud company, but if you look at our portfolio, with private cloud and hybrid cloud, public cloud was the missing piece of what we wanted to offer. Doing that in Germany, under German legislation and following German data protection laws, I think we are able to fill a gap. We heard in the keynote that compliance is really a big thing you need to worry about, so a secure cloud where you can run your workloads is a very important topic we are focusing on.

Simple, of course: you all know OpenStack. Compared to other solutions out there in the market, I think we have built a rather simple and easy-to-use solution. Affordable: price-wise we are even running below Amazon, making public cloud affordable. And open, with all the APIs that are around, to ease the use of our solution. All of that we offer to the market as the Open Telekom Cloud.

Don't worry, it is only one more slide of marketing and then we dive into the technical stuff you are interested in. As said, it is really a secure cloud you can trust, with no third-party administrative access, all run out of Europe with all the privacy laws protecting the data running on it. So I guess those are the reasons why we are doing it.
Maybe one last sentence: we discovered that there are only three types of cloud images a vendor can offer you. Either they are too small, they are too big, or they simply do not fit. That is what we found from the vendors, and that is the starting point for why we built this whole thing, our image factory. Kurt will now explain the reasons in more detail.

Let me give you a few more detailed reasons why we actually took the decision to produce our own public images. The first is one of the key messages we have: the security story. We need to offer a secure cloud, and obviously, in an IaaS model, the virtual machines belong to our customers; we don't touch them. But what we did want to do is help our customers create secure virtual machines. Deutsche Telekom has pretty impressive security folks; they can stop and kill projects and put up requirements so high that projects never go to market. We worked with them, and they came to us and said: please use what we learned from other platforms, and pre-configure those images to be secure and hardened. We took that input (obviously we also wanted to stay friends with our security folks) and included the lessons they had learned about how to harden images.

That is only half of the story. The other, probably most important, piece of secure public images is making sure the way you build them gives you transparency: you know exactly what is in there, and there is nothing in there that you don't have the RPM of or the sources of. So we use a transparent process: all the input that goes into an image is published. We have the image factory website, where we publish not only the qcow2 files but also all the profiles that govern the build process and the scripts that run it. So you can actually reproduce them.
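Reproducing a published image from the published inputs could look roughly like the sketch below. The URL and file names are placeholders, not the actual image factory layout, and the dry-run variable only prints the commands:

```shell
#!/bin/sh
# Sketch of reproducing a published image from the published profile
# and build script.  BASE and the file names are placeholders, not
# the real image factory paths.
RUN=echo   # dry-run for illustration; set RUN= to actually execute
BASE="https://imagefactory.example.com/openSUSE"
$RUN curl -O "$BASE/profile/config.xml"   # the published kiwi profile
$RUN curl -O "$BASE/scripts/build.sh"     # the published build script
$RUN sh build.sh config.xml               # rebuild the image locally
```

The point is not the exact commands but that every input needed to repeat the build is on the public website.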
That reproducibility adds transparency. And then, last but not least: a lot of our customers want to stay secure in one of two ways. Either they redeploy new images constantly, because that is how they manage their platform and application; then they can just take the latest image, and we provide an updated image every other week, so you can have all the latest patches without having to manually reinstall them. At the same time, you also have the package mirrors, which have all the updates mirrored in our environment, so you can also just do a patch update if that is what you prefer. We support both models.

The second group of reasons why we decided we need our own images was the fact that there are specific things we need to do on the driver side. We are sitting on an unusual hypervisor these days, I should say: most of the world has moved to KVM, but we are using Xen. That will change in the future; at some point in time we will open up to another hypervisor as well, but currently it is still mostly Xen, and there were some images that did not come with the right driver. So we also had to have the ability to inject the drivers. We did a bit of work to build RPMs that encapsulate those drivers in a clean way, so upgrades don't break them, and included them in the images. Later on we also had special flavors with high-performance networking gear that needs special drivers as well.
So then, of course, we used the same capability to inject the drivers needed to get the performance benefits those bring.

The third point, on the platform support side, is that we also have a monitoring tool. Some of the data that you want when you do monitoring is best collected from inside the image. CPU utilization you can easily measure from the outside; the hypervisor has all of that information. But if, for example, you want to know whether your file system is running full, that is something the hypervisor should not care about and should not know. So we have an agent that the customer can easily enable to get that monitoring.

And then, finally, there are a number of things we did to pre-configure those images. We want them to offer the same experience whether you use CentOS or Oracle Linux or Red Hat or Fedora or openSUSE or SLES, and we want to make sure those images behave the same way. When they come up, they all find their NTP server and their DNS resolver; the DNS we do by DHCP anyway, but at least NTP we pre-configure in the images. We have the same username everywhere, and the same way to access the image via the SSH key that is injected by the cloud through the standard cloud-init mechanism; the cloud-init pre-configuration is obviously also part of this.

So those are the reasons, and those reasons drove the requirements. I guess I covered most of them already. Those images, of course, we needed to make sure are supportable and maintainable, and that they are secure and stay secure.
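As an illustration of the pre-configuration just described (a common NTP setup, one common login user across distributions, the SSH key injected through the standard cloud-init mechanism), a cloud-config fragment could look something like this. The server names and the user name are placeholders, not the actual Open Telekom Cloud defaults:

```yaml
#cloud-config
# Illustrative fragment only -- the NTP servers and the user name are
# placeholders, not the real Open Telekom Cloud configuration.
ntp:
  servers:
    - ntp1.example.com
    - ntp2.example.com
users:
  - name: linux                  # same login user on every distribution
    sudo: "ALL=(ALL) NOPASSWD:ALL"
    lock_passwd: true            # no password login; SSH key only
    # the key itself is injected by the cloud's key-pair mechanism
```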
So we have the security hardening and security updates applied to them all the time. Also, one of the things we obviously do in building the images is make sure that all the packages we inject are integrity-checked: we check the RPM signatures, and we have a list of keys that we trust, which come from the operating system vendors and from the one repository in the Open Build Service where we build the few custom packages that we need.

Then, coming from an enterprise company, there were actually some images available from other departments. When we looked at them, we just said: those are not cloud images. They were gigabytes in size, and that is not what I expect from a cloud image. I expect a cloud image to be small, to come up very fast, and to let the customer inject whatever is needed in addition via cloud-init. That is our vision of how cloud images should be used, so we built small images where you can easily inject all the configuration and software that you need.

Transparency I talked about. And then, one of the things we delivered in a second step is a test suite we put significant effort into. All those images, after the build process has run through, get booted in a secure environment, and a number of tests are run against them. This way we ensure that if we make changes to our git tree, where we keep the configuration of the images, we don't screw things up. The image builds happen constantly, and every morning I look at the test reports to see whether something has gone wrong, whether alarms have popped up, and whether we have screwed up or something is wrong with the build system or the test system. And then, of course, we publish the images by putting them on the image factory website, with GPG signatures, so you can also
verify it is the one we built and tested.

Let's talk a bit about how the setup looks and how we are doing it. I'll hand over to Daniela.

Thank you, Kurt. Before we started, we had a look at the tools which are around, and we first decided to use the kiwi tool to help us build the cloud images. This tool simply pulls packages from the repositories and installs them into a chroot environment. At first this was sufficient for us, because it built openSUSE, SLES, CentOS, Oracle Linux, and Red Hat. But in the second step, when we wanted to build Debian images, we decided to go for a second tool, which is diskimage-builder. So we use both tools in the same way; one tool alone would not let us build all the images, but as our setup is quite modular, they work together perfectly: we use the same environment and the same scripts and just leverage the tools.

Besides these build tools, we of course have some tooling around them. All the template files and all the scripts are in a local git repository. We have an automation, basically bash scripts, that triggers the workflow. We use the OpenStack tools, like Glance, to register the images, and an Apache server at the end that publishes our results and shows what we have.

Now I come to our build architecture. We decided to set up our image build factory in a normal tenant on our public cloud. That means we are not somewhere in the back end doing some secret stuff; we are a customer on our own cloud. That is quite good, because we see some problems earlier than our customers do, which really helps. What we have is a jump host, through which we can access the whole environment, and a web server publishing our results.
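The automation mentioned above is bash; a sketch of how such a wrapper might dispatch between the two build tools follows. The profile paths, the diskimage-builder elements, and the distribution list are placeholders, and `kiwi-ng` / `disk-image-create` are the upstream command-line entry points:

```shell
#!/bin/sh
# Sketch of a dispatch wrapper in the spirit of the bash automation
# described above: one script, two build tools.  Profile paths and
# diskimage-builder elements are placeholders, not the real ones.
RUN=echo   # dry-run for illustration; set RUN= to really build
DISTRO="${1:-openSUSE}"
case "$DISTRO" in
  openSUSE|SLES|CentOS|OracleLinux|RHEL|Fedora)
      # RPM-based distributions go through kiwi, driven by an XML
      # description kept in the git repository
      $RUN kiwi-ng system build \
          --description "profiles/$DISTRO" --target-dir "out/$DISTRO"
      ;;
  Debian)
      # the Debian family is covered by diskimage-builder
      $RUN disk-image-create -o "out/$DISTRO/image" vm debian
      ;;
  *)
      echo "no build profile for: $DISTRO" >&2
      exit 1
      ;;
esac
```

Either path leaves a qcow2 file behind that the rest of the pipeline (signing, upload, registration, tests) treats identically.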
The jump host and the web server are the only publicly reachable parts. The jump host is also an SNAT instance handling all the outgoing traffic, so the other hosts, like our build hosts and data hosts, are not publicly accessible and do not have any public IPs. We have several kiwi and diskimage-builder build hosts, just to spread the load, and they talk to internal repository servers. We have mirrors, like a SUSE SMT server, a Red Hat Enterprise update infrastructure, and some apt caches. These exist for our customers, but we also use them to get the packages to build the images. After the images are built, we upload them to our object storage in the cloud, and we also have a connection to the Glance server to register them.

So let's have a look at how we build the images, at what the workflow is. As I stated before, we have a git repository which holds our template files; there we also keep the keys to sign the images later. Another input are the driver RPMs from the repository server I mentioned earlier. All this goes into kiwi or into diskimage-builder, which do the actual image building, and we do some things around that, like collecting the log files, signing them, writing the log files somewhere, and saving the previous results.

After the images are built, the next step is to upload each image and register it. We have a two-step approach: we upload it to our object storage first, and then register it as a private image. These steps are all things every customer could do; as I stated, we are a customer on our own cloud. If this was successful, there is an automated test suite.
This test suite boots a VM and starts a bunch of scripts to check whether the image is bootable in the first place, whether it is accessible via SSH, whether several configurations are done, and whether all the drivers are in; so basically, SSH scripts checking the configuration. If the tests were successful, the result is published on our web server as qcow2 files, so they are publicly available. Let's say that is our pushback to the community; you will see some links later, if you like. And we also have scripts to register the images in Glance.

That is basically all. I was thinking about bringing a demo, but I think that would be quite boring: watching a script run for 40 minutes, and then it is just a qcow2 file. So I just brought some screenshots to give you a brief overview of how it looks and feels.

The input is just a simple XML file: a kiwi XML file describing what kind of image you would like to get, what drivers you would like to have, what packages to include, and what additional software, like the Python clients, you would like in your image. This goes into kiwi. Then we start a script which simply calls kiwi, and what you see here is kiwi's output; that is not programmed by us. It fetches the repositories, so the channels, and sets up the chroot environment. That is not very exciting, unless you have an error, and we hope we have none. Some more kiwi output shows the conversion to qcow2, and then you hopefully get the success result and a log file where you can find even more details, such as which RPMs are in. I think Kurt has mentioned it already: we also store these results on the web server.
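The description file shown on the slide is plain XML; a skeletal sketch of what such a kiwi description can look like follows. All names, versions, package choices, and the repository URL are illustrative, not the actual Open Telekom Cloud profiles:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Skeletal example only; not an actual Open Telekom Cloud profile. -->
<image schemaversion="6.1" name="Example-Cloud-Image">
  <description type="system">
    <author>Image Factory</author>
    <contact>factory@example.com</contact>
    <specification>Minimal qcow2 cloud image example</specification>
  </description>
  <preferences>
    <version>1.0.0</version>
    <packagemanager>zypper</packagemanager>
    <type image="vmx" format="qcow2" filesystem="ext4" bootloader="grub2"/>
  </preferences>
  <repository type="rpm-md">
    <source path="http://mirror.example.com/distribution/repo/oss"/>
  </repository>
  <packages type="image">
    <package name="cloud-init"/>             <!-- boot-time configuration -->
    <package name="xen-tools"/>              <!-- Xen guest drivers -->
    <package name="python-openstackclient"/> <!-- client tooling -->
  </packages>
</image>
```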
So you can see what is in the image and how the build process went; that is all in the log files on the web server as well, so you know what you get.

This is not kiwi any more; this is our own automation. It is simply an upload to the object storage and a Glance registration, and as we see here, this was successful, so we can go on to the next thing.

That is the whole flow. Because the last step was successful, we see here a nice output of the checks we do. When we started there were, I think, five or six of these checks, and then we discovered some errors with the cloud-init configuration, the data sources, and some interface issues, so the tests grew and grew depending on what kinds of errors we saw. At the moment we have 30 to 40 test cases, and they are still growing. If the tests are successful, then we publish the image on the website and also on the Glance server. Maybe also interesting: we do some update tests, and we do reboot tests, just to make sure that really everything works fine.

We started, as I said, with kiwi, and the primary motivation was to have images for our launch at CeBIT 2016. There we decided to go with the most common images, so we built openSUSE, SLES, CentOS, and Oracle Linux, and I think that was it for the first step.
We went for Red Hat, Debian, and Fedora in the second step. We build these regularly. The only image we are currently not building ourselves, but which is in our cloud, is the Ubuntu image. It is provided by Canonical; due to some restrictions we are not allowed to build it on our own, so Canonical provides it to us, and any changes need to be aligned with them.

This is just a different view, after we have published the images: a screenshot of our dashboard, where you can see under the public images what kinds of images we have, from openSUSE to Debian to Fedora. We usually publish them every four weeks, unless there is some security reason to do it earlier. We would be able to do it daily, but four weeks is the regular publishing cadence.

Last but not least, we would like to give you an outlook, because this is what we currently have, but from our point of view we are not at the end; we want to improve and bring new things into our image factory. And I think that is it.

Good. Okay, so let me share some ideas that we have on how to further improve this and make it more useful for our customers, but maybe also for other cloud providers. Actually, we know about one cloud provider who is using our images as well, so we have had a first success there already. The image factory has been live now for more than a year; we used it for the launch at CeBIT in March 2016. There is a bit of technical debt that we have collected; we did not have a lot of time to design this when we did it initially. If you look at the infrastructure, there is an NFS server in there, which I don't like, and there are also some long-running VMs in there, which I would rather create on the fly and keep short-running. And then, one of the ideas that we have: we have some customers.
They come to us and ask: we want to have a specific image with these specific configurations and these specific packages pre-configured. Our standard answer is: just use cloud-init and inject them at boot time, and that is fine. But for some of them that is not enough. So one of the things we are thinking about is: okay, we could also offer the image factory as a service and allow customers to have their custom images built by it. And when I say built, it is the same methodology: not doing them once in some manual fashion, but having the profiles in place, building them nightly, and having an updated version every morning if you like. That is what we do with our public images currently, even if we only publish them every couple of weeks; we could publish them every night if you liked.

If we go down that route, some of this infrastructure would need to be improved, specifically for security reasons. Currently we have an engineering team that fully controls what goes into those images, what keys are trusted, and what software runs in there. Obviously, if we allow customers to inject configuration we don't necessarily trust, we need to isolate the build environment more than we currently do. So that is another improvement we would need to make.

Then, of course, there are additional images with additional content; we are currently discussing those, and with this mechanism in place, we can easily increase the number of images without adding too much load to our engineering team. One other thing: we have built up, in our test department, some more infrastructure to run test jobs via Jenkins. We are currently not using that; we just have simple bash scripts.
So at some point in time we want to use that infrastructure and put it all under a common pane of glass, so you have visibility from the Jenkins dashboard for all of them. I guess those are some of the improvements I had in mind; you have some more, I guess.

Yeah, I can tell a bit more about the Windows integration. At the last summit, in Barcelona, we had some great talks with the Cloudbase-Init folks, who are not here, about the work they are doing on the Windows side. Some of our Windows engineers picked up that work and did a Windows integration, so by now we also have an automated Windows build. It is standalone, so it is not yet integrated into our overall image factory framework; it is more of an island that produces our Windows images. It is automated by now, but it lacks the overall framework support that we can offer in the image factory. So one of the next steps will be to move the Windows factory, which is currently building only server images (2012, 2016, and I think we are even doing 2008, but that is not something I want to talk about), into the overall framework that we want to have there.

Also thinking ahead: as you saw in the keynotes, Kubernetes and Dockerized images are really picking up speed and getting more and more demand and weight in your daily work. So not only building images for infrastructure as a service but also for container frameworks, so that we have Docker images for whatever application we want to offer, will be one of the next projects that we are looking forward to integrating into the image factory. Anything I missed?
No, but maybe adding to that: this effort has been mostly driven out of systems engineering so far, using lots of open source tools, obviously. One of the things we are looking forward to is talking to the community, talking to you. We would be very happy if people want to join this effort and maybe see: can we contribute to this? Is this something that could be used also for images outside of our own cloud? As I said, we know of one cloud that is already using our images. We are very open to collaborating with others, making this more open, letting others reuse some of that work and build on top of it, and making this project even stronger.

Good. Yes, that's it. Questions?

Okay, so I'll just repeat the question for the benefit of everybody. The question was how we deal with old images: how do we version them, and how do we deprecate old versions? There are two sorts of images we publish. We have ones where we attach a date stamp, and those we currently keep rather long; we have not yet deleted any of them. That means if you build on and use one of those images, you can rely on it still being available a year from now. We are doing that because we have customers who want that, and that is also the reason why we don't publish daily but only every four weeks or so: otherwise that list would grow too much. Over the long term, we need a strategy to maybe not delete them, but hide them from customers so as not to confuse them; customers that hold a reference to them could still use those old images. At some point we may want to have a discussion with those customers about whether they are sure they still want to use those very old things and can still do so in a secure way. But in general, storage is cheap; that is not the real issue.
The issue is confusing customers with huge lists. The other kind of image we have is one "latest" image at all times, and that one we actually sometimes update a lot more often than every four weeks. That is a moving target: you can always reference the latest image and rely on it being a tested, well-working image, but it is then somewhere between an hour and maybe a few weeks old. So that is the way we currently do it. We are looking forward to the community-visibility blueprint being implemented; it would help us hide images without deleting them. There is indeed a blueprint in the Glance area that has been discussed for years now.

Go ahead. So the question was: he is doing the same thing, having latest images, but when you register them they get a new ID, and that kind of screws the users; the question is whether we have the same issue. Do you want to answer?

Yeah, we do have that same issue. In our docs we basically tell customers: if you want to use that latest image, don't reference it by ID; reference it by name. That is the best we have come up with so far. If you have ideas about a better solution, we would be happy to learn.

There was one more question over there. The question, just to repeat it, was about the Windows build, which is not yet integrated into the image factory, and how we manage the licensing. Did I capture that?
Yes. Licensing, especially for the Windows stuff, is never a very nice thing to have to do. The Windows images that we offer on the cloud are fully licensed. In our charging data records we collect the usage of Windows images, and as we run just as a normal customer on our own cloud, the usage of the Windows images that we produce is also counted and reported, and we have to pay for it ourselves. Microsoft does not yet offer a perfect solution for how to handle all the licensing on the cloud, and recently, with 2016, they again changed the metrics of what is measured and how. So we measure it and pay for it; at the moment there is no easier way to handle it than the charging records we collect.

Maybe one additional comment to that. Originally, for all the images, the license fees that you pay to Microsoft, or to some of the enterprise Linux vendors, were included in the per-instance price per hour. We have recently opened up a model where a customer can declare: okay, I come with a licensed version, I do my own license management, please don't charge me for that. We support that with the same images and do some metadata magic to make it secure. For Windows there are some PowerShell scripts deciding: if it is a bring-your-own-license VM, do not register it at our KMS; if it is one of ours, it will be registered there. So this is mainly handled by PowerShell scripts looking at the metadata; it is the Windows on-board utilities.

Oh, sorry, the question was what tools we are using to build the Windows images. Of course it is not kiwi, and it is not diskimage-builder; it is the on-board Windows tools that we are using, plus the assets the Cloudbase folks have on GitHub.
They have some reference implementations of how to use the Windows on-board tools to generate images, and I think they can even create qcow2 images there. So that is orchestrated in a meaningful fashion such that at the end we also end up with Windows images. It is the Cloudbase tooling, with Cloudbase-Init replacing cloud-init, plus the Windows tools to do the build.

There was another question. Okay, so the question was why we chose those two tools, and specifically whether kiwi has features that could be added to diskimage-builder so we would only need one. Actually, we started with kiwi, and the reason was that when we started, there were some images that came out of a different department. They had used the nice SUSE tool, Studio, to build them; SUSE Studio used kiwi to build images and put a nice web interface on top for the configuration. That was one of the starting points for this, so it was more a historic than a technical reason to start with kiwi. Later on, when we did a bit more research and took a deliberate decision, we left it in place, because on the SUSE side kiwi actually works better than diskimage-builder does. So what we would need from diskimage-builder, in order to drop kiwi (to be very open), is somewhat stronger support for openSUSE and SLES. Currently it is not a pain for us to use the two different tools, so it is not something we are actively looking to get rid of; both tools work fine.

So the next question was whether we are able to deploy Windows Server Core images using the Cloudbase tools, and not just, what was the other one, the standard one. At the moment we are doing just the standard, full-blown one, not the Core one. And I have to admit, we are three Linux guys here, so we are not the
Um, so If you are interested, we can just make contact and I wouldn't hand that over to to our windows folks All right, maybe last question if the if you're allowed to take it Links. Yes. I think we have the under the last page. Maybe we've just flipped. Thank you for that That's that's the last page For today. Um, here you find links to to our image factory on on the one hand where we have published Some documentation, but also you can find the images and the log files there Um, we also blocked on our telecom blog about what we've done there So you get an introduction you get also some deep insights what? What kind of modifications we've done to the images? So if you don't want to to read the the scripts in In the folders and we also have a help more or less a general help center on how to handle images on our cloud So, yeah I hope that helps and hope you enjoyed It and yeah, thank you. Thank you and have a great summit. Yeah