Okay, good afternoon. Hope you're enjoying the day so far. I don't like being behind the desk, but I don't have much of a choice. I'm here to talk to you about OpenShift at Amadeus. This is an adventure that started a couple of years back for us, in 2014 in fact. I'll talk a little about what we do, our needs and where we're at, and then about one of the aspects being worked on that we're pushing for and that is therefore of interest to us.

We're a technology company: we provide technology solutions to the travel industry, helping content providers, content operators and content distributors all link up. For instance, we take airline availability and make it available to travel agents, including online travel agents. We also focus on the traveler experience; in fact, that's central to what we do, from inspiration through search and purchase, to finding a hotel during the trip, to recovering your lost luggage after the trip. Not just the pleasant stuff.

A few technical figures: last year we handled about 566 million bookings and boarded 747 million passengers. A few years back we acquired a company called Navitaire, and that figure excludes their passengers boarded; with theirs it goes up above a billion. We host 130 airline inventories, meaning the records saying how many seats are available on a given plane at a given time are held in our systems. We handle about 50,000 end-user queries per second, and our enterprise service bus now peaks at over 500,000 queries per second; that was the figure for last year.

We have a couple of constraints in terms of the types of data objects we handle. There are reservation records (you may have heard of the PNR, the passenger name record) and other, more modern structured booking records and the like. I already mentioned the content-provider inventories: airline inventories, rail inventories and so on. These are objects which we need to handle in a consistent manner. We don't want double bookings in a hotel, for instance: you don't want to turn up at the hotel and find that someone else already has your room. So we need fairly high consistency, we need some sort of transactional mechanism, and we need low response times, because of course people don't like to wait.

So in 2014 we came to realize (there was a bit of a build-up to this) that we needed to start working on a new platform for our applications. Historically, like a lot of folks, we had thousands of physical servers running many different applications, serving many thousands of different services over different channels, some of them more traditional, or legacy.
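To make that consistency requirement concrete, here is a minimal sketch of optimistic concurrency control in Python. It is purely illustrative, a toy in-memory record rather than anything Amadeus actually runs: a booking commits only if the record has not changed since it was read, so two attempts at the last seat cannot both succeed.

```python
import threading

class SeatInventory:
    """Toy in-memory inventory record; illustrative only."""

    def __init__(self, seats_available):
        self._lock = threading.Lock()
        self.seats_available = seats_available
        self.version = 0  # bumped on every successful write

    def read(self):
        with self._lock:
            return self.seats_available, self.version

    def book(self, expected_version):
        """Compare-and-swap: commit only if nothing changed since we read."""
        with self._lock:
            if self.version != expected_version or self.seats_available == 0:
                return False  # stale read or sold out: caller re-reads and retries
            self.seats_available -= 1
            self.version += 1
            return True

inv = SeatInventory(seats_available=1)
seats, version = inv.read()
first = inv.book(version)    # takes the last seat
second = inv.book(version)   # same version: rejected, no double booking
```

Real reservation systems layer much more on top (durable storage, timeouts, compensation), but a version check of this kind is the essence of avoiding the double-booked hotel room.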
Some of those channels are host-to-host links and things like that. We were now in a position where we still want to continue growing, we want to expand, and we need to control costs. We were also operating mainly out of one data center, in Germany. For customers around the world that means you can't beat the speed of light: there is latency embedded in the physics of your placement, so we wanted to get closer to our customers. We also had requirements to install our applications on customer premises, which meant knowing how to operate remotely.

We were also seeing a lot of disruption in the travel industry model. Previously you might see look-to-book ratios of 10 to 1 on airline content; now we're looking at more like 10,000 to 1, or even 100,000 to 1. There are lots of different figures flying around, but the internet age, mobile and so on have changed the way things work an awful lot.

This basically led us to a new platform, and also to the realization that we would need to be capable of working on hybrid cloud: with public cloud providers, but also on-premise private cloud. We also realized we wanted something beyond mere infrastructure as a service. Allocating a virtual machine through code is one thing, but for the developer experience and the operator experience, in terms of simplification and decoupling what different people have to manage, we couldn't simply go for infrastructure as a service, hand developers their VMs, and make them administer those VMs and manage the software on them. We saw that we wanted something above that, and that brought us to platform as a service. We also knew we wanted something where you have a workload, you throw it at the system, and it figures out where to run things all by itself, depending on the resources available and how much your workload is going to consume.

There is also an awful lot of testing to deal with. You simply can't test all the configurations; it's not possible. And when you do a software load, you know perfectly well that different versions of different software are going to run simultaneously, and that you can't have pure atomic switches between combinations of software levels of your different components.

I won't talk too much about this slide; I think it should be known to most people now: containers, Docker, Kubernetes, OpenShift. What we developed on top is what we call Amadeus Cloud Services, our internal PaaS built mainly on OpenShift. Its existence is there to fill the gaps. A typical example: OpenShift and Kubernetes are not yet suited to running databases, so we needed to manage the data storage part as part of our PaaS and provide that to our end users. Also, when we started using OpenShift there was no monitoring included, so we developed a solution of our own. A lot of this we intend to see move to the left over time, and that's already started: there are things we were doing in ACS, our Amadeus Cloud Services, which we've got rid of because they've since been implemented in OpenShift. One example I like, because I actually coded the awful workaround back in the beginning, is that you used to have to open ports in iptables for your services to be able to reach the pods on your OpenShift nodes. Since 3.2 that is done by OpenShift automatically, so my code went the way of the bin, and I'm very happy with that. The more that happens, the happier I am.

Today we have several projects in the pipe.
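That "throw a workload at the system" goal is essentially bin-packing. Here is a toy first-fit sketch of the idea, with invented names and CPU as the only resource; the real Kubernetes scheduler filters and scores nodes on many more dimensions (memory, affinity, taints and so on).

```python
def first_fit(pods, nodes):
    """Assign each pod to the first node with enough free CPU (millicores).
    Largest pods are placed first, a common bin-packing heuristic."""
    free = dict(nodes)                 # node -> free CPU
    placement = {}
    for pod, cpu in sorted(pods.items(), key=lambda kv: -kv[1]):
        for node, avail in free.items():
            if avail >= cpu:
                placement[pod] = node
                free[node] = avail - cpu
                break
        else:
            placement[pod] = None      # no node fits: pod stays pending
    return placement

placement = first_fit({"web": 500, "db": 900, "cache": 200},
                      {"node1": 1000, "node2": 1000})
# db lands on node1; web and cache share node2
```

The point is the division of labor: developers declare what a workload needs, and the platform, not a human, decides where it runs.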
We have a hotel CRS, a customer reservation system, which will be serving out of the United States soon. And we have in production Amadeus Airline Cloud Availability, which is running on several regions on public cloud providers throughout the world: we have deployments in Europe, in Asia and in the United States. We're really happy to see that serving low-latency, well, fairly low-latency requests for airline availability, on the order of 200 milliseconds or less. That is across four clusters in Asia, Europe and the United States, and we actually have a lot more of them running. They're very happily serving several thousand transactions per second, that's working really well, and our operations folks are very happy with it.

Now, coming to where we want to go, there are lots of OpenShift evolutions we're interested in. One that I will focus on a little more is StatefulSets, which Clayton already mentioned this morning. We're also interested in following the monitoring work, because as I mentioned we implemented a monitoring stack of our own, but we're interested in seeing where OpenShift is going and possibly moving to that. For fairly obvious reasons, given that we already have a number of clusters worldwide running Amadeus Airline Cloud Availability, we're interested in cluster federation, to give a single view and a single point of entry for administering multiple clusters. Self-hosting is something I personally am interested in: there's a line of thought where Kubernetes can in fact run Kubernetes inside itself, so your masters are containers scheduled by the other masters, and so on. I'm interested in that mostly because I like the brain-surgery aspect.

We're also interested in the more sophisticated scheduling aspects: rescheduling, for instance, when the system sees that things are interfering with each other in bad ways, maybe a node is limited, or maybe you have a lot of fragmentation and you want to undo that fragmentation across the system.

One aspect we're very much interested in, in an immediate way, is third-party resources and aggregated API servers. We see quite a lot of use cases for third-party resources. That's basically the idea that you can extend OpenShift and Kubernetes to include and integrate functionality of your own. For instance, a guy I work with has been working on a Redis cluster operator, a type of object so that a Redis cluster can be managed more or less natively inside OpenShift. To date we've done that using ConfigMaps, but we'd actually like to use third-party resources for a more fluid integration, and we have a couple of other use cases for that. The next level up is to have it available through a service catalog. Those of you who were paying attention may have noticed Paul Morie talking about the service catalog this morning; he's one of the leads on that.

A final point: security, always very important. One of the things that we've had to deal with is encryption of secrets.
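Whatever the resource is stored as, ConfigMaps today or third-party resources tomorrow, the operator idea mentioned above boils down to a reconcile loop: compare the desired spec with observed state and emit the actions needed to converge. A hypothetical sketch, with resource fields and action names invented for illustration rather than taken from any real API:

```python
def reconcile(desired, observed):
    """One pass of an operator-style control loop for a hypothetical
    RedisCluster resource: diff desired vs. observed, emit actions."""
    actions = []
    want, have = desired["replicas"], observed["replicas"]
    if want > have:
        actions.append("scale-up:+%d" % (want - have))
    elif want < have:
        actions.append("scale-down:-%d" % (have - want))
    if desired["version"] != observed["version"]:
        actions.append("upgrade:%s->%s" % (observed["version"], desired["version"]))
    for node in observed.get("failed_nodes", []):
        actions.append("replace-node:" + node)
    return actions

actions = reconcile({"replicas": 5, "version": "3.2"},
                    {"replicas": 3, "version": "3.0", "failed_nodes": ["redis-1"]})
```

In a real controller this function would run on every watch event, and the emitted actions would become API calls against the cluster.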
At the moment we have a solution for that, but it's not a well-integrated one, so we're following closely the progress and the design work on encryption at rest and the other secrets-related work going on at the moment. Fine-grained network access control interests us as well.

Now I'll talk a little more about StatefulSets. The reason we're interested in StatefulSets in particular is that we would like to run data stores in OpenShift. I already mentioned that we currently run our data stores outside OpenShift; this is one of the things ACS manages on the outside, and again something I would like to get rid of. It won't be now, but it would simplify operations an awful lot if Couchbase, for instance, could be administered straight through OpenShift: if all the actions, like scaling up a Couchbase cluster, upgrading it, or recovering a node that's keeled over, were automated and seamless through OpenShift administration. It would also help with self-servicing, since we could make it available through the service catalog, and it makes it easier to adopt future technologies if we can put everything in OpenShift, including our data stores.

One of the arguments against that might be: well, what about data stores provided by the infrastructure as a service? I'm sure most of you have seen that on Amazon or on Google there are data stores provided out of the box, and OpenStack also knows, in principle, how to run data stores and offer them as a service. Well, if they're there, go ahead and use them; I would too. I'd be interested in seeing them available through the PaaS service catalog, so that the self-servicing is uniform as well. Maybe; that's something I need to think about a bit more and talk to people about. But definitely, if the IaaS has it, why bother doing it at another level? However, not all infrastructure-as-a-service cloud providers offer the same data stores, and maybe the data store you want is not available on your target cloud provider. Then you're going to want to run it yourself, and if you try to run it directly on the infrastructure as a service, it's more complicated: you have to deal with cloud-provider specificities, and you don't have a common operational abstraction. So we think it's interesting to do it at the PaaS level. I've already mentioned inclusion in the service catalog, and you can have native mechanisms for scale-out and update built in.

As for the challenges, there's always performance, which was mentioned this morning already. Though in the end, if you are going to run on VMs, that's probably where the main performance penalty is, not the containerization. So we can consider using bare metal, OpenShift on bare metal, and we'll pay less of a penalty for running our data stores. We also have to think about cross-data-center replication. Couchbase, for instance, relies on a full mesh of connectivity between the Couchbase nodes of the different clusters in order to provide cross-data-center replication. In certain clouds that might not be too difficult; on GCE, for instance, you have a flat network.
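To see why that full-mesh requirement matters for network design, here is a small sketch that enumerates the inter-cluster links such a topology needs; cluster and node names are invented for illustration.

```python
from itertools import combinations, product

def xdcr_links(clusters):
    """Enumerate node-to-node links for full-mesh replication between
    every pair of clusters (illustrative sketch)."""
    links = []
    for a, b in combinations(sorted(clusters), 2):
        links.extend(product(clusters[a], clusters[b]))
    return links

links = xdcr_links({"eu": ["eu-0", "eu-1"],
                    "us": ["us-0", "us-1"],
                    "ap": ["ap-0", "ap-1"]})
# 3 cluster pairs x 2 x 2 nodes = 12 links
```

With three clusters of ten nodes each, that is already 300 node-to-node links, every one of which must be routable. A flat network makes that easy; clouds without one make it much harder.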
On GCE you can route from any machine to any machine in any region inside your project, but that's not the case everywhere. Then there's also the challenge of finding the right level of abstraction for all of this.

You can see that people have been interested in this for quite a while. Templates for running various data stores on Kubernetes or OpenShift have been available probably since Kubernetes was first made available. However, they typically have limitations. A few of them are single instance only; in fact, I would expect that without StatefulSets they are all necessarily single instance, or at least you need some form of sharding of your data to be in place.

The way we're going about this is that we're working closely with Red Hat, at the moment in particular on Couchbase; that's the project that's gone the furthest forward as far as I know. We've done a first phase where we can now run rack-aware Couchbase, and it also implements scale-out, meaning you can add nodes to Couchbase on the fly. Then we'll move on to a second phase whose content is still to be determined, but basically we need to consider upgrades, backups and restores, and we're interested in automated failure recovery of nodes. We're also working with other vendors to get MariaDB, PostgreSQL and Oracle into OpenShift. Oracle is actually one of the more interesting ones from my perspective, because I think there was a fair amount of resistance, but they've changed, and they're now supplying Dockerfiles and images themselves so that you can run Oracle in a container. I think they're doing this for 12.2 and above. It's not supported for production use, but it's already very useful, because you can just spin up an Oracle instance for dev and test and throw it away. It's very easy to use, and it makes Oracle very accessible to individual developers. We're seeing a great deal of interest in that area; we've actually contributed a few changes there, and we've managed to make it work on 12.1 as well and run 12.1 inside OpenShift.

The main relevant technologies here are, obviously, StatefulSets, because we need the ability to bind an identity to a pod and bind it to its storage. We also think third-party resources, or aggregated API servers, or custom controllers of some kind, are going to be relevant, in order to make the operations seamless and integrated in OpenShift; we're going to help at least a little bit on third-party resources, and one of my colleagues is going to work on that with the folks from Red Hat. Finally, the service catalog, to make this self-serviceable and available directly to the end users of the platform.

Just before I conclude, I'd like to briefly show a running Couchbase cluster. That's a bit too big, maybe. No, sorry, it's still legible if I do that. Okay. What I have here is a StatefulSet, and we're running three Couchbase data instances on my laptop. I'm also running the pillowfight application, Couchbase's demo app, which does reads and writes all over the place. Okay, that was a bit too much. The interesting thing is down here.
It's the graph of the operations going through. What that actually shows, and what I think is interesting, is that the pods, the different instances of a Couchbase cluster, need to be able to talk directly to each other, so they need to be able to find each other. That was a problem in the beginnings of OpenShift and Kubernetes, because typically you would talk through a service endpoint, and there was only one for the whole service. Here what we see is that the Couchbase instances are indeed able to talk to each other, to stay in sync, to agree on who has what data and things like that. This is part of what PetSets give us: an identity through which each pod of the set is accessible. And it happens to work. Just for the record, this is a three-node, well, three-compute-node OpenShift cluster running virtualized on my laptop, with one infra node and just one master to keep things a little light. And you can see they're working away a bit.

Right, just to finish off: we're working on a big change in the way we run our applications. We've already got some nice successes. There's still quite a way to go; as you can imagine, with 5,000 services and a couple of thousand physical servers in the legacy system, we have a way to go to migrate all our applications onto OpenShift, but we've already got some running, and the good thing is that OpenShift keeps improving with every release.
We're happy to be contributing; I'd love to contribute more, but that's always a matter of resources and time. So that's it. We'd like to continue the adventure.

All right then, are there any questions for Eric while we have him here?

Question: are you planning to use Helm or charts, that kind of thing, for the packaging?

Am I planning to use Helm or charts? Sorry... ah, Helm, the package manager. Actually, we started looking at that, but we haven't gone very far with it. Currently we're not using it; we've actually built a meta-language internally, because it's one of those things where, when we started, Helm didn't exist, so we developed our own way of packaging things and producing the images. It's not one of our highest priorities for the shifting-left I showed in my earlier diagram. So no, not at the moment; a few people have looked at it, but that's all.

Question: you mentioned that you have a way of managing secrets. Can you tell us a little more about that?

Okay. The problem is that secrets inside the master aren't themselves encrypted. They're not encrypted in memory, and they're not encrypted at rest either, except if the volumes of the VM that runs your etcd happen to be encrypted; and even then, as soon as the VM is up, they're visible. One of the constraints I was given was that we want the secrets in the master to be encrypted. There you immediately have a chicken-and-egg problem, which I haven't solved as such: if you want them encrypted in the master, then when they get to the pod they have to be decrypted somehow, and somehow you need a key in order to decrypt them. So where does that key come from? How do you secure it? Does it itself need to be encrypted, and if it's encrypted, how does it get decrypted?
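The usual way out of that loop is some form of envelope encryption: the master only ever stores ciphertext, and the key sits in an external security module that pods reach at start-up. A toy Python sketch of the pattern; the XOR keystream is for illustration only, and a real deployment would use a proper authenticated cipher with the key held in Vault, an HSM or a KMS.

```python
import hashlib

def keystream(key, length):
    # Toy keystream derived from SHA-256; illustration only, not real crypto.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key, data):
    # XOR with the keystream; applying it twice restores the plaintext.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Hypothetical flow: the key never lives in etcd, only in the security module.
module_key = b"key-held-by-vault-or-hsm"
stored_in_master = xor_crypt(module_key, b"db-password")   # ciphertext only
# At pod start-up an init container fetches module_key and rebuilds the
# secrets volume with the decrypted value:
seen_by_pod = xor_crypt(module_key, stored_in_master)
```

The chicken-and-egg problem doesn't disappear; it is moved to the security module, where access can be controlled and audited.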
A couple of solutions, or workarounds, to that have been put in place. I think Kelsey Hightower did an article, and it turns out that what I worked on, at more or less the same time he was working on his, was essentially the same kind of thing. Basically, we have some kind of security module somewhere which holds either the encrypted form of the secret, or a key that allows it to be decrypted; what we have in the master is either just a reference to that, or an encrypted version that can be decrypted with a key through the security module. The security module could be a full-blown HSM, or it could be Vault or something like that. What I do is use the init containers feature of OpenShift 3.3: when a pod starts up, I have an init container which examines the secrets. My secrets are now essentially configuration saying where to get the real secrets from, and I control access to those sources. I actually substitute the secrets volume on the fly, so the containers of the pod that need the secrets only see the final version, with the decrypted secrets; the config is something I manipulate in my init container to get the secrets from the right place. It doesn't fully solve the problem of how I prove that I'm allowed these secrets, that kind of thing, so it's not a full-blown solution, and it's not ideal: if you hack the master, you're obviously king of the cluster, you can do whatever you like, and you're going to get access to secrets anyway. But it's an improvement. It makes things harder; a hacker, or cracker rather, needs to go through more steps to get to your secrets, so you have more opportunity to audit that access and raise alerts through a SIEM, that kind of thing.

Question: this is going to sound like a trivial question.
Perhaps it is, but it's very controversial: how have you, or have you, implemented shared storage between pods?

Today, in production, we haven't; we don't have shared storage. The airline cloud availability application, for instance, doesn't need any: it's all stateless, with absolutely no persistence. For the data storage part it will be necessary. The idea is that you use some source of storage, Cinder or GlusterFS or something like that; I have to admit I don't know that we've actually selected one solution at this point. I think that's still open, but basically we will use some form of cloud-provided block storage for the persisted data. We have some studies ongoing for a data storage solution, and I think one of my colleagues actually wants to try to answer that question. If you stand up, Diane will see you and give you the mic.

So no, we are moving away from shared storage. On the data persistency side, with NoSQL we introduced local disks and flash, and even on the Oracle side we are moving away from RAC and from shared storage, precisely because, from an operational standpoint, that complexity significantly undermines the model and produces a less robust solution. So we are moving away from shared infrastructure components. Now, for the large volumes of data: yes, we are currently doing a study, assessing solutions from Red Hat but also from other providers for object storage, and we will build an object store backend soon.

All right, maybe I misunderstood what was meant by shared storage, then. Well, I want to thank Eric and Victor from Amadeus for this presentation today. They'll be around most of the week here at KubeCon, and afterwards in the beer area. And if you hadn't noticed, I was soft-shoeing and stretching things out, because the man in the far corner is the next speaker, and he's just arrived. Alexis, we'll get you set up; give us a second. So thank you very much.