Live from San Francisco, it's theCUBE, covering Red Hat Summit 2016. Brought to you by Red Hat. Now here are your hosts, Stu Miniman and Brian Gracely.

Welcome back to theCUBE, SiliconANGLE Media's flagship program, here at Red Hat Summit 2016. I always love when we get some of the customer stories here at Red Hat Summit. This year they've been talking about the open source stories, so we're really happy to have Amadeus on the program. Joining us, we have two of the gentlemen from Amadeus: Olaf Schnapauf, who is the CTO of Global Operations, and Dietmar Fauser, who is VP of Architecture, Quality and Governance in R&D. Gentlemen, thank you so much for joining us.

Thank you for having us.

All right, so first of all, for those that aren't familiar, Amadeus is in the travel industry. Maybe you can give our audience a little bit of background on the company.

Sure, Amadeus really is at the heart of travel. Whenever you book a flight, board a plane, or look for your luggage, it's very likely that you'll be using Amadeus. It's really at the heart of the travel industry, providing the IT services to airlines, airports, hotels and many more.

Yeah, actually it's funny, when you talk to most people and you say, what is IT? It's the stuff that makes stuff work behind the scenes, and you guys are the stuff that makes that stuff work behind the scenes. You guys are an innovation award winner for cloud. Can you tell us a little bit about the project that led us to today?

Yeah, sure. The business pushed us to be more agnostic when it comes to the environments where we run our applications, what we call multi-cloud capabilities, for various reasons that I won't develop here. But it became evident that we had to run some of our very large clusters, for the search and availability stuff we are serving, within the Google premises and surely on AWS, and so we were looking for platforms that would enable this.
This was about two years ago. We did some evaluation, and finally containers came into the picture, and OpenShift V3 as the container scheduling engine was there. We had good links with Red Hat and into Google, so we thought, okay. It was very early adoption, but we were convinced that this was the way to go.

Can you speak a little to the scale of the solution that you deployed, the requirements and the scale of it?

It's perhaps good to look at the scale of our existing solution, because this is basically the baseline of what we want to move over to the cloud platform. It's a very large system, running out of a central data center for the time being, by and large. A single application can grow beyond 65,000 to 70,000 CPUs, so really huge stuff. Overall, a couple of thousand physical machines, 500,000 of what we call deployment units, 300,000-plus transactions a second flowing into the system, so it's pretty big.

Yeah, so you talked about running it, operationalizing it in your own data center, and you talked about using Google as a public cloud. So talk a little bit about what that means from an operations perspective, running a hybrid cloud. A lot of people talk about it; you're making it happen in reality. What are the challenges, what have you learned over the last couple of years?

The core idea is that we separate the upper layer of the stack, which includes the application with OpenShift from Red Hat and all the container pieces, from the infrastructure underneath, and that's a big change. It allows us to very flexibly deploy the workload to where we want it to be.
Sometimes that's in a specific jurisdiction, sometimes that's just close to where it's consumed, because a lot of our customers also use cloud platforms like Google and Amazon, and it's nice if these workloads are produced where they're being consumed. That separation of the application part of the environment from the infrastructure is key, so we're really flexible in where we can produce it: sometimes on our own private cloud when data sensitivity or privacy requires that, or out there in the public cloud as well. That makes it really flexible.

Yeah, so this wasn't just driven by your own internal needs, it was driven by the marketplace, locality, geography, all those types of things as well.

Yes, really, it's a mix of requirements that makes it a very powerful solution, and it allows us flexibility that running on a mainframe couldn't give us.

Interesting. Can you speak at all to the customer impact? What did your customers see as the outcome once you deployed the solution?

Sure. So you were asking about the concrete drivers; one, for example, is the response time of the system, the SLAs. We work with a very large US-based hotel chain, and they expect 140 milliseconds of response time for the 96th percentile of the queries coming into the system. When you look at this, you quickly realize that you don't want to travel across the Atlantic back and forth for every transaction, because you lose roughly 120 milliseconds on the wire. So it's about a much superior user experience in the end, because the systems are lightning fast if you move the computation to where transactions are really issued. And the same holds true with Google: when we push stuff into the Google environments, it's because this is where the transactions are produced. So it's about high availability and user experience, in the sense that we have a much better, faster experience.

You talked about containers. We've heard about container announcements from Red Hat all week.
You're running them in production. What does that mean? What's been the learning curve? How did you decide containers were the right way to move, from applications to operations? Give us a sense of what's going on in the container world for you.

Sure. Containers are really a change in paradigm in what the atomic unit is that we're trying to manage. And it'll be a couple of years' journey from all of the legacy systems to having everything componentized into containers. It also allows us to more flexibly carve up the application services, to span them over multiple locations, and to manage them in a much more homogeneous way than we were able to in the past. And really to have the ability to control components of the service on a very fine-granular basis without having to micromanage the infrastructure. It's really a decoupling of the infrastructure from the service.

Were you able to move existing applications to the OpenShift platform as well, or just new applications?

No, it's a move of all of our application estate, which is not finished; it's an ongoing journey. And the goal clearly is to move all of our applications to the new platform. The question is more whether we have to touch them when we move them onto the platform or not, because it's one thing to throw an application onto OpenShift, so container and scheduler, and another thing to be truly cloud-native and multi-datacenter active-active. So we talk internally about re-platforming the applications or not. It's a mix, actually.

So you're pretty early on some of the containers, the Docker and Kubernetes pieces. Can you take us into those conversations you had to have with upper management, risk assessments, things like that?

It's really not a question of risk or reward. It's really to make sure that we have the flexible means to serve our customers better, because it's really customer demands that require us to be agile, that require us to be flexible in where things are being produced.
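The arithmetic behind that SLA point can be sketched quickly. This is a hypothetical illustration built around the two figures quoted in the conversation, a ~140 ms response-time SLA at the 96th percentile and roughly 120 ms lost on a transatlantic round trip; the fibre speed, distances, and overhead factor below are assumptions, not Amadeus data:

```python
# Latency-budget sketch (illustrative numbers, see lead-in above).
SPEED_OF_LIGHT_IN_FIBRE_KM_S = 200_000   # ~2/3 of c in vacuum (assumed)
TRANSATLANTIC_ONE_WAY_KM = 6_000         # rough US East <-> Europe path (assumed)

def round_trip_ms(distance_km: float, overhead_factor: float = 2.0) -> float:
    """Round-trip wire time in ms. overhead_factor folds routing,
    queueing, and equipment delay on top of raw propagation."""
    one_way_ms = distance_km / SPEED_OF_LIGHT_IN_FIBRE_KM_S * 1000
    return 2 * one_way_ms * overhead_factor

def processing_budget_ms(sla_ms: float, wire_ms: float) -> float:
    """Time left for actual computation once wire time is paid."""
    return sla_ms - wire_ms

transatlantic = round_trip_ms(TRANSATLANTIC_ONE_WAY_KM)       # ~120 ms, matches the quoted figure
remote_budget = processing_budget_ms(140, transatlantic)      # ~20 ms left for the application
local_budget = processing_budget_ms(140, round_trip_ms(500))  # ~130 ms if served nearby
```

With only ~20 ms of the 140 ms budget surviving a transatlantic hop, moving the computation to where transactions are issued is the only way to meet the SLA comfortably, which is the argument being made here.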
And not least, as funny as it might sound, light is just too slow. If light has to travel across the globe to central production locations, then, as I said, with the SLAs that we are under for some of our services, the trip across the Atlantic and back is prohibitive. And thus we need to adjust to the requirements of our clients.

We're seeing all sorts of changes in the transportation industry, in the travel industry, and so forth. What are some of the big things at a technology level, whether it's around mobility or, like you said, SLAs? What else from your customers is driving you to make changes?

Customer centricity. Currently in the industry, you have many individual solutions. One is dealing with the airport, another one is dealing with the reservation, and you have mobile applications. But they are not very well connected around the experience of a customer. A system like ours has most of the information about the journey of a traveler, because usually we know when you fly that you also check in, or that you have ground transportation of some kind. So the goal currently is to make the whole experience much more user-centric and to offer services that we cannot offer as long as the systems stay disconnected.

Makes sense. Have you been to the Red Hat Summit before?

Yes.

OK, can you give us your take on the growth of the show, what you've been doing here, interactions you're having with your peers and the like?

The growth is amazing. Last year it was in Boston, and we felt that the location was a bit too small. So I think it was a good move to come here to Moscone, where the big shows are happening. Well, we have been watching Red Hat for quite a while. It shows that the model is flying, actually, that there is room for this type of business model, that there is large adoption across the industries.
And I hope that they're going to keep on growing, because it would show that our bets were right.

Yeah, we're really proud of having a partner that has so much success, and that the open source movement and the community working on all of these solutions are so successful. And for us, it's a great opportunity.

Yeah, this show is all about community. It's all about open source. How much are you not only leveraging the Red Hat technology, but also beginning to contribute to these communities, or just actively being involved with them? How is that changing your company?

We are very actively involved in a lot of the projects around it. So it's not just consuming services, as Paul mentioned on stage today. It's also largely contributing back the learnings and the fixes and the extensions that we need.

We go pretty far, actually. We very actively contribute to OpenShift; we have engineers dedicated to this. We had Red Hat engineers in Europe with our teams for a very extended period. It was really important to us to understand the platform deeply, because we adapt OpenShift to our existing communication systems. And that's the twist for us: we didn't want to rewrite the lower layers of our stack underneath the application, but plug our own service management solutions into the OpenShift service registry and orchestration modules. And this is where we did a lot of the work. We also bring batch support, persistency, and more onto the platform. So it's really important that big players play the game. Once you say you go open source, you have to go open source. We also encourage other companies to join us in this.

So a company of your size and scale is at the early edge of a lot of this technology adoption.
As you look out at the various things that you're using, what asks do you have, not just of the vendors but of the ecosystem in general? What would you like to see from the ecosystem to make your jobs, and your services to your customers, better?

So we really have a lot of work going on right now in making sure we have a global persistency layer, because we're really moving to multi-location, multi-cloud. And there's a lot of work still to be done to define data models that scale globally, to make sure that we have data distributed properly, and to integrate all of that with the upper layers of the stack, specifically with OpenShift. So I think there's a lot of good innovation still to be made in creating data models that span globally.

Yeah, the data plane of the system is surely where a lot of the focus is going. Monitoring would be another one. Actually, there are quite a lot of areas where I believe we can join forces and come up with stuff where we don't see any competitive advantage in what we are doing. I mean, we are not positioning ourselves as a platform vendor in IT, you know?

I want to give you both the last word. As you're talking to your peers, what kind of advice do you have for them in the technology space, in what they're adopting?

Perhaps I'll start with this one. My usual advice is: make conscious choices. And once you have made them, go down the path. Stay focused, invest in the technology. Especially with new stuff like OpenShift or OpenStack, don't make the error of believing that it's completely industrial-ready off the shelf, that you just take it, hit an install button and off it goes. You have to be ready to invest in the people, to train people, to have good people. And then you will see that the people love it. Contributing to open source is a super recruitment tool; it brings additional top-notch engineers in-house, because they really love it.
I mean, working on a Google code base is something with which you can attract very, very good people. And it's also about really having a vision, inspiring people, moving toward an end goal, thinking forward and being brave. It seems like a big mountain ahead of you, but if you don't start the journey, you'll never get there. Many people are stuck on legacy systems and feel they could never move away, and I think we're a good example that you can indeed move away and get to more flexible and more interesting platforms.

Dietmar, Olaf, I really appreciate you sharing the story of Amadeus. We'll be back with a wrap-up here from day two at Red Hat Summit 2016. You're watching theCUBE.