from San Francisco, it's theCUBE, covering Red Hat Summit 2018, brought to you by Red Hat. Hey, welcome back, everyone. This is theCUBE live here in San Francisco at Moscone West for Red Hat Summit 2018. I'm John Furrier, the co-host of theCUBE, with John Troyer, the co-founder of TechReckoning, an advisory firm in the area of open-source communities and technology. Our next guest is CUBE alumnus Dietmar Fauser, head of core platforms and middleware at Amadeus. Experienced Red Hatter, event goer and practitioner. Great to have you back, great to see you. Thank you. Good to be here. So why are you here? What's going on? Tell us the latest and greatest, what's going on in your world. Obviously you've been on theCUBE, you go on YouTube, there are a lot of videos on there where you go into great detail. You've been on the Docker journey, you've got Red Hat, you've got some Oracle, you've got a complex environment, you're managing cloud-native-like services. Tell us about it. We do, yes. So this time I'm here mostly to feed back some experience from concrete implementations out there, in the cloud and on-premise. Paul told me that the theme was mostly hybrid cloud deployment, so we have chosen two of our really big applications to explain how this concretely works out when you deploy to the cloud. So you were up there on stage this morning in the keynote. I think the scale of your operation may have raised some eyebrows as well; you're talking about a trillion transactions. Can you talk a little bit about your multi-cloud stance and what you showed this morning? Okay, so first, to frame the trillion transactions a bit: these are not traditional database transactions, they are individual data accesses in heavily in-memory-cached environments. But still, it's a very large number, and it's a significant challenge to run these systems. We're talking here about a more-than-100,000-core deployment of these applications.
So response time matters extremely here, because at the end what we are talking about is the backend that powers large B2C sites like Kayak, meta-search engines, online travel agencies, and so it just has to respond in a very fast way, which pushed us to deploy the solutions very close to where the transactions really originate, rather than our historical data centers in Germany. We just want to take out the back-and-forth travel across the Atlantic, basically, to create a better end-user experience. So you've got to drive performance big time. Very much so; it's either performance or high availability, or both, actually. This is a true hybrid cloud, right? You're on-prem, you're in AWS and you're in Google Cloud. Can you talk a little bit about that? All powered by OpenShift. OpenShift is the common denominator of these solutions. One of our core design goals is to build the applications in a platform-agnostic way. An application shall not know its deployment topology or its underlying infrastructure, which is why I believe that platforms like OpenShift, and Kubernetes underneath, are so important: they take over the role of a traditional operating system, but at a larger scale, either in big cloud deployments or on-premise. The span of operations that you get with these environments is just like an OS, but on a bigger scale. I'm not surprised that people talked about this as a data center operating system for a while. And so we use it this way. OpenShift is clearly the masterpiece, I would say, of the deployment. That's the key, though. I think that thinking about it as an operating system or an operating environment is the architectural mindset that you have to be in. Because you've got to look at these resources and the connections and link them together; you've got all the same systems concepts. So you've got to design like a systems person.
How does someone get there who may or may not have the traditional systems experience that us early-generation systems folks have gone through? Because you have DevOps automating away things. You have more of an SRE model that Google's talking about. You're talking about large scale; it's not a data center anymore, it's an operating environment. How do people get there? What's your recommendation? How do I learn more, and what do I do to deploy architecturally? That's a key question, I think. So there were two sections to your question. First, how to get there. I think at Amadeus we are pretty good at catching big trends early in the industry. We are very close to large engineering houses like Google and Facebook and others, or Red Hat of course. And so it was pretty quickly clear to us, at least to a small group of decision makers, that the combination of Red Hat and Google was kind of a game-changing event, which is why we went there. And containers have been important for you guys. Containers were coming along. So when this happened, Docker became big, and our development teams wanted to do containers. It was not something that management had to push for; it was a grassroots type of adoption. So different pieces fell together that gave us some form of certainty, or belief, that these platforms would be around for the decade to come. Developers love Kubernetes. I mean, containers; it's like a duck to water, it's just natural. Talk about Kubernetes now. OpenShift made a bet on Kubernetes. A few years ago people were like, what is that about? Now it's obvious why. How are you looking at the Kubernetes trend? Obviously it creates a de facto capability; you can wrap services around it. There are notions of service meshes coming. Istio is the hottest project in the Linux Foundation's CNCF, and Kubeflow is right behind it. These are about services, microservices and workload management.
How do you view that? What's your opinion on that direction? So I'm afraid there is no simple answer to this, because if you start new solutions from scratch, going directly to Kubernetes and OpenShift is the natural way. Now, the big thing in large corporations is that we all have legacy applications, whatever we call legacy. In our case, these are pretty large C++ environments that are relatively modern, but that are not strictly microservice-based. They are a bit fatter; they have an enterprise service bus on top. And we have very awkward old network protocols. So going straight to the mesh for these applications and microservices is not a possibility, because there is significant re-engineering needed in our own applications before we believe it makes sense to throw them onto a container platform. We could stick all of this in a container, but you have to wonder whether you get the benefit you really want. And the benefit comes. It's a time ROI, a return on investment, on the engineering of retrofitting a service mesh. The interesting thing is, Kubernetes or not, we would have touched these applications anyway to cut them into more manageable pieces. We call this componentization; other people might call it microservice-ification, or however you want to call it. To me, this is work that is independent from the cloud strategy and so on. So some of our applications, to move faster, we have decided to put more or less as they are onto OpenShift. For others, we take some more time and say, okay, let's do the engineering homework first so that we reap the full benefits of these platforms. And the benefit, really, what is fundamental for developer efficiency and agility, is that you have relatively small, independent load sets so that you can quickly load small pieces. You can roll them in. Time to production. Time to product, to production. But also quality.
I mean, the more you isolate the changes, the less you run the risk that a change cross-impacts other things in the same delivery, basically. So it's a lot about managing smaller chunks of software. And for this, obviously, a microservice platform is absolutely ideal. So it helps us push the spirit of the company in this direction: no more monolithic applications, fast daily loads. Morale's higher, people are happy. Well, it's a long journey. So some are happy, some are impatient like me to move faster, and some are still a bit reluctant. It's normal in large organizations. Talk about the scale; I'm really interested in your reaction and experience. I think that's a big story. As cloud enables more horizontally scalable applications, the operating aperture is bigger. It's not like managing systems here; it's a bigger picture. How are you guys looking at the operational framework of that? Because now it's essentially a site reliability engineering role, the SRE model that Google talks about. You're operating, but you're still developing code and writing applications. Absolutely. So talk about that dynamic and how you see it playing out going forward. Okay. So what we try to do is to separate the platform aspects from the application aspects. I'm leading the platform engineering unit, including platform operations. This means that we have the platform SRE role, if you want, and we oversee frontline operations, the 24-by-7 stability of the global system. And to me, the game is really about trying to separate and isolate as much as we can from the applications and put it on the platform, because we have close to a hundred applications running on the platform. And if we can fix stuff on the platform for all the applications, without being involved in their individual load cycles and waiting for them to integrate features, we just move much faster.
So you can decouple the application from some core platform features. Exactly. Make them highly cohesive. It sounds like an operating system to me. It is. And I'll come to the second part of the SRE question a bit later. Currently, the big bulk of the work we are doing with OpenShift is to bring our classical platform stuff under OpenShift. And by classical platform, I mean our internal components like security, business rule engines, communication systems, but also the data management side of the house. And I think this is what we're going to witness over the next two, three years: how can we manage, in our case, Couchbase, Kafka, all of those things. We want them to be managed as applications under OpenShift, with descriptive blueprints, descriptive configurations, which means you define the to-be state of the system and you leave it to OpenShift to ensure that if the to-be state, like "I need a thousand pods for a given application," is violated, OpenShift will automatically repair the system. That's interesting. You bring up a dynamic that's a trend we're seeing. I want to get your thoughts on this; it hasn't really been crystallized yet, and I haven't heard a good explanation, but the trend seems to be to have many databases. In other words, we're living in a world where there's a database for everything, but not one database. So if I've got an application at the edge of the network, it could have its own database. So we shouldn't have to design around a single-database concept; the concept should be that there will be databases everywhere, living and growing, and you manage that. First, do you believe that? And if so, how do you architect the platform to manage a potentially ubiquitous number of different kinds of databases, where the apps are driving their own database role and working with a core platform? It seems to be an area people are really talking about, because this is where AI shines if you get that right.
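The "to-be state" Fauser describes is the reconciliation pattern at the heart of Kubernetes and OpenShift: a controller continuously compares the declared state with what is actually running and repairs the difference. A minimal sketch of the idea (the function and the replica counts are illustrative, not OpenShift internals):

```python
# Minimal sketch of a Kubernetes-style reconciliation loop:
# compare the desired ("to-be") state with the observed state
# and emit the repair actions, as Fauser describes OpenShift
# doing when pods die.

def reconcile(desired: dict, actual: dict) -> list:
    """desired/actual map app name -> pod (replica) count;
    returns the actions needed to converge actual on desired."""
    actions = []
    for app, want in desired.items():
        have = actual.get(app, 0)
        if have < want:
            actions.append((app, "start", want - have))
        elif have > want:
            actions.append((app, "stop", have - want))
    return actions

# Declared to-be state: the application needs a thousand pods.
desired = {"booking-engine": 1000}
# Observed state after losing part of the cluster: 700 survive.
actual = {"booking-engine": 700}

print(reconcile(desired, actual))  # [('booking-engine', 'start', 300)]
```

A real controller runs this loop forever against the cluster API, which is why the operator only declares state and never scripts the repair.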
So I agree with you that there are a lot of solutions out there, sometimes a bit of a confusing choice of which type of solution to choose. In our case, we have quite a mature, what we call, technical policy and a catalog of technologies that application designers can choose from. There are several data management stores in there. Traditionally speaking, we use Oracle, so Oracle is there, and it's a good solution for many use cases. And then we were very early in the NoSQL space, so we introduced Couchbase for highly scalable environments and Mongo for more sophisticated objects or operations. And we try to educate, or talk with, our application people not to go outside of this. We also use Redis for platform-internal things. So we try to narrow the choices down. What about the glue layer; are there any kind of glue-layer standards gluing things together? Well, in general, we always put an API layer on top of the solution. We use our own infrastructure-independence layer when we talk to the databases, so we try not to have native bindings in the application. It's always about disentangling platform aspects from the application. So Dietmar, you did talk about this architectural concept of these layers, and you're protecting the application from the platform. What about underneath, right? You're running on multiple clouds. What have been the challenges? In theory there's a separation layer there and OpenShift is underneath everything; you've got OpenStack, you've got the public clouds. Have there been some challenges operationally in making sure everything runs the same? Clearly, there are multiple challenges. To start with, the different infrastructures do not behave exactly the same. Just taking something from Google to Amazon works in theory, but practically speaking the APIs are not exactly the same, so you need to re-map the APIs.
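An infrastructure-independence layer like the one Fauser mentions usually boils down to the application coding against one narrow interface while the concrete store hides behind it. A hypothetical sketch (the class and function names are invented for illustration, not Amadeus code):

```python
# Hypothetical sketch of an infrastructure-independence layer:
# the application sees only one narrow data-access API, and the
# concrete backend (Oracle, Couchbase, MongoDB, Redis...) is an
# adapter behind it, so no native database bindings leak into
# application code.

from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    """The only data-access API applications are allowed to see."""

    @abstractmethod
    def put(self, key: str, value: object) -> None: ...

    @abstractmethod
    def get(self, key: str) -> object: ...

class InMemoryStore(KeyValueStore):
    """Stand-in backend; a Couchbase or Oracle adapter would
    implement the same interface."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

def lookup_fare(store: KeyValueStore, route: str) -> object:
    # Application logic never imports a database driver directly.
    return store.get(f"fare:{route}")

store = InMemoryStore()
store.put("fare:NCE-SFO", 842)
print(lookup_fare(store, "NCE-SFO"))  # 842
```

Swapping the backend then means shipping a new adapter on the platform side, without touching the close-to-a-hundred applications above it.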
The underlying behavior is not exactly the same in general from an application design point of view, and we are pretty used to this anyway because we are distributed systems specialists. But the learning curve comes from the fact that you go to an infrastructure that is in itself much less reliable if you look at individual pieces of it. It works fine if you use the availability zone concepts well, and you start with a mindset that you can lose availability zones, or even complete regions, and take this as a natural event that will happen. If you're in this mindset, there aren't so many surprises. OpenShift deals very well with the unreliability of the virtual machines. In the case of Google, we even contract what are called preemptible VMs, so they get restarted very frequently anyway, because they have a different value proposition: if you can run with less reliable stuff, you pay less, basically. So if you can take advantage of this, you have another advantage using those. Dietmar, great to hear your stories. Congratulations on your success and all the work you're doing. It's really cutting-edge, great work. You've been to many Red Hat Summits. What's the revelation this year? What's the big thing that people should know about that's happening in 2018? Is it Kubernetes? What should people pay attention to, in your opinion? I think we can take Kubernetes now as granted. And that's very good news for me and for Amadeus. It was quite a bet at the beginning, but we see this now as the de facto standard. So I think people can now relax and say, okay, this is one of the pieces that will be predominant for the decade to come. Though usually the IT decades I'm referring to are only three years long, not 10 years or so. And it's moving to an operating system environment. I love that analogy. I think it's totally right from the data that we see. We're living in a cloud-native world: hybrid cloud, on-premise, still true private cloud, as Wikibon calls it.
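The "assume a zone can vanish" mindset often reduces, on the client side, to a simple failover pattern: try each availability zone in order and treat failure as routine rather than exceptional. A toy sketch under that assumption (zone names and handlers are illustrative, not Amadeus logic):

```python
# Toy sketch of zone-failure-as-normal design: try each
# availability zone in order and fail over on error, instead of
# treating a lost zone or a reclaimed preemptible VM as
# exceptional.

def call_with_failover(zones, request):
    """zones: list of (name, handler); each handler raises on failure.
    Returns (zone_name, result) from the first healthy zone."""
    errors = {}
    for name, handler in zones:
        try:
            return name, handler(request)
        except Exception as exc:  # zone lost, VM preempted, etc.
            errors[name] = str(exc)
    raise RuntimeError(f"all zones failed: {errors}")

def dead_zone(req):
    raise ConnectionError("zone unreachable")

def healthy_zone(req):
    return f"ok:{req}"

zone_list = [("europe-west1-b", dead_zone), ("europe-west1-c", healthy_zone)]
print(call_with_failover(zone_list, "price-check"))
```

With this posture, a preempted VM or a lost zone just shifts traffic; only when every zone fails does the caller see an error.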
And really it's an operating system concept, architecturally. And IoT is coming fast. It is, yeah. It's just going to create more and more data. So what I believe, and what we believe in general at Amadeus, is that the next evolution of systems, the big architectural design approach, will be to create applications that are much more streaming-oriented, because it allows you to decouple the different computing steps much more. Rather than waiting for a transaction, you subscribe to an event. And any number of processes can subscribe to an event; the producer doesn't have to know who is consuming what. So we go streaming, data-centric and massively asynchronous. Which yields smoother throughput and fewer hiccups, because in transactional systems you always have something that temporarily slows down a little bit. It's very difficult to architect systems with absolute full separation of concerns in mind, so sometimes a slowdown of a disk might trigger impacts on other systems. With a streaming and asynchronous approach, the systems tend to be much more stable, with higher throughput, and simpler. And more scalable. More scalable. There's the horizontally scalable nature of the cloud. Absolutely. You've got to have the streaming and this architecture in place. This is a fundamental mistake we see with people out there: they don't think like this, but then when they hit scale points they have to re-architect. Absolutely. And so, I mean, we are a highly transactional shop, but many of our use cases are already asynchronous. So we go a step further in this, and we currently work on bringing Kafka massively under OpenShift, because we're going to use Kafka to connect data center footprints for all types of data that we have to stream to the applications that are out in the public cloud or on-premise, basically. I should call you a professor, because it's such a great segment. Thanks for sharing an awesome amount of insight on theCUBE. Thanks for coming on. Good to see you again.
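The decoupling Fauser describes, where the producer has no idea who consumes an event, is the core of any publish/subscribe design such as Kafka's topics. A minimal in-process sketch of that property (the `EventBus` class and topic names are invented for illustration; a real system would use a broker):

```python
# Minimal sketch of publish/subscribe decoupling: the producer
# publishes to a topic and does not know how many consumers are
# subscribed -- the property that lets streaming systems decouple
# computing steps instead of waiting on a transaction.

from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # The producer never enumerates or waits on specific
        # consumers; it just hands the event to the bus.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
seen = []
# Any number of processes can subscribe to the same event.
bus.subscribe("booking.created", lambda e: seen.append(("pricing", e)))
bus.subscribe("booking.created", lambda e: seen.append(("analytics", e)))

bus.publish("booking.created", {"pnr": "ABC123"})
print(seen)
```

Adding a third consumer here changes nothing for the producer, which is exactly why this style yields smoother throughput than chained synchronous transactions.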
You're welcome. That was Dietmar Fauser, head of core platforms and middleware at Amadeus. It's been a great time today, down and dirty, getting under the hood. Really about the architecture of scale, high availability, high performance; the systems have to be scalable with cloud, and open source is powering it. We appreciate it, Red Hat. It's theCUBE bringing you all the power here in San Francisco for Red Hat Summit 2018. I'm John Furrier, with John Troyer. We'll be back with more after this short break.