So, 100 years ago, my grandfather was born in a little town called Bhuj, very close to the India-Pakistan border. Except, 100 years ago, there was no India and no Pakistan; it was all part of Her Majesty's British Empire. When he was 17, my grandfather decided he wanted to become a lawyer. But the nearest law university was in a town called Kolhapur, a whopping 760 miles away. So when he left for college, my grandfather took a camel cart to the nearest coastal town of Kandla. And yes, don't believe everything you read on Wikipedia and what your Desi friends tell you: Indians do travel on camels, not just on elephants. From Kandla, he took a steamship to the fishing hamlet of Mumbai, where he went to the Victoria Terminus train station and took a train to Kolhapur. The journey of 760 miles took three days.

And I know what you're saying to yourself: that's all very interesting, but what does it have to do with software-defined networking, software-defined data centers, anything? It has everything to do with it, because 100 years later, my grandfather's favorite grandson (I checked, there's nobody in my family in the audience) traveled 2,700 miles from Boston to San Francisco, and did it in six hours. And that's not all: I checked in for my flight on my watch.

The industry that you and I are privileged to be a part of, the technology industry, is absolutely going through a period of incredible change. And the industry that I'm privileged to serve in my current role, the travel industry, is no different. Think about it. We get in a car with a stranger who takes us to another stranger's house where we spend the night, and we call it not spring break in college but the sharing economy. We wear little computers on our wrists and carry them in our pockets, always connected, transmitting information somewhere. We call it the Internet of Things. And regardless of where you get your news, whether you watch the real news or the fake news (I'm not judging here), we live in a very interesting and very dynamic geopolitical landscape.

So what I'll bring to you over the next few minutes is: who is Amadeus? Who are we? What do we do? How do we serve the travel industry? And then, how do we use technology, including software-defined networking and data centers, to serve our customers in travel and to enable millions of journeys every single day?

We're actually a 30-year-old company, and a lot of you may not have heard of us. We are celebrating our 30th birthday this year; the company was started in 1987 by a consortium of four European airlines. Today we are in 195 countries. For the geography buffs among you, that's almost as many countries as there actually are. For the fast-food buffs, that's more countries than McDonald's and Starbucks combined have a presence in. Our team is over 14,000 employees, and last year we earned almost 4.5 billion euro in revenue. An even lesser-known fact about us: for the last few years, we've consistently been on Forbes magazine's list of the top 10 software companies in the world.

So what is it that we do? Well, if you've been here, or here, or here, chances are that we've powered your journey: your booking, your actual travel, and your destination. And you're saying, wait a minute. I look for flights on Kayak. I book them on Expedia or American Express. I stay at a Holiday Inn. I fly Southwest or American Airlines. What does Amadeus have to do with it?
Well, all of those, plus about 50 other well-known travel brands in North America, are customers of ours. And if you look at it globally, we have thousands more. We have 90,000 travel agents that use our platform. We offer content from over 700 airlines in the world; you can look for flights and book them on our system. We work with over half a million hotels, 200 airports, and ground handlers.

To give you a little idea of what that actually means: last year, we did about 600 million reservations. On some days, that comes to four to five million reservations. A reservation, or as it's known in the industry, a PNR, a passenger name record, is basically the record of a trip: it includes flights, and it can include hotels and rental cars. (I'll show a rough sketch of what goes into a PNR in a moment.) My guess is that those of you who traveled here from out of town had a booking and a booking reference, and if you look closely at it, there is at least a 50% chance that your actual trip was recorded in Amadeus.

We also have a very strong airline business, where we do departure control, flight management, and a whole host of other systems for the airline industry. We do this for over 175 airlines. To give you an idea of the scale there: 1.4 billion passengers boarded flights last year using our platform. That roughly equates to about 2,600 people every minute, or, to put it differently, in the time that you've been listening to me, about 85 Boeing 747s have been filled with passengers who are about to leave on a flight. So think of that for a minute. Think of what that says about the mission criticality and the scale at which we operate.

So how do we do it? Well, first and foremost, we do it through a very strong investment in engineering. We are the leading technology provider for the travel sector. We have 20 different R&D locations, including the one that I work in just north of Boston. And we spend a lot on engineering: on building new projects, building new products, entering new markets, serving new customers, and continuing to get better at what we do.

It's probably helpful to take a quick trip down history lane to see how we got started. When the company got started, no surprise, like a lot of other companies at that time, we built a lot of our core systems on mainframes. What that meant is we had these modules in the back, and we essentially shipped terminals to the travel agents that used our systems. There was basically no intelligence in the terminal; it was just a direct connection back to the mainframe. And that worked well enough (a lot of companies still use mainframes today) until the PC came along. But really, all that changed is that we put desktop clients in front, so there was a better interface for our users. In some cases, where users still preferred and were proficient in the old system, we just put in a terminal emulator. Then the web came along; that's actually a screenshot of a Yahoo travel page back in the day. And when these changes started happening, we obviously had to keep evolving. We built APIs. We started working with the very early browsers of the day. Then, of course, you heard from our friends at AT&T this morning, and 10 years ago, as Mr. Jobs famously said, this changes everything. And it did change everything. You heard how much more traffic the carriers have had to deal with in the last 10 years.
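Keeping the promise from a moment ago, here is a minimal sketch of the kind of record a PNR represents, written as a Python data structure. Every name and field here is a hypothetical illustration for this talk, not Amadeus's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: these field names are hypothetical, not Amadeus's schema.
@dataclass
class FlightSegment:
    carrier: str       # airline code, e.g. "6X"
    flight_number: str
    origin: str        # IATA airport code, e.g. "BOS"
    destination: str   # e.g. "SFO"
    departure: str     # ISO 8601 timestamp

@dataclass
class PNR:
    record_locator: str    # the booking reference printed on your itinerary
    passengers: List[str]  # the "name" in passenger name record
    flights: List[FlightSegment] = field(default_factory=list)
    hotels: List[str] = field(default_factory=list)       # optional segments
    rental_cars: List[str] = field(default_factory=list)  # optional segments
```

The point is simply that one small record ties together every segment of a trip, which is why a single booking reference is enough to pull up your whole journey.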
For us, what it also meant is that we saw a lot more B2B transactions. A lot more websites came up offering travel options, and they were not connecting to us like before, with human beings sitting at terminals, because obviously there's a limit to how fast somebody can type, and therefore to the number of transactions we'd get. We basically had a whole B2B sector to serve that was automating the transactions coming to us and dramatically increasing the volume of what we did.

So our architecture around this time, about five to six years ago, looked something like this: a very standard, what I would call SOA, architecture. We had a services integrator that was essentially a front-end and messaging broker. We served a variety of channels. And then we had a lot of back-ends that did very specialized functions for the travel industry; you see their names out there, all supported by different databases. All in all, we're talking about 5,000 microservices (we didn't necessarily call them microservices back then), about 1,500 databases, and a peak of about 80,000 transactions per second. Those are the kinds of volumes we were working with.

And to power all of this, we had world-class infrastructure. That's a picture of our data center in the town of Erding in Germany, right outside Munich Airport. It's a data center that was purpose-built to serve us and our customers, because again, 30 years ago, there were no general-purpose data centers you could rent that would do everything you needed. This is our largest data center, and we have a host of others all across the world, including a couple here in North America. We are pretty used to operating at very large scale on the infrastructure side too: we have close to 20,000 infrastructure devices, and we change them quite a lot. Your cabling updates, your OS updates, your security patches. And in the age before CI and CD, before continuous integration and continuous delivery, we still made about 600 application software changes every month. Think about that: it translates to about 20 to 25 changes every single day. So while we did not practice continuous deployment, as we call it today, we had a high degree of change.

And when you have that, you have this problem. Those of you that smiled are obviously aware of the pets paradigm, and we were no different. We had an application; we would bring a server to our data center; we would give it a loving name, like apy_106b. And then, when apy_106b was not feeling well, we would call the expert doctors from production support and level two and try to nurse the server back to good health. Clearly that was not going to scale with the amount of change that we were doing. And we knew that we had to move from pets to cattle, or in other words, treat the data center as a computer.

So the first step in our journey was really to decouple what should run from where it should run. And to do that, these were the two trusted partners and tools that we worked with. We're a big user of Linux; in fact, over 90% of our servers run on Linux at this point. And we turned to OpenStack to essentially help us build an infrastructure-as-a-service layer.
Now, Martin said this morning, I don't know if he's still here, but he put it very elegantly, that you have to virtualize first before you change your abstractions, and that is pretty much what we did. In our case, given our relative newness to this and the relative newness of the industry, we worked with VMware, and we had what I'd call a very classic, very traditional deployment. But over a couple of years we did manage to completely virtualize our infrastructure and build a true infrastructure as a service, using VMware Integrated OpenStack for the compute side and NSX for the networking side.

In going through that process, we realized a lot of the benefits that you are all very familiar with and would come to expect from something like this. We got much better time to market: we had a series of 200 steps which took us about three weeks to deploy a new server, and we were able to get it down to basically 20 minutes. I know you see this in marketing brochures all the time; this is something we were able to do, and we do it every day. (I'll show a small sketch of what provisioning against that layer looks like in a moment.) The reason I'm here to talk about it is also to give you a proof point that sometimes the exciting ideas from the research community, from the academic community, the projects that our friends on the vendor side or the carrier side work on, do end up giving significant benefits to our business.

One interesting benefit was distributed logical routing. What we realized is that because of the way SDN works, and because of the way we configured NSX, we were essentially able to reduce the amount of east-west traffic crossing the physical network. With our transaction volumes, we were having a big problem with heat zones in the data center, purely in terms of traffic volume, and that was reduced significantly, in addition to obviously improving latency. We were also able to do a much better job at segmentation and multi-tenancy. Keep in mind, we serve different lines of business, different customers, and different time zones, so this allowed us to do a much better job of segmenting that and implementing multi-tenancy right down at the network layer and at the compute layer. And thanks to the vast number of tools and the automation out there, our operations team was able to do a much better job staying on top of the infrastructure.

But we still had one problem, and anybody that has worked in development and operations like me knows this is true: just because you have a nice virtualized infrastructure doesn't mean your dev team and your ops team are talking to each other. We had a second problem, and that was you. You and me and all of us, on our little phones and our tablet computers, always connected thanks to ubiquitous networks. We are always out there looking, checking to see if we can get a good deal on our next planned vacation. That kept increasing the transaction load and the volumes we had to deal with, particularly on the front end. And that wasn't all. Just in the last two years, we've seen a whole host of other devices that have nothing to do with humans come on board and interact with us. We have hotels that are trying to use robots as virtual concierges in the lobby. We have airlines that are trying to use AR and VR to sell their product better. And all these devices at some point create network traffic back into our data centers.
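As promised, here is a rough illustration of what "a server in 20 minutes instead of three weeks" means in practice: a minimal provisioning sketch against an OpenStack infrastructure-as-a-service layer, using the openstacksdk Python client. The cloud, image, flavor, network, and server names are all hypothetical placeholders, not Amadeus's actual configuration:

```python
import openstack

# Hypothetical names throughout: the named cloud comes from a local
# clouds.yaml, and the image/flavor/network are stand-ins.
conn = openstack.connect(cloud="private-cloud")

image = conn.compute.find_image("rhel7-base")
flavor = conn.compute.find_flavor("m1.large")
network = conn.network.find_network("prod-segment")

# One authenticated API call replaces the old multi-step manual runbook.
server = conn.compute.create_server(
    name="booking-frontend-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the instance reaches ACTIVE: minutes, not weeks.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

The design shift is the point: a new server becomes an API call against the IaaS layer, which is what makes automation and self-service possible at all.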
To the extent that you remember that 80,000 TPS number I talked about a few minutes ago: it's now more like one million transactions every second, and we routinely do, I'd say, 40 to 70 billion transactions on any given day at this point. As we started dealing with this growth in volumes, almost exponential growth, it was very clear to us that we had to get to a different model, that just virtualizing our infrastructure was not going to be enough. We had to deploy the application as a whole, with all its dependencies, to instances that could be managed as clusters. In other words, we had a virtualized but very custom infrastructure, and we wanted to move to a more standardized approach.

Now, the beauty of standardization, and it's no secret why there's a container analogy here, is that having containers meant you could not just load them on the big shipping carrier, which could be our data center, but you could put them on airplanes, you could put them on trucks; they're easy to port around. That led to the second phase in our journey, which was really moving to and building our own platform-as-a-service layer. We very cleverly call it Amadeus Cloud Services. To do this, we have been working with Red Hat and OpenShift. Given the lineup of speakers we've had here so far and the avant-garde work they are doing, this is probably not that groundbreaking, but it's still pretty leading-edge for an enterprise running these kinds of systems in production to undertake. Our approach has been to use Docker to containerize our applications, and then use Kubernetes, from Google, for the deployment, the scaling, and the management of those containers.

So, very quickly, the way it works; if you're familiar with Kubernetes, this will just be a quick recap for you. We take our application into containers, you specify a blueprint, and those containers run as pods. In Kubernetes, you can then specify how your pods scale: you can scale up, you can scale down, you can have elastic scaling. You can obviously run multiple pods on a given set of infrastructure and clusters. And then the beauty is the self-healing part of it: if one particular node flames out, the pod essentially gets instantiated somewhere else and life goes on.

Now, how did we do the networking part of this? There are a couple of options. Kubernetes provides native APIs that allow you to have visibility into the service endpoints. We took a different route. If you go back to the thousands of microservices we have, what we have been doing, and I say doing because this is a journey we are on (we have launched parts of this into production, and every month, every quarter, we have a roadmap where we are bringing more and more applications and services into this paradigm), is basically to take a number of pods that together form a single service in our case. We use a Kubernetes mechanism that allows each service to essentially have its own IP address and its own port, and we, of course, have multiple services. So the way it ends up looking is that each service gets its own IP and its own port, and then each node that the service is running on has a local proxy that maintains the topology and keeps the iptables rules up to date. That allows the service distribution to happen very easily and automatically. (A rough sketch of a deployment and service in this shape follows below.)
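Here is the promised sketch, using the Kubernetes Python client, of the shape just described: a set of pods behind one Service that gets its own stable IP and port, with the per-node proxy (kube-proxy) handling the traffic rewriting. The application name, image, and replica count are hypothetical, not Amadeus's actual services:

```python
from kubernetes import client, config

config.load_kube_config()  # credentials from the local kubeconfig

# Hypothetical app name and image; three replicas of one containerized service.
labels = {"app": "availability"}
container = client.V1Container(
    name="availability",
    image="registry.example.com/availability:1.0",  # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="availability"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # scale up or down by changing this one field
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# The Service gets its own stable cluster IP and port; the local proxy on
# each node rewrites traffic to whichever matching pods are currently alive,
# so pods can die and be rescheduled without clients noticing.
service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="availability"),
    spec=client.V1ServiceSpec(
        selector=labels,
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```

Because clients only ever see the Service's IP and port, the self-healing and rescheduling described above stays invisible to them.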
So basically, if a service is running on a certain set of nodes, and for whatever reason, for elasticity or for continuity purposes, that service, the red one in this case, were to move, the redirection pretty much happens on its own. What we are definitely doing is working with industry leaders, working with techniques that have a reasonable chance of working, or have been proven to work, in production. I'm not going to stand here and say we are coming up with new ideas to do this, like Martin or Amin talked about earlier. But we are definitely following a model of very close partnership and very rapid iteration, to make sure that we can actually scale these techniques, not just in production, but in mission-critical applications like ours, and at the scales that we run.

Our platform as a service has been in production now for about a year and a half, and there are some things we've clearly benefited from. The first has been a tremendous benefit to our developers, because we no longer have to restrict them, no longer tie their application environment choices to the infrastructure we are going to have. We have essentially given them the freedom to build applications in the languages and methodologies they are comfortable with.

We've also been giving back to the open source community. When you start using and leveraging open source code, giving back is part of the code of conduct. We are good citizens, and I'm thrilled to say that for several key modules in the OpenShift community, in the Kubernetes community, and in several other open source communities that are not networking related, our engineers have not just been using open source but contributing peer-reviewed code modules back into these forums.

Implementing a platform as a service for ourselves has also given us a level of flexibility for business continuity purposes. When we need to move workloads from one site to another, we're able to do that much more easily and much faster than we could otherwise.

And finally, this is important. Orpeth was kind enough to give you a little flavor of my background: I've worked on the vendor side, I've worked in other enterprise roles, I've worked for a carrier. And this whole concept of hybrid cloud has been around for a long time. What I'm really thrilled about, in the way Amadeus has implemented our technology roadmap and vision, is the fact that we have actually implemented the hybrid cloud model. It's not a theory; it's not something we say we will do while only doing a little part of it. This is something we have in production today, and it looks a little bit like this. We've worked with all three major public cloud platforms, and essentially what we did is take our infrastructure and make it a private cloud. A lot of our critical data and infrastructure still very much resides on-prem in our data centers, but we are now giving our customers the choice to run certain parts and certain applications either in their data centers or on a cloud provider of their choosing. By getting to that platform-as-a-service layer, we are able to truly abstract the applications we provide from the infrastructure they run on (a small sketch of what that looks like across clusters follows below). And a good example of that is on our website; you can go look it up if you are interested.
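Before the example, here is a minimal sketch of what working across several clusters can look like with the Kubernetes Python client: the same API objects and calls apply unchanged whether the kubeconfig context points at an on-prem cluster or one on a public cloud, which is the abstraction that makes the hybrid model practical. The context names here are hypothetical:

```python
from kubernetes import client, config

# Hypothetical kubeconfig context names: one on-prem private cloud and two
# public cloud clusters. The same API objects work against any of them.
CONTEXTS = ["onprem-private", "public-cloud-a", "public-cloud-b"]

def report(context: str) -> None:
    # Build an API client bound to one named cluster from the kubeconfig.
    api = config.new_client_from_config(context=context)
    v1 = client.CoreV1Api(api_client=api)
    for pod in v1.list_namespaced_pod("default").items:
        print(f"{context}: {pod.metadata.name} is {pod.status.phase}")

for ctx in CONTEXTS:
    report(ctx)
```

With that in mind, on to the example.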
Something we do for the airlines is our airline availability application. We have this in production with a number of airlines, where we do revenue management, inventory, and all the algorithms that go into the calculation of flight availability. We run this in our data center, but we also distribute those availability algorithms and databases across multiple cloud locations. We are able to do this thanks to the platform as a service that we built, again with OpenShift and using Kubernetes, so that we can peacefully coexist with multiple cloud providers for the sake of our customers and our business.

So, some quick closing thoughts; I'm going to share some of my observations and what's worked. I'm going to channel my inner Yoda here: do or do not. I think a lot of times companies get stuck on, should we do this, should we not? Is the technology ready? Is it really going to give us the benefits? It is very easy to stay in that mode and not take the next step. What we have found is that having good, strong leadership and a good, strong internal vision is very important, because it gives our teams the flexibility to go down an uncharted path to some extent. It gives them the freedom to work with industry leaders and to work in a model where we can partner to solve a lot of common problems together.

It is a brand new world, especially when you talk about DevOps and when you have teams that have been working in very specific, siloed modes to some extent. It's very hard to move into this shared-responsibility, continuous-integration, continuous-deployment model, and it is something you, as an organization, really have to encourage and nurture. And speaking of nurturing, I think the kind of culture you have for your engineers is equally important. We are, again, very fortunate that our engineers are a very curious bunch. They like to find out what's going on. They challenge themselves, even if nobody else is doing so. And they bring new ideas and new approaches to us constantly.

Probably the last takeaway, I would say, is: remember the why. We're not a well-known research institution like some of you represent here. We're not a maker of equipment like others of you are. We're very much users of the brilliant thinking and the innovative engineering you do in bringing solutions to us. But the reason we do it, and the reason the software-defined strategy is so important, is that it allows you, at the risk of sounding a little marketing-like, to focus on business innovation. The more our teams are able to worry less about the infrastructure, about scaling it up, and about doing so automatically and reliably, the more we are able to focus on the specific problems that our industry and our customers are facing.

We did something at South by Southwest with a customer of ours, United Airlines, where if you're watching a movie and you are interested in a location, you can pause and check details about where it was filmed and look at travel options to get there. But what I'm even prouder of is something our team did just last month. It's a good example of how you can free up your teams to do more creative, innovative thinking. It's a small team that essentially came together.
I come from Boston, where the weather is slightly different from here; let's put it that way. We were going through some pretty rough weather days, and we said, there must be a way we can give better tools to our travel agency customers. So we worked with a startup in the Boston area, and literally over a period of a few days, we were able to ideate and come up with a solution that is going to be good for our customers and good for our business. But most importantly, it'll be good for all of you as travelers. So let's take a quick look at that, and after that, we'll do questions. Thank you. Thank you again. By the way, just an interesting aside: that video was also made by three of our engineers, not by an outside production house. Engineers can be very multi-talented. Thank you. Questions?