I'll just start by introducing myself. My name is Chris Wright and I'm Vice President and Chief Technologist at Red Hat, and I'm here to talk about digital transformation, although I always struggle with that term. Really it's about the trends in the industry, the challenges that go along with writing modern applications and developing the code that runs businesses today, and the technology trends that are helping us address those challenges. So that's my goal here. Just to start off: who here is an OpenShift user? That's amazing. For those who aren't looking the direction I'm looking, that was basically everybody. And in terms of development, who here is working on development of OpenShift or related underlying technologies? A significant quarter of the room. So we have a lot of developers and users here. As a user of OpenShift, how about we get a sense of how long you've been using it? More than a year, more than two years, six years? All right. So a lot of people are familiar with the platform going back about two years, and before that it starts to thin out, which makes a lot of sense in terms of where we are in the industry. So let's talk about next generation applications. The success of a next generation application is going to be its ability to take advantage of a highly dynamic underlying infrastructure, or, better put, to program the underlying infrastructure to reflect the highly dynamic nature of today's computing environment. You've got large numbers of users, you've got users coming in through different interfaces, and we're trying to move quickly as businesses adapt and create value for customers while not getting lost in this sea of technology churn.
So I think there are three things happening today in the industry that put us in a very unique position in history. The first is application architecture. The software architecture we historically built our applications from was the traditional monolithic application. You've got your big behemoth, your enterprise Java app or whatever you've been building. We're all familiar with those; we've struggled with those. There is some simplicity to a single large code base in terms of how you manage it, and it's just one thing to operate. But it's definitely cumbersome to scale, making changes to it can be complex, and you end up with brittle systems. So you saw a shift from monolithic to multi-tiered applications for scaling, and today the buzzword du jour is microservices. The interesting thing about microservices is really the notion of doing one thing and doing it well. You hear the term bounded context, but it really harkens back to the original days of Unix, and it allows focus and separation of concerns: you can concentrate on solving a single problem. At the same time, we have a process for how we've been building and delivering these applications. Historically we had the tried and true waterfall model, with varying degrees of success, and even today you still see it as a big part of how we develop software, even in communities. Communities like Kubernetes still ship discrete releases, as opposed to a pure every-commit-goes-to-production CI/CD model. But that development process has evolved from traditional waterfall into something that's trying to move much more rapidly.
And it's about enabling developers to move quickly, and understanding that code changes in small incremental steps are more understandable; you can actually mitigate some of the risk associated with change if you're preparing yourself for constant change. Then there's the platform that you run these applications on. Historically we had vertically integrated stacks, the RISC Unix world. Then we moved to hosted environments, where you still have a server, and that server might be x86 and Linux, but it's certainly not programmatically accessible. And today we have cloud, and with cloud you actually have APIs that give you the automation necessary to create for yourself a dynamic environment that allows you to adjust and adapt to current needs. We're getting closer; we've never been closer. And one of the things that ties all these things together is containers. A container creates a lightweight footprint for running your services, your microservices. It gives you the ability to build and deliver an image as part of a pipeline, so in your DevOps process you can build an image, deploy the image, and leverage those APIs to scale your application according to your requirements and your traffic, on the underlying infrastructure you're running your containers on. These three things come together, the software architecture, the process by which we build our software, and the platforms we run our software on, with containers, to give us a unique opportunity in the industry: to move faster than we've ever been able to move before, to build some common reusable building blocks, and to really change the relationship between developers and the IT operations side of the house.
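That build-deploy-scale loop can be sketched concretely. As a hedged illustration only, here is roughly what it might look like with the OpenShift CLI of that era; the application name and repository are placeholders, not anything from the talk:

```shell
# Build an image from source (app name and repo are hypothetical)
oc new-build nodejs~https://github.com/example/my-service.git --name=my-service

# Deploy the resulting image as an application
oc new-app my-service

# Scale through the platform's API rather than by provisioning servers
oc scale dc/my-service --replicas=5
```

The point isn't the specific commands; it's that build, deploy, and scale all become API calls a pipeline can drive.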
So I think one of the things to reflect on from an industry point of view is where we've come from. We've come from a world where the operations teams and the developer teams are completely separate worlds. And while ultimately they may report up through the same CIO, that may not even be true; you may have lines of business owning developers that are really independent from the IT operations side of the house. And that's worked. It's gotten us to where we are now. It's allowed IT operations teams to focus. It's allowed developers to focus. But it's starting to break down. We have unprecedented scale in terms of the number of users, and users who expect consistency when they come in across a mobile platform, their laptop, and, if it's a retailer, maybe even in the store. So the world is really changing, and this separation is not serving us as well as it may have in the past. So I think the point here is that to be competitive, we need to find a new balance between IT operations and developers. And the opportunity is to think of your application, your application development process, and how you deploy the application onto infrastructure as one holistic concept. Developers have an opportunity to explain, somewhat programmatically, to the operations side of the house how an application should be deployed, how it should be scaled, and what the dependencies are between the different components in the application. This new balance across IT and development is really, I think, going to serve us well, and you're seeing it already with companies doing amazing things, moving quickly and scaling rapidly to their users' needs. So I wanted to talk about some specific challenges on the way to that kind of perfected IT ops and developer collaboration story. And one of the first challenges is how we look at infrastructure.
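To make that programmatic conversation concrete: in a Kubernetes-style platform it might take the shape of a deployment manifest. This is a hedged sketch with made-up names, images, and numbers, not a manifest from any real application:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders                       # hypothetical service name
spec:
  replicas: 3                        # how many copies ops should run
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: registry.example.com/shop/orders:1.4   # placeholder image
        resources:
          requests:
            cpu: 250m                # what this service needs to run
            memory: 256Mi
        readinessProbe:              # when it's safe to send traffic
          httpGet:
            path: /healthz
            port: 8080
        env:
        - name: INVENTORY_URL        # a dependency on another component
          value: http://inventory:8080
```

The developer states how the application should be deployed and scaled and what it depends on; operations runs it from that description.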
Historically, infrastructure has been there to support the applications, yet today infrastructure often ends up being an inhibitor or a limiter on how we build and scale our apps. And why is that? In the past, standing up an infrastructure component like a physical server, racking and stacking it, was a long, time-consuming activity. So we brought virtualization into the data center. It enabled consolidation, and it enabled the first steps toward programmatic interfaces to the infrastructure. But those programmatic interfaces didn't really match the needs of developers. The APIs accessible to the developer for controlling how the application is deployed were an impedance mismatch, so you ended up building an infrastructure platform and still running it under capacity. You end up with unused capacity, you're underutilizing your infrastructure, and you're still not meeting customer requirements in terms of how you scale out. So modern applications really need to take advantage of that underlying infrastructure and scale elastically. You hear the Pokemon Go example being the perfect example for Kubernetes, where within a week of launch the user base was well beyond their wildest dreams in terms of adoption. How do you do that if you're racking and stacking physical gear, or allocating virtual machines and running them as long-lived servers? It just doesn't fit that application development process or those deployment requirements. And this, again, is a space where containers come in. Containers provide a lightweight, simple environment for an application. Think of your Hello World app running in a VM: the Hello World app is a few lines of code, and there are millions of lines of code in that virtual machine supporting those few lines of code to run your Hello World app.
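That kind of elastic scaling is typically expressed declaratively rather than by racking gear. As an illustrative sketch (names and thresholds invented), a Kubernetes Horizontal Pod Autoscaler lets the platform grow and shrink a service with its traffic:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: game-frontend              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: game-frontend
  minReplicas: 3                   # steady-state floor
  maxReplicas: 100                 # headroom for a Pokemon Go moment
  targetCPUUtilizationPercentage: 70
```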
And while personally I've spent time on technology that lets you dedupe the memory consumed across those different virtual machines, maybe that's not the best use of our time; we could instead capture just the essence of the application and its direct dependencies and use that as the fundamental building block. I think containers give us a unique opportunity to express the application's requirements to the infrastructure. We've been trying to do this for years, trying to figure out how to tell somebody how to run this thing, and we're finally building the tools that allow developers and IT ops to communicate programmatically, to express exactly how an application should be deployed. The other thing containers give us is a level of consistency. Think back in time: we've been trying to figure out how to reuse code forever. Initially we had object-oriented programming, this notion that you'd create some class, that class was perfectly abstracted, and it would be reusable inside your application and in other applications. We've had some level of success there. I've worked on projects where object-oriented design gave us the ability to reuse classes, but usually it breaks down pretty quickly, especially as you start to use a class in unexpected ways, in applications that weren't its primary concern. This notion of reusable building blocks, something like a container image, creates consistency around the services within an application, whether it's a specific runtime, say a Ruby runtime or a Java runtime, where you might have experts focused on optimizing that particular component in the application stack, or something like Cassandra, which might be challenging to set up and manage.
We can take the experts who understand those components, apply their skills to building the images, and then reuse the images. Especially with configuration, where you've got a stateless image and the ability to inject configuration externally, you can adapt those reusable building blocks to your application-specific requirements in a way that's really unique in today's timescale. So again, containers to me are an opportunity, and a container platform is an opportunity, to connect developers to operations teams. I like to think of OpenShift in some ways as a communication medium between developers and operations. Instead of living in two entirely separate worlds, where you throw things over the fence and it's "good luck, hope you can run this thing," we're communicating directly, even programmatically, through a container platform. And I think the other piece here is that containers and microservices, to me, go hand in hand. Containers give you the opportunity to capture just an immediate service and its dependencies. Whether microservices are new, or just a re-implementation of service-oriented architecture, is up for debate; we've got beer and sausage at the end of the day, and it's great fodder for beer talk. But the notion that we're building aggregate applications out of a collection of services is really powerful. It allows us to move quickly, and it allows us to run with independent teams on independent life cycles, even using different underlying technologies and different application stacks. This is where we're seeing the real power and the value of containers and container orchestration.
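The stateless-image-plus-injected-configuration idea can be sketched as follows; everything here, names and settings alike, is a hypothetical illustration of the pattern, not a recommended Cassandra setup:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cassandra-config            # hypothetical names throughout
data:
  CASSANDRA_CLUSTER_NAME: orders-cluster
  CASSANDRA_SEEDS: cassandra-0.cassandra.default.svc
---
apiVersion: v1
kind: Pod
metadata:
  name: cassandra
spec:
  containers:
  - name: cassandra
    image: cassandra:3.11           # one expert-built image, reused everywhere
    envFrom:
    - configMapRef:
        name: cassandra-config      # app-specific settings injected at deploy time
```

The image stays generic; the expertise is baked in once, and each team adapts it with configuration rather than rebuilding it.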
Another struggle is on the operations side: operations teams are trying to keep up with this crazed pace of development. Developers have more and more tools at their fingertips, and the operations teams are being overwhelmed. How do I manage all these different apps? How do I manage scaling? If you imagine a single monolith, it's relatively easy to start and stop the thing. Now you disaggregate, decompose that application into a hundred services, and you have a hundred things to start and stop, and they're scaling independently of one another. If you were doing that with your old techniques, this is a recipe for failure. Where you once had a single application and now have a hundred, you're creating additional complexity for yourself, and that complexity is managed through consistency, through standardization, through APIs. You're starting and stopping things with primitives that are consistent regardless of the content of a container, and these are the building blocks and tools that help our operations teams work more efficiently. And I think one of the important things to look at is where we're deploying modern applications. A modern application spans some combination of physical infrastructure, because you probably still have a back-end database or some historically critical transaction processing system, maybe even running on a mainframe; a virtualized part of your infrastructure, whether that's an open source virtualization platform or something else; some notion of cloud internal to your organization, a private cloud; and then public cloud. Your applications somehow span all of these things, and these different runtime environments create additional complexity and concerns for the operations team, especially when they're thinking about
compliance: not only how do I run this, but how do I make sure I'm not violating some critical business requirement. You might think you can ignore all of that: I'm not going to run on physical servers, I've left virtualization in the dust, private clouds aren't for me, I'm all in on the public cloud. We talk to customers who have that motivation, and even in that context, most of the customers we talk to still look for multiple service providers in the cloud. So even if you're limiting yourself to just a public cloud footprint, you're still spanning multiple different public clouds, and from an operations point of view you're creating challenges for yourself in understanding what's common and what's unique across these different runtime environments. So I would assert that the complexity of all these different runtime environments is here to stay, whether it's just multiple public clouds or all the different footprints: physical, virtual, private, public. That complexity is real, and one of the things that helps you manage it is standardization, having a common platform. And from my perspective, that common platform is Linux. Linux is tried and true; we're familiar with it, we understand it, and we have many applications that run on it. I think the enthusiasm I see around container platforms is that we've found a way to stretch Linux beyond the single-server world. Linux created an abstraction between your physical infrastructure and your application, so you could have your Dell, HP, IBM, whatever x86 hardware underneath, and we got away from that vertically integrated RISC Unix world. Now we're taking that abstraction and that concept and stretching it across a data center, potentially across multiple data centers, public and private, and the core abstraction is Linux. The
applications are running on top of Linux, and they're being managed through some container orchestration platform, so I think it's really important not to forget that Linux is a critical part of this story. So the third piece is scaling the developers in your organization. The pace of development is increasing, you're potentially adding developers to your organization, and how do you really scale effectively, especially in a world where you're potentially creating multiple teams solving similar problems? As you decompose your application into a bunch of services and you've got the proverbial two-pizza teams, there's probably going to be some overlap of functionality across those teams. Ultimately, if you look at what we're building, we're building applications, maybe in a different way, but following some really consistent patterns that we have a lot of experience with. There are services in your application, the services are connected through messaging, you've maybe got an analytics component, a clustering component, maybe some batch processing. These are all things we've got a lot of experience with in the industry, and the developer's consumption of these different portions of an application is potentially getting in the way. The developer really just wants to focus on writing code, not necessarily on understanding the details of how to set up the messaging infrastructure or how to set up clustering. I think clustering is an interesting example; we've talked about this one before in the context of OpenShift. Across those multiple public, private, and virtualized footprints where we could run our applications, consider something like JGroups, a primitive used in something like Infinispan, the distributed key-value store, where clustering is at the core. That
clustering historically required multicast, and multicast may or may not be available in something like a public cloud. As a developer, needing to understand where multicast is available and where it's not is just going to slow you down; it's an impediment to your daily activities. Having an operations team that really understands these different environments, and understands when you might need something like an OpenShift ping protocol for JGroups instead of the multicast clustering solution, or just using DNS registration to register the different service components that are currently available: these are real technology choices that matter on each of the platforms you run on, and that we're trying to sweep under the rug so that developers aren't faced with the low-level details and can really focus on building the application. To me, this is where the platform, OpenShift, plays a prominent role. It helps abstract away the underlying implementations of these different environments from what the developer is doing, which is really writing code, pushing, and deploying. Giving developers access to easy-to-consume interfaces, and allowing area experts to focus on their areas of responsibility, is what's helping us all move faster and what helps scale your development teams: you've got a few area experts who can work in a consultative manner across many development teams. So I think the summary, here at OpenShift Commons, is that the foundation for what I would call continuous innovation is this new platform. It's a combination of Linux, containers, and container orchestration; in our world that looks like OpenShift, with Kubernetes and Docker containers at the core. It's exposing Linux as a runtime
environment for applications; it's Linux at the bottom of the stack for operations teams; and it's creating this communication mechanism, this programmatic way that developers can inform the operations teams what their application deployment needs to look like, and operations teams can deploy and help manage that infrastructure, allowing area experts to focus on really improving efficiency overall for developers and IT ops. And I think that's it. I'll stop there. Thank you.
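To close the loop on the JGroups clustering example from earlier: the JBoss EAP images on OpenShift have exposed environment variables along these lines so that JGroups discovery works without multicast. Treat the specifics below as an illustrative fragment with placeholder names, not a definitive configuration:

```yaml
containers:
- name: eap-app
  image: registry.example.com/shop/eap-app:latest   # hypothetical image
  env:
  - name: JGROUPS_PING_PROTOCOL
    value: openshift.DNS_PING            # DNS-based discovery instead of multicast
  - name: OPENSHIFT_DNS_PING_SERVICE_NAME
    value: eap-app-ping                  # headless service listing cluster members
  - name: OPENSHIFT_DNS_PING_SERVICE_PORT
    value: "8888"
```

This is exactly the kind of per-environment detail an operations team can set once in the platform, so the developer's code never has to know which footprint it's running on.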