Well, hello and welcome again to another OpenShift Commons briefing. We're fresh off the OpenShift Commons gathering in Berlin, where I had the great pleasure of finally meeting John Terpstra, today's speaker, in person. And we've kind of renamed this topic from unraveling to wrapping cloud-native services. I'm going to let John introduce himself and talk. This is a higher-level conversation than usual, but I think you'll really find it interesting to get the Dell EMC perspective on cloud and cloud-native services. So I'm going to let you take it away, John. And if you have questions, ask them in the chat and there'll be Q&A at the end. So go ahead, John.

Hey, Diane, thank you very much. It was really a pleasure and a privilege to meet you in Berlin last week. I'm the director of cloud architecture and software engineering at Dell EMC. We have a couple of teams: one that works on the Hadoop big data and analytics area, and the other that works on the open source OpenStack platform environment. I'm part of that team. I manage the quality assurance team, and I have a software engineering group who work on all of the open source add-ons to the OpenStack platform. The OpenStack platform that we do most of our work on, of course, is the Red Hat OpenStack Platform, and we layer OpenShift on top of it. Now, my objective today is to talk about the importance of getting the right message across to management wherever you are doing work on OpenShift and you want to get a commitment to your OpenShift and containerization initiatives. There's a lot of confusion in the enterprise space regarding what containers are all about, what virtual machines are all about, whether we're already doing enough, and what it all means. So getting the message across of what it is that we are really trying to do is crucial, and I want to just cover some groundwork there.
An elevator opportunity is an unexpected, unplanned engagement or contact with someone who is in a position that's rather important to you, like someone who might approve a budget or a purchase. We, as professionals in the IT industry, really need to be ready for those moments, so that for the exciting projects we are working on, like the deployment of OpenShift containerized services, we are ready to present a really succinct case. And that case should be a case for digital transformation, because digital transformation is important to the future of the business. We need to be able to garner interest in that very brief encounter. Typically these encounters last for between 10 and 30 seconds, so we need to be careful with how we use that time. And our objective should be to seek commitment to the key things that we are trying to do for the business. We are there as IT professionals to help drive the business forward. So to set the stage for the digital trends and changes that the company is going through, we have an opportunity to point out that technology is key in its impact on the business because of the emergence of the cloud, the emergence of the internet of things, and in particular the necessity for analytics as a key driver of the business. That analytics framework is a framework that we are helping to implement and helping to deliver. Digital transformation is necessary to take monolithic, legacy, traditional-type applications and drive them forward, and we'll cover why in this brief discussion. And innovation is disrupting the way that we used to do things. Additionally, consumers are increasingly using social media, and social media and social connectivity are shortening decision cycles and spreading the news a lot faster about the opportunities that are available and the things that people are interested in.
So what that means is that the timeframe from the moment we find out about something to the moment we buy is being shortened. Additionally, there are changes taking place in the employment marketplace, with more people working from home and a much greater awareness of the necessity to establish an appropriate work-life balance. And on top of that, we are seeing a tremendous agility in the workplace, and therefore we need to understand what some of the key software application engineering challenges are. Something happened between 2005 and 2010: the web and internet technology, and the principles associated with them, collided with the enterprise technology that came before. What we see now is the commoditization of software and the commoditization of the infrastructure systems that we use to deploy that software. Processors, memory, and storage are all faster and cheaper. Software is getting cheaper and more disposable, and more applications are being used and developed than ever before. The cloud gives us the opportunity for dynamic scale-out; software as a service and all of its variations are here. And this directly impacts the development and deployment cycle, with far more choices in technology and far more architectural choices that need to be made. Doing software the old way, with monolithic applications, is simply unsustainable. Large monolithic systems have a long planning cycle, a long development cycle, a massive QA requirement, a lot of feedback, a lot of people involved in the chain. It just costs too much and it takes too long. And with today's dynamic demand cycles, often the software development cycle for the monolithic technologies is so long that we'll simply miss the opportunity in the marketplace. What we want to do is to help our businesses capture short-waveform demand cycles that can help propel our business forward.
So therefore we need to change the way we do software development, design, implementation, and deployment. The development process is changing. The silos are disappearing. We're rebalancing the whole development and deployment ecosystem, making use of cloud technologies and of containerization as a development and deployment paradigm in order to get stuff out there fast. Because the faster we can get the applications out there, the faster our companies that are investing in this technology, or at least we hope they will, can start to gather a return on that investment. Additionally, our infrastructure is a given today. We don't want to be bogged down by infrastructure. It just has to be there and it just has to happen. And our applications need to be capable of being deployed anywhere at any time. Additionally, hardware utility life cycles are shrinking, and therefore we need to get hardware deployed quickly. We need to get our software deployed quickly so that we can get the return on that hardware investment. So the key to balancing out the demand-to-opportunity spectrum is through microservice design, which focuses on replaceable components, where any component can be replaced at any time, and we need very quick releases. So speed is of the essence. We need to cut cost and be much faster at getting our applications out. What I want to do now is just focus for a few moments on the different types of deployment mechanisms. The traditional compute technology that came out of the 1950s and 1960s used bare metal deployment, and that bare metal was in big iron technologies. But with the 1980s we saw the emergence of personal computers, which made it far more cost effective to deploy applications on cheaper systems. Even then, we ran into barriers. So we saw in the 1990s to 2000s the deployment of virtualization technologies, and now in the last few years we're seeing containerization.
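As a side note, the "replaceable components" idea behind microservice design can be sketched in a few lines of Python. This is a minimal illustration, with all names hypothetical: callers depend only on a stable service name, so the implementation behind it can be swapped at any time without touching the callers.

```python
from typing import Callable, Dict

# Tiny service registry: stable service name -> current implementation.
# (An illustrative stand-in for a real service catalog or API gateway.)
_registry: Dict[str, Callable[[str], str]] = {}

def register(name: str, impl: Callable[[str], str]) -> None:
    """Install, or replace, the implementation behind a stable name."""
    _registry[name] = impl

def call(name: str, payload: str) -> str:
    """Callers know only the name, never the implementation."""
    return _registry[name](payload)

# Version 1 of a hypothetical "greeter" component.
register("greeter", lambda who: f"hello, {who}")
print(call("greeter", "world"))   # -> hello, world

# Replace the component at any time; no caller changes required.
register("greeter", lambda who: f"hi there, {who}!")
print(call("greeter", "world"))   # -> hi there, world!
```

The same contract-first discipline is what versioned APIs enforce at network scale.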
So how do these different methods of deployment impact the way that our software works? Well, for starters, CPU and memory usage for bare metal deployment was limited by what the workload could consume. With virtual machine deployment, we could get far better utilization of processor and memory capabilities by simply launching more virtual machines on the same system. Today we can get even better utilization through the use of containers, because containers do not have the same resource overheads that virtual machines have. And there's another method, which is what we're doing with OpenShift on OpenStack: deploying containers on virtual machines, which allows us to balance out some of the other factors that we want to look after. On top of that, let's look at execution overhead. Bare metal has no execution overhead. Virtual machines can have a significant execution overhead because of all of the other infrastructure that's necessary to support them. Containers are more efficient; they have a far lower overhead. However, when we start to deploy containers on virtual machines, we have to be cognizant of the fact that the virtual machine overhead persists. Consider the impact of unit outage: if a bare metal machine goes down, that machine is gone. The same happens if that bare metal machine happens to be carrying a whole lot of virtual machines: all its virtual machines disappear. But if just a virtual machine goes down, then only one of the workloads running on the system is impacted. Likewise with containers on bare metal. However, when we lose a virtual machine that's running a whole bunch of pods, then we lose the whole shebang. Again, we need to be cognizant of what our risk of unit outage is. From an IO perspective, bare metal, of course, is limited by whatever the devices are.
However, the total input-output bandwidth of a physical machine is shared by the virtual machines running on it. With containers, we allocate the resource across the pods that we are running on the system, and so forth. From compute efficiency considerations, bare metal, of course, has the highest compute efficiency if the workload can consume the total compute resource that's on the system. Virtual machines, however, limit the compute capacity that can be consumed by a virtual machine to the NUMA architecture and the number of cores that are allocated to that virtual machine. And in the case of containers, we tend to be thread limited. Security risk is something you want to consider as well in deploying applications. You want to make sure that your applications are in fact secure, and there are differences in the security risk depending upon which mode of deployment is used. And of course, you can see the latency and quality of service comparison there as well. Comparing bare metal with virtualization, there are obvious benefits in virtualization. Capital costs get distributed across, or get consumed by, multiple workloads, so there is a greater efficiency in the use of the hardware. Also, virtualization minimizes downtime, because virtual machines can be moved across hosts without loss of continuity of service. And virtual machines also increase productivity, efficiency, agility, and responsiveness. Virtual machines are faster to provision than a bare metal machine, and therefore they also provide better business continuity. There are disadvantages to virtualization. One of the most common ones listed, of course, is the cost of the hypervisor, or the hypervisor tax. There are performance limits, as we saw in the previous slide.
And of course, the infrastructure for virtualization tends to be more costly, because for virtualization support we tend to invest far more in the microprocessor and memory so that we have more resource to share with virtual machines. There are compatibility considerations in deploying virtual machines, particularly when it comes to bringing forward old, traditional monolithic applications. There are potentially retraining costs, and there is a complexity of deployment that often is overlooked; we see a lot of that in server sprawl across data centers and across organizations. And today's objective is to cut the cost of managing the IT infrastructure and the IT services provided by and within the firm, and therefore cutting server sprawl is one key area of interest in many organizations. And the last factor to consider is that of the impact of system failures. Now, so far as resource layers are concerned, you're all aware of the fact that virtual machines and containers are simply different ways of sharing the underlying infrastructure of the system. On the left, we see a full virtualization system, where a hypervisor acts as the bare metal operating system, and guest machines and guest applications are installed on that hypervisor. On the right-hand side, we see the elimination of the hypervisor and its replacement with a container management system, and we can launch multiple containers on the system without the overhead of the resource allocation per virtual machine. When we consider bare metal containers versus containers in virtual machines, we get to a configuration where we would run a hypervisor, run virtual machines, run a container manager within the guest OS of each virtual machine, and then run multiple applications within that. So containers versus virtual machines have benefits: containers spin up much more rapidly. I'll show you some statistics on that in just a few moments.
Containers maintain service integrity of the application. That's a decided advantage. And container storage footprints tend to be much smaller than virtual machine footprints. Containers tend to be immutable due to the layering; only the upper layer is able to be changed. But then again, we don't want to change containers; we want to build containerized services as disposable services. Hardware dependencies are contained, so that GPU architecture dependencies are automatically resolved as the layered containers get deployed. Now, so far as launch time is concerned, it's always important to compare launch time on the actual hardware and infrastructure that you'll be using. But our virtual machines can take 30 to 50 seconds to deploy, and take time to stop. Launching VMs on OpenStack can take a little longer to kick off, because of the dynamic nature and the increased number of machines being managed from a central source. Containers, on the other hand, launch very rapidly. That gives us a very dynamic capability to scale with demand. Containers eliminate the OS duplication of the virtual machine. They require less memory to operate and fewer CPU cycles, and typically allow us to deploy two to three times more applications on a single system. So there are significant resource utilization benefits to containerization. They provide a consistent, portable development environment, because you can even build your development environment on a laptop and then later migrate the containerized environment to your corporate test, dev, and deployment environment. Because of the faster startup and shutdown, they're more orchestration friendly. Containers also provide separation of namespaces for processes, network, storage mounts, host IDs, memory, and everything else. On the other hand, containers have their own liabilities.
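The layering just described is easiest to see in a container build file. The following Dockerfile is purely illustrative (the base image tag and file names are hypothetical): each instruction yields one immutable layer, and only the topmost layers change from build to build.

```dockerfile
# Illustrative sketch; image tag and file names are hypothetical.

# Base layer: shared and cached, never modified in place.
FROM registry.example.com/rhel-python:latest

WORKDIR /opt/app

# Dependency layer: rebuilt only when requirements.txt changes.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application layer: the only part that churns from release to release.
COPY . .

CMD ["python", "app.py"]
```

Because the lower layers are immutable and cached, a change to the application rebuilds and ships only the final layers, which is part of why container footprints and launch times stay small.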
Any intrusion upon the host OS that is hosting containers can potentially result in the whole shebang being brought down. There's also a risk of Trojan horses, with many containers being published in an uncontrolled environment. And the last one to watch out for is that excessive container packing builds containers that function more like virtual machines. OpenShift is a tremendous platform for providing containerized services, or containers as a service and platform as a service, because it enforces single responsibility for each of the services offered through the catalog and isolation of APIs, and provides resilience through the modular architecture, launch location independence, stateless modules, and a very low deployment overhead. Above all, the OpenShift platform is very automation friendly for continuous integration and continuous deployment. So we have a digital transformation opportunity, in that 71% of customers agree that if they do not embrace IT transformation, their business or organization will no longer be competitive in the marketplace that they serve. The business agenda today is technology, because technology provides competitive advantage, and the business should provide an incremental IT budget to sustain that competitive advantage. Software DNA is critical to the future of the business. There are four Rs in the dynamics of change: retire, retain, repurpose, and re-engineer. We should retire high-liability code as soon as we can, or retain legacy code if it is suitable for migration to new processes and to our new delivery infrastructure. And we need to pay down technical debt by repurposing applications that we have. But the best opportunity of all is to re-engineer from the ground up and to treat the IT infrastructure that we are leveraging as the platform for a digital production house.
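The single-responsibility, stateless, low-overhead service model described here is usually expressed as a declarative manifest. The sketch below is hypothetical, not from the talk: a minimal Kubernetes-style Deployment of the kind OpenShift manages, where statelessness lets the platform run identical replicas and route around failures.

```yaml
# Hypothetical sketch; all names, images, and ports are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote-service            # one service, one responsibility
spec:
  replicas: 3                    # stateless: any replica can serve any request
  selector:
    matchLabels:
      app: quote-service
  template:
    metadata:
      labels:
        app: quote-service
    spec:
      containers:
      - name: quote-service
        image: registry.example.com/quote-service:v1.2.0  # versioned, disposable image
        ports:
        - containerPort: 8080
        readinessProbe:          # platform routes traffic away from failing pods
          httpGet:
            path: /healthz
            port: 8080
```

Because everything is declared, manifests like this slot directly into continuous integration and continuous deployment pipelines, which is what makes the platform so automation friendly.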
Native code microservices, or cloud-native microservices, enable more flexible, higher-performing, agile code at the cost of some increase in complexity. That complexity can be squashed by automation, however. An intelligent transformation is necessary. We need to recognize that monolithic code carries a liability, because a lot more time needs to be spent on integration and testing than on delivering new functionality. Microservice code, also known as cloud-native code, allows us to spend less time in development and gain faster deployment of new capability. We should always be mindful of the minimum viable principle: make changes small, release often, release early, and get prompt feedback. We must consistently use versioned APIs to permit modular expansion of our functionality and to isolate service dependencies. And we should also make use of automated detection of execution back pressure, so that we can gracefully recover from slow, failing, or failed services. And there's a tremendous opportunity for us to use machine learning to debug our environment, to identify what is happening in our application logs as a means of identifying those areas where we may improve the technology that we are building. So remember: OpenShift allows us to create faster dev-to-deployment cycles, which we should use to competitive advantage. New applications and smart devices are transforming the business. Agile development with continuous delivery accelerates our time to market, and data analytics provides the wherewithal to gain new insights into the areas where we might improve and how we might in fact get better traction for our applications in the marketplace. And thank you for listening.

Awesome. There's an echo there. Maybe it's from your speakers, but I'm not sure. I'm going to make you do a keynote at one of the upcoming gatherings, because this is a wonderful way to really intro the whole aspect of digital transformation and some of the benefits.
I actually really appreciated seeing so much talk around the bare metal deployments as well, because I think that's one thing that I'm actually seeing, besides people deploying locally using OpenShift and Minishift. I put in the chat the link to Minishift, which is our fork of Minikube for deploying OpenShift locally. But we're also seeing, and I shared the link to this as well, people deploying OpenShift on bare metal. And I think that's actually something that people are coming around to. I mean, we've got lots of people who deploy on AWS or on their own in-house hardware that's got OpenStack on it. But I think there's an uptick in people wanting the efficiencies of bare metal and starting to use it a lot more. Are you seeing that as well?

We are getting requests for bare metal containerization. Right now, most of the requests we are seeing are, can you arm us with more information? Can you tell us what this is all about? That's the thin edge of the wedge, isn't it? That's just the beginning of an exciting time ahead.

Yeah, I think the interesting thing is, those of us who are deep in the throes of the technology often forget that there are a lot of people out there who are just coming new to containers, who have adopted VMs and are using them widely. But they still need to get the intro bits and the high-level understanding of what the benefits are. And meanwhile, we're trying to drive service catalogs, SIGs in Kubernetes, and all kinds of new features. And they're just like, whoa, whoa, whoa, wait a minute. We need to be able to figure out, like you said, the elevator pitch just to get this stuff internalized and socialized inside of enterprises. And I think you've done a very nice job of that. So I look forward to doing this again, and maybe getting Manisha to do a future demonstration of some of the work that you guys have been doing inside of Dell. In the past, Jed Naitland had done a great job creating a reference architecture for OpenShift on OpenStack.
And I know there's a set of internal documents somewhere at Dell still with all the details of doing that. So there's already been some great work done, and I'm looking forward to doing a lot more. There's a bunch of folks in the chat, but I'm not seeing too many questions here, because I think you pitched it at a nice level. If there's anyone who has a question, just pop it into chat. Manisha, if you wanted to add anything to this, you're welcome to; I'll unmute you and you can introduce yourself. But if you could, John, put up your slide with your contact information on it. Perhaps it's your last slide, or your very first slide with your name. I think that's what I forgot to tell you: just put up your email address on something.

I will get that out to you, Diane.

Awesome. So I'm not seeing any questions. I think you've done a great job giving people the basics of creating their own elevator pitches for becoming cloud native. And it's nice to see the retire, retain, repurpose, re-engineer, the four Rs you had. We often inside of OpenShift talk about three patterns: lift and shift, augment your reality, and I forget what the third one was, but there was one other. It was just total rewrites. It's always nice to hear how Dell EMC is messaging out the same sort of digital transformation calls to action. So, again, thanks everybody for joining us today, and we hope we can see you at the upcoming OpenShift Commons gathering again in person. I know, John, you'll be there. That's May 1st, the day before Summit opens. I was going to say OpenShift Summit. No, it's Red Hat Summit. There is a lot of OpenShift on the menu at that one, which is coming up in Boston. The gathering is on May 1st, and Red Hat Summit is the second, third, and fourth of May. So look for that. And the following week, if you're staying in Boston, is OpenStack Summit as well. So there'll be a lot of us hanging around for a couple of weeks in Boston.
So lots of good stuff coming up and we look forward to more conversations. Thank you. All right, take care.