Okay. Good afternoon, everyone. How's everyone doing? Everybody had your lunch? I hope I don't put you to sleep after a good lunch. So, we're here to talk about Red Hat Consulting's Cloud Migration Solution: how to efficiently migrate workloads from legacy to cloud. I'm Vijay Balu. I run the OpenStack, virtualization, and containers practice at Red Hat Consulting. Why do you need to migrate to the cloud? That's a question you've probably been hearing all along, right from the keynotes this morning to almost every session you go to. Some of the common things your business users ask of traditional IT are speed, speed to deliver products to end users, at a reduced cost and with seamless management across your infrastructures. Like you heard this morning at the keynote, every new product and every new shiny object that comes around comes with its own baggage of monitoring, management, and tooling. As IT and IT operations, you probably have a lot of tools to work with. So the goal is to see how you can efficiently manage your mode one applications and, at the same time, enable migration of those mode one environments to the cloud. What is the opportunity here? It is basically looking to the future and reducing the TCO. If you have any VMware folks in the room, I'm sorry: the whole solution we put together is meant to reduce the VMware footprint and reduce the total cost of ownership for customers. How do we go about doing it? One of the primary things you need is to standardize and automate the provisioning of your new infrastructures and new workloads. You need to maintain the stability of the platform while at the same time accelerating your innovation.
So, you probably want to keep your legacy mode one applications intact while at the same time working on your next-generation mode two applications. You want to reduce your dependency on a single vendor and eliminate vendor lock-in, so that you can work with the community and the broader teams to innovate faster. What does Red Hat's cloud migration solution look like? The goal we bring to bear is to work with customers to eliminate vendor dependencies and mitigate risk. One of the primary questions you probably have is when your security and compliance team comes to you and asks: when you talk about moving to the cloud, how are you making sure you're compliant with the existing requirements, and how do you make sure it's secure? We help customers reduce the TCO by working with open source solutions and with the open community. The goal is to accelerate provisioning. You've probably been hearing all day long that there are customers who still take 40 to 50 days to provision a single VM or a single server. So the goal is to accelerate provisioning so that you can be much more agile in standing up new infrastructures or deploying onto existing infrastructures. You want to be able to deliver services faster. In a world of constantly changing requirements, changing applications, and new business processes, you want to be able to build services faster and make them available to your end users. You also want to streamline your processes and remove the ad hoc one-offs. You want to make sure there's less manual overhead when you're building a process around migrating workloads. At the end, you migrate the workloads from legacy to cloud. So if you look at it, the actual migration is probably a tenth of the whole process. How do you effectively migrate workloads?
You probably start off with a proper discovery session. We want to review what your current state is and what your workloads look like. You then want to design your minimum viable solution to migrate those workloads, and then implement at large scale based on the size of the environment. Migration success is more than just moving a VM. You can probably move a VM from a traditional virtualization environment like VMware over to OpenStack, but that alone won't give you everything you need; you're just moving what is available in the source to the target. You need to understand what your current state requirements are and what your future state is, and iteratively work toward that future state. You want to map your applications to the migration and build migration patterns. You probably want to classify your workloads: some map to what we call cloud washing, which is basically taking an existing workload and moving it to a cheaper virtualization platform without changing how it works. Others you want to be cloud optimized. Basically, you want to figure out which workloads can go to a more scale-out virtualization platform like OpenStack, which should stay on a more traditional virtualization platform, and which should migrate to a cloud-native platform, containers, which in this case would probably be Red Hat's OpenShift platform. You need a platform that can orchestrate the migration. When you migrate a VM, your application is not running on a single VM. It is running on a bunch of VMs, a bunch of environments; your application is made up of maybe 15, 20, or 100 VMs together. That goes along with your firewall rules, your CMDB integration, everything else.
So you need to understand how to classify those workloads, bring them together, and migrate them holistically, because nine out of 10 customers we talk to don't actually have that mapping available. It's sitting in the head of some developer who no longer works for the company, or it was a subcontractor who came in, deployed the application, and is long gone. So how do you migrate applications together? You also need to design an infrastructure that is capable of migrating at scale. You can probably move workloads one VM at a time manually, but you really want to migrate hundreds of VMs at the same time. You need a process that can manage that and scale out. You also want to manage and enable your people and processes to effectively handle this change. As you migrate workloads, a lot of people are going to be impacted: the application teams who will be working with the new platforms, the operations teams who will be managing the new platforms, and the developers who are building the orchestration of workloads moving from one virtualization platform to the other. So we want to build a team that can work with the migration tooling. Our cloud migration philosophy is three-pronged: we focus on people, process, and technology. With people, what it means is we don't want IT to go and migrate workloads in a silo without the end users being involved. So we want to drive collaboration across the organization. We bring in the right people to drive the change within the organization and mentor the resources, iteratively, to develop internal capabilities. We engage in building a repeatable migration factory approach, so that you build once and reuse the migration factory to migrate workloads over and over.
We establish a continuous improvement atmosphere to make sure you can evolve these workloads and these migration processes as you learn more. The problem you run into is that you can probably migrate 80 percent of workloads without too many changes, but it's the last 20 percent that takes you the most time. So you want a process that can be iterative, where you can add changes as you learn about the 20 percent that doesn't follow the norm. On the technology side, we have unique migration tooling and templates to help organize, automate, and orchestrate workload migrations from your legacy virtualization platforms to the cloud. We leverage open source solutions to maximize cost savings and eliminate lock-in. How do you make sure you can bring the people together and collaborate? The other approach Red Hat takes is that we build a migration PMO, where we work with our internal Red Hat stakeholders and the customer stakeholders to build a combined team to manage and mitigate risk and secure the commitment of the different teams. So this is not the situation where the security team comes in at the very end and says, hey, I was not involved in the migration, what did you do? We want to bring all the teams together ahead of time to make sure the migrations are successful. We at Red Hat have developed expertise in this migration tooling to make sure we can help customers migrate massive workloads. What makes up the people side? If you look at this, almost every enterprise will have a list of all of these roles.
The goal is to bring all of these people together: the business owner, who understands why the migration is required and how the business strategy aligns with the migration goals; an architect, who can design the architecture for the customer and help move the business forward. You need to bring the operations and development teams together, and as part of this whole DevOps mindset, you want to make sure development and operations work in tandem as part of the migration process. In environments that already have automated testing, you can drop the testing routines into the workflow to make sure the migration is validated automatically. If you don't have automated tooling in place, then you need to line up your testers to make sure they test the migration. You also bring in the security and compliance teams early, so that you always take a security-first approach and make sure whatever you're migrating to is secure and compliant with the enterprise requirements. Then at the end, you also bring in the end user, to make sure the user experience of the workload doesn't change when it migrates from legacy to cloud. We follow the agile and scrum methodology to migrate workloads. I'll skip this, because you've probably already been bored hearing the word agile all day long, and I'm sure you'll be hearing it for the next five days. So what does the process look like? It is a process that was put together to mass migrate thousands of VMs. It's been tested, it's been proven. It is a holistic approach covering management, automation, and orchestration. It's not just one piece; it comes as a full gamut, end to end. It involves application analysis of the migration workloads: understanding what's in the workloads, how to plan the migration, and what the target is.
The migration factory is a standardized set of tooling and templates, pre-built by the consulting team, that can come in and help with customer migrations. So let's now go through what the process looks like. Pre-migration: before you actually migrate workloads, one of the most important things is to understand what those workloads are. You can probably go to your CMDB and find out what's running on the workloads. You can probably go to your developer and ask how it was installed. But nothing is better than the actual running instance to tell you what's running there. So we have unique management solutions that can go into those running workloads and gather information about what is running on them. Then we engage the application owners to classify and tag those workloads and build a validation plan. Once you build a validation plan, you want to create the required networking and all of the quotas that are required, whether your point of arrival, the target, is going to be OpenStack or something else. Then we vMotion the VMs to a shared storage environment, and we tag the VMs for migration. And how do we do it? The automated discovery in the management tooling enables us to gather the characteristics of what's running within those VMs. Once we know what's running within them, we can build a taxonomy. Once we build a taxonomy, we can classify and group them together, saying, hey, these 10 VMs belong to application A and these 15 VMs belong to application B. And because these five VMs are running Tomcat as a web server, they need to go to OpenStack; these are running Oracle workloads, they need to go to RHEV. So we can tag each of those workloads with its point of arrival.
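The tag-and-classify step just described could be sketched like this. Everything here is illustrative: the software-to-target rules, the VM records, and the group structure are assumptions for the sake of the example, not the actual CloudForms taxonomy model.

```python
# Hypothetical sketch of the tag-and-classify step: map software
# discovered on each VM to a target platform ("point of arrival"),
# grouped by the application the VM belongs to.
# The rule table and VM data are illustrative assumptions.

TARGET_RULES = {
    "tomcat": "openstack",   # web/app tiers go to the scale-out platform
    "oracle": "rhev",        # traditional workloads go to RHEV
}

def classify(vms):
    """Group VMs by application tag and assign each a target platform."""
    groups = {}
    for vm in vms:
        # first matching rule wins; default to the traditional platform
        target = next((TARGET_RULES[s] for s in vm["software"]
                       if s in TARGET_RULES), "rhev")
        groups.setdefault(vm["app"], []).append(
            {"name": vm["name"], "target": target})
    return groups

vms = [
    {"name": "web01", "app": "A", "software": ["tomcat"]},
    {"name": "db01", "app": "A", "software": ["oracle"]},
    {"name": "web02", "app": "B", "software": ["tomcat"]},
]
print(classify(vms))
```

The point of the grouping is that a migration group is selected per application, not per VM, which is what lets an application owner later say "mass migrate my VMs" in one action.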
And all of it is done through a UI where you can go and tag those VMs. It's like tagging yourself on Facebook: once you tag it, the tag follows that VM for the life of it. Once you do that, you go on to prepare your point of arrival environment. That will include either building your OpenStack target platform from scratch and standing it up, or building your Red Hat virtualization platform, and making sure it's made available to the management tooling, so the management tooling knows about its existence. You also have to build a shared storage environment as a holding area, where you can then migrate workloads from one virtualization platform to the OpenStack platform. Migration day: we don't want to migrate running VMs, because you never know what's running in memory. And when you're moving from one hypervisor or one virtualization platform to a completely different platform, I don't think there's any tooling that exists that can migrate memory. Within the same virtualization platform you can move resources from one host to another, but between platforms, the safest way is to shut those VMs down and then migrate. The way we do this is we engage the application owners. Now that we've classified and tagged those VMs by application, the application owner can come and say: I'm ready to migrate my application, I've created a maintenance window for my end users, mass migrate my VMs. All you have to do is go in, select the migration group, and hit submit. It will migrate all those VMs together. Now what happens in that migration? We have what is called a migration state machine that converts the workloads from the traditional format to the new formats. It could be OVA going to raw for OpenStack, or qcow2 for RHEV, or any of the formats.
Once you convert and migrate them, we inject all the required drivers based on the operating system that's running, and make sure the VM is prepped for the target platform. Once that's done, we migrate the VMs to the target platform and apply any post-configuration steps required for the application to start up. We also have an automated testing hook. So if you have testing tools in place to test the validity of the applications after migration, you can inject those into the hooks and test the application. And once the testing is done, we can either release the applications to the end users, or keep them down until the application owner comes in and signs off that the migration is good. Almost all of this can be monitored in real time. If you're migrating 100 VMs at once, every VM migration is tracked individually, and there's rollback in place to make sure that if the migration fails for some reason, you can always revert back to your source. Post-migration: some of the common steps post-migration are that you update your existing CMDB, recording that the workload has moved from source to target. You want to clean up artifacts such as your automated testing results. You want to decommission the workloads running in the source, so you can reclaim space and, at the same time, relinquish hardware and move it over to your new platform. This way, as you're migrating workloads, you can release hardware in your source environment, move it to the target, and reuse the same hardware. And you can divest your old assets. So this is what the whole cloud migration solution looks like.
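The per-VM tracking and rollback behavior described above could be modeled as a very small state machine. This is a minimal sketch under assumed step names; the real CloudForms state machine carries many more states (driver injection details, testing hooks, notifications) and actually undoes work on rollback rather than just recording it.

```python
# Minimal sketch of a per-VM migration state machine with rollback.
# Step names are assumptions for illustration only.

STEPS = ["shutdown", "convert", "inject_drivers", "boot_on_target", "validate"]

def migrate_vm(name, fail_at=None):
    """Run each step in order; on failure, roll back in reverse order."""
    done = []
    for step in STEPS:
        if step == fail_at:                       # simulated failure
            undone = list(reversed(done))         # undo completed steps, newest first
            return {"vm": name, "status": "rolled_back",
                    "failed_step": step, "undone": undone}
        done.append(step)
    return {"vm": name, "status": "migrated", "steps": done}

# Every VM in a migration group is tracked individually:
results = [migrate_vm("web01"), migrate_vm("db01", fail_at="convert")]
print(results)
```

The key design point mirrored here is that a failure on one VM reverts that VM to its source without blocking the status tracking of the others in the group.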
It includes the pre-migration steps we discussed, the steps that happen on migration day, and the steps that need to happen post-migration before the workload is released. So we spoke about the people and the process. Now, what actually makes it happen? What is the technology sitting behind it? It's powered by the Red Hat Cloud Suite, which hosts the target environments we work with, whether that's Red Hat Virtualization or OpenStack. We're also working on a roadmap to see how we can migrate workloads into OpenShift and containers. Almost all of the management, automation, and orchestration is done using Red Hat CloudForms. So, technology: we touched on this, and I'll talk about it in a little more detail. What does automated discovery give you? Automated discovery enables you to bring the Red Hat CloudForms tool set into your existing environment, completely brownfield, connect it to your infrastructure, and learn about what's running in your infrastructures. It allows you to understand the dependencies and helps you catalog and classify workloads for migration. It reduces the risk when VM or app characteristics are not documented, because you can learn them on the fly and use that as your source of truth for what's running in those instances. You want to build a taxonomy to identify and group common VMs; that reduces errors during mass migration. One example: you don't want to migrate your databases without making sure your app servers and web servers are going with them. Otherwise, you'll end up with app servers and web servers that have no connection to their database. It also leverages the data obtained through CloudForms, which acts as a single source of truth for what's running and drives your migration tooling.
Next, the technology executing the migration. We configure the state machine templates that come as part of the solution. These are repeatable workflows that have been refined based on experience at different customers. They have error handling and rollback paths built in, to ensure the migrations are managed uniformly. You want to make sure that everything is automated and orchestrated, so that human error is eliminated from the migration process. We define a migration window and ensure that VM ordering and dependencies are properly managed. You want to make sure the database server is shut down last but brought up first, so we make sure that ordering is set and kept in mind as part of the migration. We implement a push-button service catalog to perform the migration, so you don't have to go to 15 different places to migrate those VMs; everything is done with a single click of a button. We monitor the migrations in real time. We can report critical outcomes to stakeholders and validate the migration to notify the owners. The monitoring of the migration process gives the end user the ability to understand how long the migration is going to take. We have the estimate built in to tell whether it's going to take two hours or 15 minutes, and users can come back after that elapsed time to check the status. Here is an example architecture of the data flow. In this example, the source is vCenter: this customer wanted to migrate from VMware to OpenStack. So this shows the different steps the workflow goes through.
So, step one: you see 1A, where CloudForms is connected to the point-of-departure inventory, which is vCenter, and collects the inventory of what's running. Once it has that inventory, it enables you to move workloads to the shared migration NAS: you do a vMotion of your VMs into that NFS store, and then we do an NFS export. Once we export, we have the V2V migration infrastructure at 4A, which converts the workload and pushes it down to the NAS as a raw image. And once the image is available on that shared storage, CloudForms works with OpenStack to create a Cinder volume, drops the image into it, and then does a boot from volume, so the VM comes up. The networking configuration is collected from the source if you want to keep the IP address the same; in that case you want to make sure you shut down the primary and make sure that IP address is available when you move to the target. The CloudForms state machine engine keeps track of that and manages it for you. If you want to re-IP, you probably have to inject new workflows, just to make sure that when you re-IP you also handle what happens to your firewalls and to all of the other configuration of the different applications connected to it. For this customer, we used Puppet, because they were already using it as the configuration management for post-migration tooling. The next example I'll show you is very similar, but this customer wanted us to migrate from VMware to RHEV. This was a case where the customer wanted to move away from VMware to a low-cost virtualization platform.
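The disk-conversion step in that V2V flow typically boils down to a qemu-img invocation choosing the output format per target: raw for an OpenStack Cinder volume, qcow2 for a RHEV data domain. Here is a sketch that only builds the command line; the paths are illustrative, and a real run would use virt-v2v so the driver injection happens alongside the format conversion.

```python
# Sketch: build the qemu-img command that converts a VMware disk
# into the format the target platform expects. Paths are illustrative.

def convert_cmd(src_vmdk, dst, target):
    """raw for OpenStack Cinder volumes, qcow2 for RHEV data domains."""
    fmt = {"openstack": "raw", "rhev": "qcow2"}[target]
    return ["qemu-img", "convert", "-f", "vmdk", "-O", fmt, src_vmdk, dst]

print(convert_cmd("/nfs/export/web01.vmdk", "/nfs/images/web01.img",
                  "openstack"))
```

Keeping the command construction in one function is what lets a state machine run the same conversion step for hundreds of VMs, varying only the format by each VM's point-of-arrival tag.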
And since RHEV has almost all the characteristics you need from a traditional virtualization platform, it made sense for them to migrate. So this workflow shows how those workloads move. You'll see that the difference between RHEV and OpenStack is the last swim lane, where the target environment was replaced, and it has fewer steps: with RHEV you don't have Cinder, so you don't need to use Cinder. You just load the image into the export domain and then bring it into the data domain. This was done with RHEV 3.3. With RHEV 3.6, you can import directly into the data domain, so it's one less step. And RHEV 3.6 also has a VM migrate function: you can migrate one VM at a time through the RHEV Manager UI. So if you're doing one VM at a time, you can probably use the RHEV Manager, but if you're looking at mass migrations, the cloud migration solution from Red Hat is probably the way to go. Why Red Hat Consulting? It's a global services organization. We have the best skills in open source; we bring open source to the enterprise. We bring the different Red Hat technologies and best practices together, and we can deliver everything from the initial strategy to hands on the keyboard. We bring in the SMEs for open source. Almost all the products shown in the Red Hat Cloud Suite have their origins in open source: Red Hat CloudForms comes from the ManageIQ community, Red Hat Virtualization from oVirt, Red Hat OpenStack from OpenStack. So you'll see that almost all the tooling we use lives in open source. And Red Hat Consulting follows the three Ds: we don't design anything before we discover what is required, and we don't deploy before we design. You'll hardly see a Red Hat engagement doing these large-scale migrations, or any cloud deployment, that isn't doing a discover, a design, and a deploy.
Why do we need discovery? We've often seen that if you don't discover what is required, you don't identify the drivers, and you don't identify the use cases and challenges, it is very difficult to plan a migration. You also want to identify the potential technologies. The technology we have here in our reference deployment is not necessarily the same at every customer. We come in with an approach, we come in with a proposed solution, but we have seen that every customer is different. Customers have different requirements and different environments. So we have to tailor to those requirements, identify the gaps against the target state requirements, and tailor the solution based on that. Once we have that, we create an action plan to address those gaps. So we have developed what we call a Cloud Migration Smart Start. It is a complete iteration of our service delivery framework, and what it helps you do is go from zero to a minimum viable platform in four to eight weeks. How do you go about deploying it? Iteration one: the goal is to design and deploy a pilot. We run a discovery session, do the workload analysis, and define the target architecture. Then in the deploy phase, we deploy the target environment, the migration tooling that's required, and the state machines, workflows, and templates required to carry out the migration, and we validate the initial set of workloads. Iteration two: we continue the workload analysis, ensure that the first workload group is ready, and have a pilot go-live. And iteration three is to enhance the migration tooling based on the changes and nuances we see in customer environments, and tailor it to work there. So what does it look like?
Each sprint mentioned here is two weeks long. So, in this reference implementation, discovery and design is one sprint; phase two, deploying the reference architecture, is probably two sprints; and then there's phase three and beyond. You can calculate how many weeks it is based on each sprint. Have we done this before? Yes, we have, at multiple customers, so here are a couple of customer success stories. The first was an environment where the customer wanted an integrated IaaS and PaaS environment and wanted to standardize completely on Red Hat. They wanted to replace VMware with OpenStack, and we used this approach to migrate thousands of workloads. To date, I think we've migrated 6,000 workloads at this customer, and the customer is now migrating workloads on their own. The second was a telco that had a vCloud solution with limited customization that was slowing them down, and they wanted something that could understand what's running today and make them ready for the future. They used this solution to migrate VMs and provision new ones. And the third one wanted CloudForms and RHEV as the target platform; we worked with the customer to use the Red Hat Cloud Suite to enable a more agile infrastructure. Before I end, these are some of the training courses available from Red Hat Services that you can take a look at. Because it's OpenStack Summit, all I have here are training courses on OpenStack. If you go to redhat.com/training, you'll see a lot more courses on all of the products and tooling that Red Hat has to offer as part of the Red Hat Cloud Suite. And thank you.
If anybody has questions, you can take them now. So the question was: how do you preserve the networking segments from vCloud Director as part of the migration strategy? Yes, vCloud Director does manage some of the DSCP allocation, but at the end of the day it is working with the virtual switch behind the scenes, so we work at that level. If you have specific networks, you can pre-provision those networks or make the same networks available to your target platform. And you can make sure you have something like a built-in IPAM within the target infrastructure when you migrate with OpenStack. OpenStack comes with its own DHCP, so if you have IPs, you can move them. And instead of allocating IPs automatically, you can manually create ports and port groups and assign them on the fly, so you know there's no conflict. Yes. So, we have seen environments where the customers were relying on the underlying hypervisor functions, and those are the use cases where, as part of the classification, we like to understand what the taxonomy is and what the use cases are, and then propose the target architectures. For those workloads that rely on infrastructure availability, we would propose RHEV instead of OpenStack. The Red Hat implementation of OpenStack comes with instance HA, which is not the same as hypervisor HA, but it does come with HA. So some of those use cases can be handled, but our target platform for workloads that rely on hypervisor HA is RHEV, and RHEV gives you that. Yes, that's part of it, and that's why doing the discovery and tagging is required: it helps identify those workloads and classify them separately. The migration tools that you referenced, are those available to the customer, or does Red Hat do all the migration for them?
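The IP-preservation answer above, pre-creating ports instead of letting DHCP allocate, could be sketched like this. The `openstack port create` invocation is standard CLI syntax; the network name, port naming scheme, and conflict check are illustrative assumptions.

```python
# Sketch: preserve a source VM's IP on OpenStack by pre-creating a
# Neutron port with a fixed IP, after checking it isn't already taken.
# Network and port names are illustrative.

def port_create_cmd(net, ip, existing_ips):
    """Return the CLI call to pin this IP, or None on conflict."""
    if ip in existing_ips:
        return None  # conflict: this VM would need to be re-IPed instead
    return ["openstack", "port", "create", "--network", net,
            "--fixed-ip", f"ip-address={ip}", f"port-{ip}"]

print(port_create_cmd("migr-net", "10.0.0.5", {"10.0.0.9"}))
```

Returning `None` on conflict is the hook where a re-IP workflow (with its firewall and application-configuration follow-ups) would take over.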
No. If you engage Red Hat to come and do the initial scoping, we'll come in with the migration tooling, and once we are done, the tooling is yours to continue forward with. It's not something that's available in the open source community; it's a Red Hat Consulting-built solution. The underlying tooling is all open source, but the specific workflows come with the consulting solution. So you help the customer get started, and they can continue the migration? Yes. Our approach is that we help you start, build the minimum viable platform, and train you to be able to move forward with it. Two questions. You said you use Puppet for some of the post-migration services. Yes. Do you plan to transition that to Ansible, given that you bought them? Yes, this customer already had Puppet, and it was before the acquisition of Ansible. And the customer was using upstream Puppet; they were not using Puppet Enterprise or Satellite, so we had to use that. But yes, with the Ansible Tower integration coming into CloudForms soon, that will be the next iteration of the solution. Okay. And the tooling and the workflows you spoke of, is that available to partners? We can talk about how we can make it available to partners, because it's a consulting-led solution, not a product-based solution. So that's something we'll have to work out separately. All right, thanks. Okay, good. Okay, you have a question. You talked about the Puppet server: is that a Satellite server, or just Puppet? No, this customer is using their own Puppet server. But for customers who already have Satellite, that can be used as the Puppet server to push configuration. Can we do that with Satellite server? CloudForms already has hooks to integrate with Satellite natively, so that's already built in. Okay, thank you. Okay, good. Thank you.