From around the globe, it's theCUBE with digital coverage of AnsibleFest 2020. Brought to you by Red Hat. Welcome to theCUBE's coverage of AnsibleFest 2020. We're not in person, we're virtual. I'm John Furrier, your host of theCUBE. Got a great power panel here of Red Hat engineers. We have Brad Thornton, Senior Principal Software Engineer for Ansible Networking, Adam Miller, Senior Principal Software Engineer for Security, and Jill Rouleau, Senior Software Engineer for Ansible Cloud. Thanks for joining me today. Appreciate it, thanks for coming on. Thanks. Good to be here. We're not in person this year, obviously because of COVID, a lot going on, but still a lot of great news coming out of AnsibleFest this year. You guys have launched a lot since last year. It's been awesome. Launched a new platform, the Automation Platform, grown the certified collections community from five supported platforms to over 50, launched the automation services catalog. Brad, let's start with you. Why are customers successful with Ansible and networking? Well, let's take a step back to kind of classic network engineering, right? Lots of CLI interaction with the terminal, real opportunity for human error, managing thousands of devices from the CLI becomes very difficult. I think one of the reasons why Ansible has done well in the networking space, and why a lot of network engineers find it very easy to use, is because you can still see and interact with the CLI, but what we have the ability to do is pull information from the same CLI that you were using manually, show that as structured data, and then let you take that structured data and push it back to the configuration. So what you get when you're using Ansible is a way to programmatically interface and do configuration management across your entire fleet. It brings consistency, stability, and speed to network configuration management.
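As a rough illustration of what Brad describes, a playbook can run the same show command an engineer would type by hand and then push configuration back across the fleet. This is a minimal sketch using the `ansible.netcommon` collection; the host group, platform, and interface names are illustrative assumptions, not from the interview:

```yaml
# Sketch only: device group and interface names are hypothetical.
- name: Programmatically drive the same CLI an engineer uses by hand
  hosts: network_devices
  gather_facts: false
  connection: ansible.netcommon.network_cli
  tasks:
    - name: Run a familiar show command and capture its output
      ansible.netcommon.cli_command:
        command: show running-config interface GigabitEthernet0/1
      register: intf_output

    - name: Push configuration back to the device consistently
      ansible.netcommon.cli_config:
        config: |
          interface GigabitEthernet0/1
           description Managed by Ansible
```

The same playbook runs unchanged against every host in inventory, which is where the consistency and speed across thousands of devices come from.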
You know, one of the big hottest areas is, you know, I always ask the folks in the cloud, what's next after cloud? And pretty much unanimously it's Edge and Edge is super important around automation, Brad. What's your thoughts on, as people start thinking about, okay, I need to have Edge devices, how does automation play into that? And because networking Edge, it's kind of hand in hand there. So what's your thoughts on that? Yeah. It really depends on what infrastructure you have at the Edge. You might be deploying servers at the Edge, you may be administering IoT devices and really how you're directing that traffic either into Edge compute or back to your data center. That's, I think one of the places Ansible is going to be really critical is administering the network devices along that path from the Edge, from IoT, back to the data center or to the cloud. Jill, when you talk about cloud, what's your thoughts on that? Because when you think about cloud and multi-cloud that's coming around the horizon, you're looking at kind of the operational model. We talked about this last year around having cloud ops on-premises and in the cloud. What should customers think about when they look at the engineering challenges and the development challenges around cloud? So cloud gets used for a lot of different things, right? But if we step back, cloud just means any sort of distributed applications, whether it's on-prem in your own data center, on the Edge, in a public hosted environment and automation is critical for making those things work. When you have these complex applications that are distributed across, whether it's a rack, a data center or globally, you need a tool that can help you make sense of all of that. You've got to, we can't manage things just with, oh, everything is on one box anymore. 
Cloud really just means that things have been exploded out and broken up into a bunch of different pieces, and there's now a lot more architectural complexity no matter where you're running. And so I think if you step back and look at it from that perspective, you can actually apply a lot of the same approaches and philosophies to these new challenges as they come up, without having to reinvent the wheel of how you think about these applications just because you're putting them in a new environment, like at the Edge or in a public cloud or on a new private on-prem solution. It's interesting, you know, I've been really loving the cloud native action lately, especially with COVID, we've seen a lot more modern apps come out of that. If I could follow up there, how do you guys look at tools like Terraform, and how does Ansible compare to that? Because you guys are very popular in cloud configuration. Look at cloud native, Jill, your thoughts. Yeah, so Terraform and tools like that, things like CloudFormation or Heat in the OpenStack world, they're really, really great at things like deploying your apps, setting up your stack, and getting them out there. And they're really focused on that problem space, which is a hard problem space that they do a fantastic job with. Where a tool like Ansible tends to come in is, what do you do on day two with that application? How do you run an update? How do you manage it in the long term? Something like 60% of the workloads, or cloud spend at least on AWS, is still just EC2 instances. What do you do with all of those EC2 instances once you've deployed them? Once they're in a stack, whatever tool you're managing it with, Ansible is a phenomenal way of getting in there and saying, okay, I have these instances, I know about them, but maybe I just need to connect out and run an update, or add a package, or reconfigure a service that's running on there.
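A minimal sketch of the day-two pattern Jill describes, assuming the EC2 instances are already in inventory; the group name and the service are illustrative assumptions:

```yaml
# Day-two operations against instances a stack tool already deployed.
- name: Update and reconfigure existing EC2 instances
  hosts: ec2_production   # hypothetical inventory group
  become: true
  tasks:
    - name: Apply pending package updates
      ansible.builtin.package:
        name: '*'
        state: latest

    - name: Ensure the web service picks up the new configuration
      ansible.builtin.service:
        name: nginx       # illustrative service name
        state: restarted
```

The deployment tool builds the stack once; a playbook like this is what keeps touching the instances for the rest of their life.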
And I think you can glue these things together and use Ansible with these other stack-deployment-based tools really, really effectively. Real quick, just a quick follow-up on that. What's the big pain point for developers right now when they're looking at these tools? Because, I mean, they see the path. What are some of the pain points that they're living right now that they're trying to overcome? I think one of the problems, kind of coincidentally, is we have so many tools. We're in kind of a tool explosion in the cloud space right now. You could piece together as many tools to manage your stack as you have components in your stack, and just making sense of what that landscape looks like right now, and figuring out what are the right tools for the job I'm trying to do that can be flexible and that are not gonna box me into having to spend half of my engineering time just managing my tools, and making sense of all of that, is a significant effort and job on its own. Yes, too many tools. You and I were joking years ago, in the big data surge, about the tool trend, what we called the tool shed. After a while you don't know what's in the back. What are you using every day? People get comfortable with the right tools, but the platform becomes a big part of that, thinking holistically as a system. And Adam, you know, this comes back to security. There's more tools in the security space than ever before. I want to talk about tool challenges. Security is the biggest tool shed. Everyone's got tools, they'll buy everything. But you've got to look at what a platform looks like, and developers just want to have the truth. And when you look at the configuration management piece of it, security is critical. What's your thoughts on the source of truth when it comes into play for these security appliances?
Okay, so the source of truth piece is kind of an interesting one, because this is gonna be very dependent on the organization: what type of brownfield environment they've developed, what type of things they rely on, and what types of data they store there. So we have the ability for various sources of truth to come in for your inventory source and the types of information you store with that. This could be tag information on a series of cloud instances or a series of cloud resources. This could be something you store in a network management tool or a CMDB. This could even be something that you put into a privileged access management system such as CyberArk or HashiCorp Vault, those types of things. And because of Ansible's flexibility, and because of the way that everything is put together in a pluggable nature, we have the capability to actually bring in all of these components from anywhere in a brownfield environment, in a pre-existing infrastructure, as well as new decisions that are being made for the enterprise as they move forward. And we can bring all that together and be that infrastructure glue, be that automation component that can tie all these disjoint, loosely coupled, or completely decoupled pieces together. And that's part of that security posture remediation, various levels of introspection into your environment, these types of things as we go forward. And that's kind of what we're focusing on doing with this. What kind of data is stored in the source of truth? I mean, what type of data? This could be credentials, that could be single-use credential access. This could be your inventory data for your systems, what target systems you're trying to reach. It could be various attributes of different systems, to be able to classify them and codify them in different ways. It kind of depends; it could be configuration data.
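As one sketch of pulling a credential from an external source of truth at run time rather than hard-coding it, a Vault lookup might look like the following. The lookup plugin name, secret path, and field are assumptions for illustration and vary by collection version:

```yaml
# Sketch: fetch a secret from HashiCorp Vault at run time.
- name: Source credentials from a privileged access management system
  hosts: managed_hosts
  gather_facts: false
  vars:
    # Secret path and field are hypothetical examples.
    db_password: "{{ lookup('community.hashi_vault.hashi_vault',
                            'secret=secret/data/app:password') }}"
  tasks:
    - name: Confirm the credential was retrieved without printing it
      ansible.builtin.debug:
        msg: "Retrieved a credential of length {{ db_password | length }}"
```

The playbook itself stays free of secrets; rotating the credential happens in Vault, not in source control.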
We have the ability, with some of the work that Brad and his team are doing, to actually take unstructured data, make it structured, pull it into whatever your chosen source of truth is, store it, and then utilize that to decompose it into different vendor-specific syntax representations, those types of things. So we have a lot of different capability there as well. Brad, you mentioned you have a talk coming up on parsing. Can you elaborate on that? And why should network operators care about that? Yeah, welcome to 2020. We're still parsing network configuration and operational state. You know, this is an interesting one. If you had asked me years ago, did I think that we would be investing development time into parsing network configurations with Ansible, I would have said, well, I certainly hope not, right? I hoped programmability of network devices and the vendors would really have their APIs in order. But I think what we're seeing is network engineers are still comfortable with the command line. They're still very familiar with the command line. And when it comes time to do operational state assessment and health assessment of your network, engineers are comfortable going to the command line and running show commands. So really what we're trying to do in the parsing space is not author a brand new parsing engine ourselves, but really leverage a lot of the open source tools that are already out there and bring them into Ansible, so network engineers can now harvest the critical information from operational state commands on their network devices. And then once they've got the structured data, things get really interesting, because now you can do entrance criteria checks prior to doing configuration changes, right?
So if you want to ensure a network device has a very particular operational state, all the BGP neighbors are up, for example, before pushing configuration changes, what we have the ability to do now is actually parse the command that you would have run from the command line, use that within a decision tree in your Ansible Playbook, and only move forward with the configuration changes if the box is healthy. And then once the configuration changes are made, at the end you run those same health checks to ensure that you're back in a steady state and production ready. So parsing is the mechanism. It's the data that you get from the parsing that's so critical. If I could ask you real quick, just while it's on my mind, people want to know about automation. It's the top-of-mind use case. What are some of these things around automation and configuration, whether it's parsing or other configuration management? What are the big challenges around automation? Because it's the holy grail, everyone wants it now. What are the gotchas? Where are the hotspots that need to be jumped on and managed carefully? Or the easiest low-hanging fruit? Yeah, I think, well, there's really two pieces to it, right? There's the technology, and then there's the culture. And we talk really about a culture of automation: bringing the team with you as you move into automation, ensuring that everybody has the tools and they're familiar with how automation is going to work and how their day job is going to change because of automation. So I think once the organization embraces automation and the culture is in place, on the technology side, low-hanging fruit, automation can be as simple as just using Ansible to push the commands that you would have previously pushed to the device. And then as your organization matures, and you mature along this kind of path of network automation, you're dealing with larger pieces, larger sections of the configuration.
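The entrance-criteria pattern Brad outlines can be sketched roughly like this. The parser engine and the parsed field name depend on the platform, the templates, and the collection version, so treat them as assumptions:

```yaml
# Sketch: gate a config change on parsed operational state.
- name: Only push configuration if the device is healthy
  hosts: routers
  gather_facts: false
  tasks:
    - name: Parse the show command an engineer would run by hand
      ansible.utils.cli_parse:
        command: show ip bgp summary
        parser:
          name: ansible.netcommon.ntc_templates
      register: bgp

    - name: Entrance criteria - established neighbors report a prefix count
      ansible.builtin.assert:
        that: item.state_pfxrcd is match('\d+')   # field name is an assumption
        fail_msg: Pre-change health check failed, aborting the change
      loop: "{{ bgp.parsed }}"

    - name: Proceed with the change only after the check passes
      cisco.ios.ios_config:
        lines:
          - snmp-server location lab-1
```

Running the same parse-and-assert tasks again after the change closes the loop Brad mentions: verify you are back in a steady state before calling it done.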
And I think over time, network engineers will become data managers, right? Because they become less concerned about the vendor-specific configuration, and they're really managing the data that makes up the configuration. And I think once you hit that part, you've won at automation, because you can move forward with Ansible resource modules, you're well positioned to do NETCONF or RESTCONF. Right? Once you kind of grok that it's the data that we need to be concerned about, in the configuration and the operational state management piece, you're going to go through a transformation on the networking side. So I mentioned... And one thing to note there, if I may. I feel like a piece of this too is you're able to actually bridge teams, because of the capability of Ansible, the breadth of technologies that we have integrations with, and our ability to actually bridge that gap between different technologies, different teams. Once you have that culture of automation, you can start to realize these DevOps and DevSecOps workflow styles that are top of everybody's mind these days. And that's something that I think is very powerful, and that I like to preach when I have the opportunity to talk to folks about what we can do, and the fact that we have so much capability and so many integrations across the entire industry. That's a great point. DevSecOps is totally hot. And when you have software and hardware, it becomes interesting. There's a variety of different equipment on the security automation side. What kind of security appliances can you guys automate? So as of today, we are able to do endpoint management systems, enterprise firewalls, security information and event management systems. We're able to do security orchestration, automation and remediation systems, privileged access management systems. We're doing some threat intelligence platforms, and we've recently added to the, I'm sorry, did I say intrusion detection?
We have intrusion detection and prevention, and we recently added endpoint security management. Huge value there; no wonder everyone wants it. Jill, I've got to ask you about the cloud, because the modules came up. What use cases do you see for the Ansible modules in the public cloud? Because you've got a lot of cloud native folks in public cloud, you've got enterprises lifting and shifting, there's a hybrid and multi-cloud horizon here. What are some of the use cases where you see those Ansible modules fitting well with public cloud? The modules that we have in public cloud can work across all of those things. In our public clouds, we have support for Amazon Web Services, Azure, GCP, and they all support your main services. You can spin up a Lambda, you can deploy ECS clusters, build AMIs, all of those things. And then once you get all of that up there, especially looking at AWS, which is where I spend the most time, you get all your EC2 instances up, you can now pull that back down into Ansible, build an inventory from that, and seamlessly use Ansible to manage those instances, whether they're running Linux or Windows or whatever distro you might have them running. We can go straight from having deployed all of those services and resources to managing them, and go between your instances, your traditional operating system management for those instances, and your cloud services. And if you've got multiple clouds, or if you still have on-prem, or if you need to, for some reason, add those remote cloud instances into some sort of on-prem hardware load balancer or security endpoint, we can go between all of those things and glue everything together fairly seamlessly. You can put all of that into Tower and have one kind of view of your cloud and your hardware and your on-prem, and be able to move things between them. Just put some color commentary on what that means for the customer. In terms of, is it pain reduction, time savings, how would you classify the value?
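The "pull it back down into Ansible, build an inventory from that" step Jill describes is typically done with a dynamic inventory plugin. A minimal sketch for the `amazon.aws.aws_ec2` plugin, where the region and tag names are illustrative:

```yaml
# aws_ec2.yml -- dynamic inventory sketch; discovers running instances.
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
filters:
  instance-state-name: running
keyed_groups:
  # Group hosts by a (hypothetical) Role tag, producing groups
  # such as role_web and role_db.
  - key: tags.Role
    prefix: role
```

Playbooks can then target a group like `role_web` exactly as they would a static inventory group, so deploying and managing the instances use the same tooling end to end.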
I mean, both. Instead of having to go between a number of different tools and say, oh, well, for my on-prem I have to use this, but as soon as I shift over to a cloud I have to use these tools, and, oh, I can't manage my Linux instances with this tool that only knows how to speak to the EC2 API, you can use one tool for all of these things. So like we were saying, bring all of your different teams together, give them one tool and one view for managing everything end to end. I think that's pretty killer. All right, now I get to the fun part. I want you guys to weigh in on Kubernetes. Adam, we'll start with you. Go in and tell us, why is Kubernetes more important now? What does it mean? A lot of hype continues to be out there. What's the real meat around Kubernetes? What's going on? I think the big thing is the modernization of application development and delivery. When you talk about Kubernetes and OpenShift and the capabilities we have there, and you talk about the architectures you can build, a lot of the tooling that you used to have to maintain to be able to deliver sophisticated, resilient architectures in your application stack is now baked into the actual platform. So the container platform itself takes care of that for you and removes that complexity from your operations team, from your development team. And then they can actually start to use these primitives and achieve what the Cloud Native Computing Foundation keeps calling cloud native applications, and the ability to develop and do this in a way that you're able to take yourself out of some of the components you used to have to babysit a lot. And that comes in also with the OpenShift Operator Framework that came originally out of CoreOS. And if you go to OperatorHub, you're able to see these full lifecycle management stacks of infrastructure components, where you no longer have to actually maintain a large portion of what you set out to do.
And the Operator SDK itself is how you actually develop these operators. Ansible is one of the automation capabilities. There are currently three supported types: there's Ansible, there's one where you just have full access to the Golang API, and then Helm charts. Ansible specifically, obviously, being where we focus. So we have our collection content for kubernetes.core, and then also the redhat.openshift certified collection coming out in, I think, a month or so. Don't hold me to the timeline; I'm sure I'm gonna get in trouble for that one. But we have those things coming out, and those are gonna be baked into the Operator SDK and fully supported for our customer base. And then we can actually start utilizing the Ansible expertise of your operations team to make container-native the infrastructure components that you wanna put into this new platform. And then Ansible itself is able to build that capability of automating the entire Kubernetes or OpenShift cluster in a way that allows you to go into a brownfield environment and automate your existing infrastructure along with your more container-native, futuristic, next-generation infrastructure. Jill, this brings up the question, why not just use native public cloud resources versus Kubernetes and Ansible? What should people know about where you use those resources? Well, it's kind of what Adam was saying with all of those brownfield deployments. And to the same point, how many workloads are still running just in EC2 instances or VMs on the cloud? There's still a lot of tech out there that is not ready to be made fully cloud native or containerized or broken up. And with OpenShift, it's one more layer that lets you put everything into a kind of single environment, instead of having to break things up and say, oh, well, this application has to go here and this application has to be in this environment. You can do that across a public cloud and use a little of this component and a little of that component.
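For the Ansible operator type Adam mentions, the Operator SDK maps custom resources to Ansible content through a watches file. A minimal sketch, where the API group, kind, and role name are hypothetical placeholders:

```yaml
# watches.yaml -- sketch: tie a custom resource to an Ansible role.
- version: v1alpha1
  group: cache.example.com   # illustrative API group
  kind: Memcached            # illustrative custom resource kind
  role: memcached            # Ansible role the operator runs to reconcile
```

Whenever a `Memcached` resource is created or changed in the cluster, the operator runs the named role to reconcile actual state with the desired state, which is how existing Ansible expertise carries over into operator development.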
But if you can bring everything together in OpenShift and manage it all with the same tools on the same platform, it simplifies the landscape of, I need to care about all of these things, and look at all of these different things, and keep track of these, and are my tools all gonna work together, and are my tools secure? Anytime you can simplify that part of your infrastructure, I think, is a big win. You know, I think... One thing, if I may. Jill spoke to this, I think, in the way that an architectural, you know, infrastructure person would, but I wanna really quick take the business analyst component of it: it's the hybrid component. If you're trying to address multiple footprints, both on-prem, off-prem, multiple public clouds, if you're running OpenShift across all of them, you have that single, consistent deployment and development footprint everywhere. So I don't disagree with anything they said. I just wanted to focus specifically on, you know, that piece, because it's something that I find personally unique, as that was a problem for me in a past life. And that kind of speaks to me. Well, speaking of past lives... I mean, you're just outing me as an infrastructure person. Thank you. Yeah, well, I mean, three of the past lives: OpenStack. You know, you look at Jill with OpenStack. We've been covering it on theCUBE, I think since OpenStack was rolling out back in the day. But you also have private cloud, right? There's a lot of private cloud out there. How do you talk about that? How do people understand the public cloud versus the private cloud aspect of Ansible? Yeah, I think there is still a lot of private cloud out there, and I don't think that's a bad thing. I mean, I've kind of moved over onto the public cloud side of things, but there are still a lot of use cases that a lot of different industries and companies have that don't make sense for putting into public cloud.
So you still have a lot of these on-prem OpenShift and on-prem OpenStack deployments that make a ton of sense and that are solving a bunch of problems for these folks. And I think they can all work together. You know, we have Ansible that can support both of those. If you're a telco, you're not gonna put your network function virtualization on, you know, us-east-1 in spot instances, right? When you call 911, you don't want that going to the public cloud. You want that to be on dedicated infrastructure that's reliable and well managed and engineered for that use case. So I think we're gonna see a lot of ongoing OpenStack and on-prem OpenShift, especially with Edge enabling those types of use cases, for a long time. And I think that's great. I totally agree with you. I think private cloud is not a bad thing at all. I think it's only going to accelerate, in my opinion. You look at VMware, they talked all day about telco cloud. And you mentioned Edge; when 5G comes out, you're going to basically have private clouds everywhere. That's my opinion. But anyway, speaking of VMware, could you talk about the Ansible VMware modules real quick? Yeah, so we have a new collection that we'll be debuting at AnsibleFest this year for the VMware REST API. The existing VMware modules that we have use the SOAP API for VMware, and they rely on an external Python library that VMware provides. But with vSphere 6.0, and then really especially in vSphere 6.5, VMware has stepped up with a REST API endpoint that we find is a lot more performant and offers a lot of options. So we built a new collection of VMware modules that will take advantage of that. It's brand new. It's lighter weight. It's much faster. We'll get better performance out of it. Reduced external requirements. You can install it and get started faster.
And especially with vSphere 7 continuing to build on this REST API, we're going to see more and more interfaces being exposed that we can take advantage of. So we plan to expand it as new interfaces are exposed in that API. It's compatible with all of the existing modules. You can go back and forth, use your existing playbooks and start introducing these. But I think especially on the performance side, and especially as we get these larger clouds and more cloud deployments, edge clouds where you have these private clouds in lots and lots of different places, the performance benefits of this new collection we're building are going to be really, really powerful for a lot of folks. Awesome. Brad, we didn't forget about you. We want to bring you back in. Network automation has moved towards the resource modules. Why should people care about them? Yeah, resource modules. Having been a network engineer for so long, I think they're some of the most exciting work that has gone into Ansible Network over the past year and a half. What the resource modules really do for you is they will reach out to network devices, and they will pull back that network-native, that vendor-native configuration, where the resource module actually does the parsing for you. So there's none of that manual parsing with the resource modules, and we return structured data back to the user that represents the configuration. Going back to your question about the source of truth, you can take that structured data, maybe for your interface config, your OSPF config, your access list config, and you can store that data in your source of truth. And then where you're moving forward is you really spend time as a network engineer managing the data that makes up the configuration. And you can share that data across different platforms. So if you were to look at a lot of the resource modules, the data model that they support is fairly consistent between vendors.
As an example, I can pull OSPF configuration from one vendor and, with very small changes, push that OSPF configuration to a different vendor's platform. So really what we've tried to do with the resource modules is normalize the data model across vendors. It'll never be a hundred percent, because there's functionality that exists in one platform that doesn't exist in another, and that's exposed through the configuration. But where we could, we have normalized the data model. So I think it's really introducing the concept of network configuration management through data management, and not through CLI commands anymore. Yeah, that's a great point. It just expands the network automation vision. One of the things that's interesting here in this panel is you're talking about cloud holistically: public, multi-cloud, private, hybrid, security, network automation as a platform, not just the tools. You're still going to have tools out there. And then the importance of automating the Edge. I mean, that's a network game, Brad. I mean, it's a data problem, right? We all know about networking, moving packets from here to there, but automating the data is critical. If you have bad data, if you have misinformation, sounds like our current politics, but bad information is bad automation. I mean, what's your thoughts? How do you share that concept with developers out there? What should they be thinking about in terms of data quality? Yeah, and I mean, I think that's the next thing we have to tackle as network engineers. It's not, do I have access to the data? You can get the data now from resource modules. You can get the data from NETCONF, from RESTCONF. You can get it from OpenConfig. You can get it from parsing. The question really is, how do you ensure the integrity and the quality of the data that is making up your configurations, and the consistency of the data that you're using to look at operational state?
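The round trip Brad describes, pulling vendor-native config back as normalized data and storing it in a source of truth, can be sketched like this. The module, host group, and file paths are illustrative assumptions:

```yaml
# Sketch: treat OSPF configuration as portable, storable data.
- name: Manage OSPF as data rather than CLI commands
  hosts: ios_routers
  gather_facts: false
  tasks:
    - name: Pull the vendor-native OSPF config back as structured data
      cisco.ios.ios_ospfv2:
        state: gathered
      register: ospf

    - name: Write the data model into a git-tracked source of truth
      ansible.builtin.copy:
        content: "{{ ospf.gathered | to_nice_yaml }}"
        dest: "./source_of_truth/ospf_{{ inventory_hostname }}.yaml"
      delegate_to: localhost
```

Because the gathered data model is largely vendor-neutral, the same stored YAML can, with small adjustments, feed the equivalent resource module on another platform.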
And I think this is where the source of truth really becomes important. If you look at Git as a viable source of truth, you've got all the tools and the mechanisms within Git to use that as your source of truth for network configuration. So network engineers are actually becoming developers, in the sense that they're using a GitOps workflow to manage configuration moving forward. It's just really exciting to see that transformation happen. Well, great panel. Thanks for everyone coming on. I appreciate it. Let's end by saying, if you guys could just quickly summarize AnsibleFest 2020 virtual, what should people walk away with? What should your customers walk away with this year? What's the key point? Jill, we'll start with you. Hopefully folks will walk away with the idea that the Ansible community includes so many different folks from all over, solving lots of different interesting problems, and that we can all come together and work together to solve those problems in a way that is much more effective than if we were all trying to solve them individually ourselves. By bringing those problems out into the open and working together, we get a lot done. Awesome. Brad? I'm gonna go with collections, collections, collections. We introduced them last year. This year they are real. Ansible 2.10, which just came out, is made up of collections. We've got certified collections on Automation Hub. We've got cloud collections, network collections. So they are here, they are the real thing, and I think it just gets better and deeper, with more content moving forward. All right, Adam. Oh, going last is difficult, especially following these two. They covered a lot of ground, and I don't really know that I have much to add, beyond the fact that when you think about Ansible, don't think about it in a single context. It is a complete automation solution. The capability that we have is very extensible.
It's very pluggable, which is a testament to the collections, and the solutions that we can come up with collectively, thanks to everybody in the community, are almost infinite. A few years ago, one of the core engineers did a keynote speech using Ansible to automate Philips Hue light bulbs. This is what we're capable of. We can automate the Fortune 500 data centers and telco networks, and then we can also automate random IoT devices around your house. We have a lot of capability here. What we can do with the platform is very, very unique and something special. And it's very much thanks to the community, the team, the open source development way. Yeah. Doing everything out in the open, being collaborative, is what makes it all happen. And Dev, Ops and Sec all happening together. Thanks for the insight. Appreciate the time. Thank you. Thank you. I'm John Furrier. You've been watching theCUBE, here for AnsibleFest 2020 virtual. Thanks for watching.