Hi, thank you for joining us for this panel discussion. My name is Bill Giard. I'm a principal engineer at Intel, and I help lead our app and software efforts for cloud development. With me, I've got experts from across the industry on OpenStack deployments and what they're doing at their own companies. We'll start with open introductions, and then we'll get into some discussion around how we can accelerate cloud deployments, address app development needs, and handle the evolution from infrastructure deployment to accelerated app delivery, and how that landscape is converging. So, Ruchi?

Hi, I'm Ruchi Bhargav. I work for Comcast, where I run the infrastructure services team for the technology and product group. I've been there a month. Prior to that, I was at Intel, where I was responsible both for deploying the OpenStack-based cloud at Intel IT and for doing OpenStack development within the software group.

Hello, my name is Chris Morgan. I work for Bloomberg in New York City, where I run the cloud services team. We operate, I think, 11 OpenStack clusters. We're really in a transitional phase: early adopters have been using us for two or three years, but now regular development groups are being encouraged onto our platform, and we're dealing with those issues. So we have a spectrum from very cloud-native to less cloud-native.

Hi, everybody. My name is Edgar Magana. I'm a cloud operations architect and senior principal at Workday. I'm also on the board of directors of the OpenStack Foundation and a member of the user committee. At Workday, we've been deploying OpenStack for two years. We have many different applications running on it, we're exploring the containers part, and we're working on making the pipeline from "I wrote my piece of code" all the way to production as short as possible.

Awesome.
So from a cloud enablement perspective, what we're seeing in a lot of organizations is an evolution from easy resource consumption, which is getting quite good, to accelerated app software development: engaging the app dev teams and developer services. Maybe you can share your insights on your internal journey from infrastructure capabilities to infrastructure plus developer services, or app workloads. How are your developers consuming OpenStack?

Sure. So the way we've got a lot of people onto the platform is, as you said, infrastructure on demand. Everybody in Bloomberg's 4,000-developer community can just have some resources for free. Initially, the developer community jumped on it just to have development resources: you can go run a compiler or a tool, or practice your makefiles. But then some of them rapidly adopted more cloud-native things and started to do CI/CD. In fact, some of our software, even if it doesn't run on OpenStack, moves through our clusters. So we're encouraging people to come in by making it very easy; it's frictionless to just get some resources and play with them. That's now starting to feed into the way we do all of our software. So it's been organic growth based on that.

Free samples.

Awesome. So at Comcast, from what I've learned in the last month, the development community is fairly savvy. And those who aren't, who are moving from traditional environments, have access to a whole lot of support from the technology teams, who help them design their applications to be cloud-aware or cloud-native, though not so much cloud-optimized. If an application does require optimization, they work with vendors to get it further optimized. So they're doing a mix of both.
And that's one of the nice parts of the OpenStack architecture: you can run your traditional applications in a VM, and then grow your cloud-native or mobile application development on top of that.

Well, one problem we had to solve is that there were so many different application developer teams, and all of them were running their own methodology to get their applications into production. They were using different platforms and different kinds of configuration management: some were using Puppet, some Chef, and other things. Some people would actually wrap everything up in an RPM file, hand it to infrastructure, and say, you figure out how to put it in production. With OpenStack, we actually built a middleware layer for the application developers: we simplified the API calls from the application developers all the way down to the OpenStack APIs. Normally you would call maybe a Heat template (Heat is our orchestration system), and it will say: OK, I'm going to need a couple of VMs for the UI, a couple of VMs for the database, then a web server, and all those kinds of multi-tier connected things, et cetera, right? But all of that is still complicated for an application developer; the only thing they want to do is write code to solve a specific problem. So with that simplification of the API, what we do is: give me your RPM, push the button, and we take care of the rest. And "we take care of the rest" includes security policies; everything is predefined. Now, our specific case is a private cloud, so we take advantage of that. We don't need to expose APIs to a public space to create VMs on demand, et cetera. We have everything controlled in our private space.

Yeah, we see the security driver as a key critical decision for a lot of different app workloads.
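For context: the multi-tier Heat template Edgar describes corresponds to a HOT (Heat Orchestration Template) file. A minimal sketch of what such a template might look like follows; all names here (images, flavors, networks) are illustrative placeholders, not Workday's actual configuration.

```yaml
heat_template_version: 2016-04-08

description: >
  Minimal two-tier stack: one web VM and one database VM on a shared
  private network. All property values are placeholders.

resources:
  web_server:
    type: OS::Nova::Server
    properties:
      name: app-web
      image: centos-7           # placeholder image name
      flavor: m1.small
      networks:
        - network: app-private  # assumes a pre-existing tenant network

  db_server:
    type: OS::Nova::Server
    properties:
      name: app-db
      image: centos-7
      flavor: m1.medium
      networks:
        - network: app-private
```

A middleware layer like the one described can generate templates of this shape on the developer's behalf, so the developer only supplies the RPM.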
Can you talk about your observations around security in the cloud: in a private cloud, in a software-defined OpenStack environment? Good, bad?

For us, to be able to do OpenStack at all at Bloomberg, we first had to comply with the existing security model, which means there are a lot of networks that are all segregated. We had to go in and build small clusters in each one of those, and we were able to do that, at a great deal of expense and effort. Now we've actually built some credibility, so the next deployments we do will probably be converged. We have to work with our network team, but basically they are now on board with cloud, so we can actually do production work and development work on the same cloud that we intend to build. But we are in transition, as I say. Initially they had no trust in us, so I had to put one OpenStack cloud in the production network and one in the dev network. We did that, but it was a way to build credibility.

What about what you've seen at Comcast or Intel?

So Comcast has a really well-defined security zone structure. Depending on what kind of application you're designing, there are guidelines which the application developers already know. So it has been a well-oiled machine there.

Well, for us, we manage human resources information for our customers, and payroll, so nobody outside can know what happens in there. So one of the requirements is that we have port-level security policies that we enforce with SDN. Then we have to coordinate with the core firewall policies, and sometimes with the edge firewall policies, because some of the applications that we develop let our customers run customized code inside them. And these apps live in the same cloud. Yes, it's multi-tenant, but it's still the same cloud. So yes, we enforce security policies, and we coordinate with our core firewalls to simplify the case.
But I still don't think it's a problem that is 100% solved. It still requires a ticket for the core firewall. It still requires a ticket for the edge firewall. It still requires knowing how to write security policies at the VM level and at the virtual network level. So I think we still need to work on that part, to make it one simple ticket that gets done in an hour: the VM connects to whatever it needs to connect to, and doesn't connect to whatever it's not supposed to.

Yeah, within Intel, within our IT implementation, we're actually seeing a significant improvement in our security management practices by going software-defined networking and even cloud-native apps. The big challenges with security are really privileged access management, misconfiguration, and software vulnerabilities. As we uplift our code, make it more cloud-aware, and take advantage of cloud architectures, we're seeing a significant reduction in the software vulnerabilities that get deployed in that landscape. So we're kind of merging our cloud strategy and our security strategy; one is helping the other, which is nice. In fact, in Intel IT (I believe it's now in production) there's a capability they call Automate IT, which addresses what you were talking about at Workday, where it takes so many different steps to get an application deployed on the internet. That basically helped Intel IT application developers bridge the gap from fifteen days, or months, down to a day or so.

While still enforcing security policies consistently, right? Consistent deployment. So it's not only faster time to market; it's the security implications as well.

So what about your developer feedback? I think one of the common challenges we hear is: we've deployed infrastructure, we've deployed OpenStack, we've got the operations and engineering things working well, but our developers still have requests for new capabilities.
So, meeting those different demands: what are your app teams asking for?

So we have fantastic feedback on features and control. You can create your own VM, you can size it appropriately; if you get it wrong, you can tear it down and make a new one. You can use orchestration systems. What we need to do next is probably tackle higher-performance cases. In our particular case, our storage is all Ceph, so it's very durable and we can administer it, but it doesn't have the highest IOPS. So our real-time database stuff can run on it for development, but not for production. So we want to increase performance. We also have people come to us who just want enormous resources, like a machine with a terabyte of RAM that's bigger than our actual underlying hypervisors. So we have to do a lot of work on our platform to support our most performance-hungry users.

So you do a hybrid kind of model, where you have some bare metal systems and some hypervisors?

Absolutely. Someone comes to me and says, I need a three-gigahertz CPU and I need a huge amount of RAM. I'm like: then go buy hardware, with our blessing. But what we're doing, first of all, is targeting the low-hanging fruit of apps that should be on cloud: stop buying more machines, keep the power budget more stable. And actually, we're already demonstrating a reduced trajectory of power. I wouldn't say we're using less, but we're growing less.

Interesting.

Yeah, I just want to add: performance, of course, is what they ask about most of the time. Before we move a legacy application that was on bare metal into a virtualized environment, the very first thing we do is compare the performance degradation. We know that moving from bare metal, in the same kind of scenario, there will be between 12 and 15 percent performance degradation.
That is, the amount of time a specific job takes, with the same number of cores and the same amount of memory, on bare metal compared with a virtual machine. So there's a certain trade-off, and that's what I tell the application developers: if we work in virtual machines, we get all these benefits that you're not yet making use of. So you have to scale your application. You need to think of it like this: you no longer have one single server that you have to take care of; now you can have 10 different virtual machines doing the same thing, right? Cattle versus pets. So design your application to scale horizontally, and count on us giving you 10, 15, as many VMs as you want. If on top of that you're still seeing performance degradation you can't afford, we're moving into containers, and that's helping us there. But the real gain in utilization and performance will come when you run the containers directly on bare metal, right? So you might use Ironic to provision a bare metal server, and then use Magnum to run a Kubernetes cluster on top of that. That's where we're moving next; that's what the application developers are asking for.

We see the same thing. I think performance is key; it's driven both by the application developers and by our own optimization needs. The app developers want performance and they want reliability, and that's always a struggle to maintain using Ceph; our performance issues there, as everybody knows, are plenty. And with respect to running an infrastructure that is highly optimized, managing utilization is not an easy thing with OpenStack today. So driving towards a highly utilized, highly optimized, well-performing cloud is the biggest challenge from app developers.

Tightly coupled with the request for performance is telemetry, or visibility into how the system is working.
And so we've seen, at least in our internal Intel IT implementations, a lot of work to provide additional analytics and traditional telemetry frameworks around how things are performing, usage statistics, and so on. And we've got capabilities we're working on to help improve telemetry, such as Snap, our open telemetry framework. So what are you doing in that space to help expose and address some of the performance questions?

So we've actually looked at Snap, and we are very keen on playing with it and using it. We just spoke to the team a couple of weeks ago.

Right. We're currently using a mix of Graphite, Zabbix, those kinds of things. The guy who does that isn't here, but he's looking at Prometheus, which I think is an open-source project spun off from a Google thing. One thing we are finding is that OpenStack is great for building such things. We have apps that haven't moved off big iron, but their dashboards are being built in a much more modern way. So at least half the team is getting the benefit of what we're giving them with infrastructure on demand and software-defined applications.

For us, pretty much the same: connecting to logging systems, connecting to monitoring systems. All the applications send all their logs to one place (we have Elasticsearch), so the NOC knows what's going on at any moment. Because most of the applications we have right now are still stateless, if one of them actually dies or something happens, the customer should see no impact, so there will be no customer ticket. But our Network Operations Center should say: OK, this VM has degraded, or something is going on, or maybe the hypervisor has a problem. We deploy all of this across availability zones, so we guarantee that even if a server goes down, and sometimes even if the rack goes down, we will have a VM with the same application running in a different place.
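As an aside on the Graphite pipeline Chris mentions: Graphite's ingest daemon (Carbon) accepts a very simple plaintext wire format, one sample per line, `metric.path value unix_timestamp`, typically sent over TCP to port 2003. A minimal formatter, with a made-up metric name for illustration:

```python
def graphite_line(metric, value, timestamp):
    """Format one sample in Graphite's plaintext (Carbon) line protocol:
    '<metric.path> <value> <unix_timestamp>\\n'."""
    return f"{metric} {value} {int(timestamp)}\n"

# Example: a hypothetical per-hypervisor CPU utilization metric.
sample = graphite_line("cloud.az1.hv01.cpu_util", 0.93, 1461600000)
# To ship it for real, you would open a TCP socket to the Carbon
# listener (default port 2003) and write these bytes.
```

This simplicity is one reason agents and dashboards are easy to build against Graphite, even for apps that still live on big iron.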
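The availability-zone spreading Edgar describes can be sketched as a naive placement helper: when a VM is reported unhealthy, respawn its replacement in the least-loaded surviving zone. This is purely illustrative; the function and field names here are hypothetical, not a real OpenStack API.

```python
def pick_replacement_az(dead_az, all_azs, load):
    """Choose the least-loaded availability zone other than the one
    where the VM just died. `load` maps zone name -> current VM count."""
    candidates = [az for az in all_azs if az != dead_az]
    return min(candidates, key=lambda az: load.get(az, 0))

def respawn_plan(vms, all_azs, load):
    """Return (vm_name, target_az) pairs for every VM whose health
    check failed. Each `vm` is a dict with 'name', 'az', 'healthy'."""
    plan = []
    for vm in vms:
        if not vm["healthy"]:
            plan.append((vm["name"], pick_replacement_az(vm["az"], all_azs, load)))
    return plan
```

In a real deployment, the health signal would come from the monitoring stack and the respawn would go through Nova with an AZ hint; the hard part, as noted next, is reacting quickly enough.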
However, we need a very quick reaction time to spin up another VM. We still don't have the functionality, as in some other cloud management systems, to reposition a VM if it dies at a certain moment. It's like a meta-scheduler for OpenStack, which is still an idea that many projects are thinking about.

Yeah, I think the pace of delivery we see in app development is increasing quite significantly. When we first started doing OpenStack, we were targeting: get a VM, and reduce that from weeks to a day. Then we said, a day to an hour, and now it's a minute. And now the app teams are saying, deploy my code in a minute, not just get a VM in a minute. So the nice thing about the different sets of requests we get, for database on demand, platform on demand, is that everything is just moving faster. Are you seeing that pace of innovation increasing demand from your app teams as well? What are your thoughts on that, in the last couple of minutes?

So at Comcast, a lot of the apps are definitely either cloud-aware or cloud-native. They are very high-performing apps that most consumers in the US use, like the video apps, and they are truly hybrid: parts of them are on our private cloud, some on the public cloud. And they are all designed for failure. So we don't have the need to deploy in a minute, but ideally that would be great for some quick app-developer needs.

Yeah, especially with microservices, smaller apps coming up.

That's the key word, right? Microservices. I was talking about the trade-off: the trade-off we are working through with the application developers, and now they get it. They need to micro-segment those applications. So they require things to move faster and more often. But on the other hand, it's easier to deploy 50 megabytes of code than the one gigabyte it used to be in the past.

We just have a spectrum.
We honestly have some people still building pets: I compile my application, I deploy my application, and my application runs. Other people have fully automated pipelines for scaling up Spark and Hadoop clusters on top of OpenStack to run a large machine-learning job, and then tearing it all down, many times a week. So we support that entire range. My boss would prefer if everyone moved to CI/CD and cattle, not pets, but we can't move our entire development organization at a fixed pace. So luckily, we allow that spectrum, and it's very, very popular.

Yeah, I think the nice part, at least from what I'm observing, is the ability to support both the custom configurations (the pets) and the common standard web configurations, and to get all of that onto an open, common infrastructure that you can manage and deploy on. So just to wrap up what I'm hearing: we see a rapid pace of innovation from the dev teams, requests for capabilities on demand, a mix of traditional and cloud-native applications, and really a merge from what amounts to an infrastructure-focused cloud deployment into a line-of-business-plus-infrastructure alignment around how you speed response to the business. Is that accurate?

Fairly accurate.

Yeah, definitely.

OK, any other final thoughts in the last couple of seconds?

I just want to say that the one recommendation that is working very well for us is cross-functional teams. Don't design an infrastructure or a platform as a service without talking to your application developers; you need to get them into a common design process.

Yeah, I think the common best practice I've seen, where organizations have really accelerated their deployment, is that they've engaged it as both an app team and an infrastructure team effort, working closely arm in arm.
So I want to thank you all for sharing your journeys and your insights on how to get your app and line-of-business teams engaged with your infrastructure team. Thank you.

Thank you, Bill.

Thank you, everybody. Have a good day.