Hello, and welcome to another Tech and Talk. I'm Diane Mueller with Red Hat; I run the OpenShift Commons and do a lot of work in the cloud native space. I've been doing these Tech and Talks for about the past three months now, trying to bring in interesting folks from different corners of the cloud native and Kubernetes world. One of the people I've most admired for a number of years is Anne Currie, who's with us here today. She's been working with the gang at Container Solutions, doing a lot of research on cloud native use cases, and I'm going to get her to unravel the mystery of cloud native today. So, Anne, if you could introduce yourself and give us your presentation. We're going to have a little Q&A at the end of this too, so stay tuned to the end.

Hi, thanks, Diane. My name is Anne Currie, and I've met Diane at many conferences in the past; it's always been a pleasure to chat about what's going on in the industry. Like Diane, I've been in the industry for quite a long time, more than 20 years as an engineer and as a manager, which means you end up seeing lots and lots of new ideas, fads, and fashions come and go, and then often come again and go again. So I was extremely interested in the phenomenon of cloud native. I'd been involved with containers and orchestrators, investigating them in the startup world for a couple of years, and I was very interested in the way cloud native was pulling those concepts together: containers, orchestrators, and microservices. I wanted to know a little bit more about it. So for the past six months I've been working very closely with the folks at Container Solutions, who have been sponsoring and cooperating with me on a book about what's going on in the cloud native world. Because to me, cloud native is not a very clear concept.
There's loads going on in it, and it's really not clear what people mean by it. In fact, I think if you talk to ten different people, you'll get ten different meanings of cloud native. I really wanted to try and pin it down. I like things to be clear in my mind, at least to the point of saying, okay, there are five meanings, and different people mean this or that by it. So what I wanted to do was ask the question: what is cloud native? And not just ask vendors. As a startup founder I ended up spending a lot of time talking to vendors, and I used to work for a vendor, but I've also worked for customers and users of technology. In the end, there's often quite a difference between how vendors think a technology is going to be used and how customers actually use it and what they're interested in. I think cloud native is one of those areas where there's the possibility of quite significant disconnects between what we as vendors and researchers think cloud native is going to be used for and what it's actually used for. That was really what I wanted to go out and talk about. Where I started from was the CNCF, the Cloud Native Computing Foundation, definition of cloud native: container packaged, dynamically managed, and microservices oriented. And that's quite good. That describes the how, doesn't it? It describes a set of tools. Containers means Docker or an equivalent, but probably mostly Docker. Dynamic management really means orchestrators like Kubernetes or Swarm or Mesos. And microservices oriented points at microservice architectures: distributed systems of small, decoupled, single-purpose applications. And that is all cool; there's no doubt that that stuff is all very cool. I've used it myself and found it very good. It's a really, really interesting set of technologies together.
But being cool, to my mind, is not sufficient reason to use it, because these are also quite difficult technologies to get your head around. There's a lot for people to learn, a lot of training that has to happen, a lot of potential rearchitecting. It's expensive. And it cuts against what a lot of companies already have, which is a monolith. We didn't choose monoliths because we were fools. We went with monoliths because they were easier to operate, easier to manage, and easier to develop in the first place. So you don't necessarily want to throw them out the window just because some new technology looks cool. You need a more definite reason for using that technology, and what I was interested in is: what is that reason? Now, when I look at the technologies of containers, orchestrators, and microservices, and I'll talk a little more about this as we go on, I can see three potential reasons why companies might use them. You can definitely use them to go faster, by which I mean increase your development speed, your feature velocity: minimize the time between an idea coming into somebody's head and actually being in front of customers. That's certainly one use case for these technologies together. Another use case is scale, by which I mean your systems can potentially handle more users in more geographic regions without impacting your SLA. And the third is margin: using these technologies together can, theoretically, and I've seen it in practice, cut your hosting costs quite significantly. But the interesting thing about these three use cases is that I wouldn't approach them in the same way.
I would use cloud native, containers, orchestrators, and microservices to attack all three of these goals, but I wouldn't do it in the same way. So that's quite interesting: what is it for, if I wouldn't actually use the technologies in the same way to achieve each goal? That was the question in my mind when we started working on this book, and we really wanted to go out, interview a lot of people, and find out what they were actually using it for. All of those are perfectly good use cases, and in advance I didn't know which would be the one. In fact, I spent two years of my life basing my assumptions on margin being the thing people were interested in, and it turned out it wasn't. Which goes to show that you can't assume anything. Now, if you go to the CNCF, they describe cloud native systems as distributed systems capable of scaling to tens of thousands of self-healing, multi-tenant nodes. To me, that implies, or I strongly infer, that they believe the use case for cloud native is scale, because tens of thousands of self-healing, multi-tenant nodes is, as far as I'm concerned, all about scale. They haven't said it's cheaper. They haven't said it's faster to deploy things. They've said you can run really big systems. So they seem to be nailing their colors to the mast of scale being the reason people will use cloud native. But I really wanted to know whether that was actually correct, and so I went out and spoke to quite a lot of users. Now, at conferences, a lot of the users who are ahead on cloud native are hyperscale users, like Google or Facebook or Netflix, and when they talk about cloud native, they are generally talking about using it to achieve scale and to cut their hosting costs. That would tend to point you towards the CNCF definition, in my mind: it's all about scale.
But actually, there aren't that many of us who are huge hyperscale companies. That isn't the average company. They happen to be big companies, companies with a lot of money, and companies that speak a lot at conferences, but on average, most companies are not hyperscale. So I went and talked to quite a lot of startups and quite a lot of medium-sized enterprises. By medium-sized I still mean global enterprises with millions of users, but not as big as Netflix, for example. I went out and found people who were using a cloud native approach, often for years and years before it was actually called cloud native: they were using containers, they were using orchestrators, and they were using microservices. And when I asked them why, they often came up with the same statement: we wanted to move faster. They were very inspired by the work of Colonel John Boyd, going back to the Korean War, on OODA loops, which is all about going faster, deploying faster, making changes. His famous line is that speed of iteration beats quality of iteration: if you want to win, if you want to be successful, if you want goals, take a lot of shots at the goal. I thought that was very interesting. Everybody I spoke to who wasn't hyperscale, which, let's face it, is the vast majority of us, was using cloud native for feature velocity. They were using it to go faster. They then subsequently always used it to scale better and to reduce their hosting costs and improve their margin, but that wasn't the reason they had chosen it. The reason they chose it was to put features live more quickly.
And they made architectural choices that were slightly different from the choices you'd make for scale or margin, because those weren't their primary concern; their primary concern was to go faster. So my conclusion from talking to all of these folks, at companies of all sizes up to but not including hyperscale, was that they were really interested in cloud native for speed, not for scale or margin. Unfortunately!

Well, there we go.

Yes, and I learned that from my startup, which bet on margin. But I did conclude that the CNCF were mostly right about the technology. Everybody was using containers; well, the people who were on Windows were thinking about containers but hadn't quite moved there yet, but everyone had containers or a container-like function, plus orchestrators and microservices. But there's something that isn't mentioned by the CNCF, I think just because they're assuming it, and I hate assumptions, I like to make things explicit: vast automation. In the same way that you've got to prep a wall before you paint it, you've got to have really good automation in place before you can effectively use containers, orchestrators, and microservices to deliver on any of those goals. You won't go fast, you won't scale, and you won't improve your margin if you don't have very good underlying scripted or programmed automation. And of course, I guess this is implicit in the name, but again, I think you always need to be explicit: everybody who was doing this successfully was running on flexible infrastructure, by which I mean cloud. Now, you can build your own cloud, but nobody I spoke to was doing that. The hyperscale folk sometimes do and sometimes don't: Google have, Netflix haven't, Apple and Facebook have, but not everybody. Even in that hyperscale group, we've got one key player who was running on AWS.
Everybody I spoke to was running on a cloud. Often they had started building their own private cloud, but realized that it required an enormous amount of energy and effort from them and was not a business differentiator. For example, I spoke to the Financial Times, who are probably the most successful online newspaper, at least in the UK. They started with their own private cloud but moved to AWS, and in fact started using more and more AWS services, because they realized that was just far better for their business; there was no need for them to reinvent that particular wheel. So everybody was using cloud native for going faster. How were they doing that? What particular aspects of these technologies were they using? Well, for containers, it was really that containerization makes it easier to automate your DevOps handovers. Fundamentally, they all got good value out of containerizing applications in order to automate the DevOps handovers and reduce friction there. Orchestration: although orchestrators are a phenomenally powerful tool in production for minimizing your hosting costs and improving your scalability and resilience, and that was originally why they were built, teams were really using them in order to go faster, which means they were using them to get their applications into production faster. They were using things like rolling updates with Kubernetes. And that was quite interesting; it slightly feels like using a sledgehammer to crack a nut, and I was a bit disappointed. Eventually, everybody did use their orchestrators for what they're actually designed for, which is improved runtime automation, but in this context they were really using their orchestrators to get things deployed more easily. Microservices, again: massively powerful potential as an architecture for scalability and for cost cutting.
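To make the rolling-update idea mentioned above concrete, here is a small, purely illustrative Python sketch of the pattern orchestrators like Kubernetes implement: replace old replicas a few at a time, so that some replicas are always serving while the deploy is in flight. The function and parameter names here are my own, not from any real API.

```python
def rolling_update(replicas, new_version, max_unavailable=1):
    """Toy rolling update: swap each replica to `new_version` in batches,
    taking down at most `max_unavailable` replicas per step, and return a
    snapshot of the fleet after every batch."""
    history = []
    for start in range(0, len(replicas), max_unavailable):
        for i in range(start, min(start + max_unavailable, len(replicas))):
            replicas[i] = new_version   # old replica stops, new one starts
        history.append(list(replicas))  # record the fleet after this batch
    return history

# Four replicas moving from v1 to v2, one at a time: at every step at
# least three replicas remain up and serving traffic.
steps = rolling_update(["v1", "v1", "v1", "v1"], "v2")
```

In real Kubernetes, the analogous knobs are the `maxUnavailable` and `maxSurge` fields of a Deployment's rolling-update strategy; this sketch only shows the batching idea.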
But initially, people were using microservices in order to parallelize their development. Rather than have a large team all chipping away at one monolith, constantly getting in one another's way and having an awful lot of code to learn, they were breaking everything down into smaller packages that were easier to learn, where multiple teams could work simultaneously and then deploy independently, provided they had solid APIs. None of this is easy, but they were doing it, they were getting good results, and they were very happy because they could deploy much faster. The FT, for example, improved their deployment speed by a factor of 10,000. It took them five years to do it, but they did it, and they were very happy with speeding up their deployments by that factor. So there's a lot of effort, a lot of pain, a lot of investment they had to make to get to that point, but the payoff for them was absolutely huge; they really cared about that. Automation, obviously: you can't do these speedy DevOps handovers without automating them. And cloud: it's easy to forget that ten years ago you might have had to wait six months for a new machine to appear, and that slowed you down quite a lot. If you can just get out your credit card and turn on a new machine in the cloud, that again lets you go a lot faster, and all of these folks had moved to the cloud more or less for that reason. So that was my conclusion: there are lots and lots of different reasons why you might want to use cloud native, and it certainly can and does deliver feature velocity and scale and margin, but smaller, non-hyperscale businesses all seem to see it as delivering speed primarily, with the others as nice-to-haves. But I don't know, Diane, what's your take on that? Is that what you're seeing?

I agree.
It's interesting to see what the research came up with, because we do look at the Ubers, the Facebooks, the Googles, and Netflix as the gods of cloud native, but at Red Hat and on OpenShift we get to see lots of other people deploying cloud native stuff. What kept popping up in my mind while you were talking was the automation piece. I came at cloud through Platform as a Service, the thing formerly known as PaaS, now called container application platforms, or in our case OCP, the OpenShift Container Platform. Really, the reason I loved PaaS when it first came out was that it was so automated and gave you consistent environments to deploy into. And then as LXC and container technology morphed into what became Docker, what became Moby, what's now OCI and CRI-O and rkt and all this other container stuff, it's just getting more and more ubiquitous as the way we package up our microservices and build our applications. For me, that is what brought the joy back to coding. What I didn't like about coding is why I got out of it: I'd been a DBA and a sysadmin, then I went into Python and everything, and then I got out and did some R&D work in product management, because I didn't want to be an ops person. I was tired of having to manage my entire stack, tired of throwing stuff over the fence and then getting it thrown back. So it was containerization and this whole movement towards automation that reinspired and reinvigorated me to get back into coding and to help push forward this cloud native thing.
So cloud native, for me, is in part a rebranding of this movement towards automation and towards the parallelized development you talked about with microservices. I've been at a number of conferences with you and I've heard probably 20 different definitions of what cloud native is, but to me it's never been about the scale. It's always been about the automation, the breaking down of the application into more manageable packets of development, and the rise of things like Jenkins and other CI/CD automation tools that really changed the nature of development. So I think the mystery here is that it's not just about scale, and it's not just about being faster: it's the ease. I don't know how you add that into the definition, but it's the ease that this automation brings to the table. As someone who's managed large programming efforts and initiatives, the ability to compartmentalize the different pieces so that you can develop in parallel, push out patches, and do things that, with a monolithic application, used to mean shutting off your service to roll an update out, that's huge. As we hear more use cases, I would expect that piece to be teased out too; it's an aspect of faster.

Yeah. In some ways it makes me a little sad, because although I really like the speed aspect, and that's all very good, I was very keen on the green aspects of cloud native. I really thought that was what was going to take off, because it's cheaper: you can run your systems more efficiently, and they'll be much, much greener and much, much cheaper to operate. There wasn't much take-up on the greener checkbox, though there are some folks up in Iceland, I can't think of their name but it will come to me at some point, who are running a cloud where geothermal steam and hot water from the earth supply all the energy.
So there is a green cloud out there, and they're wonderful people doing that.

But the green aspect brings up another whole topic. I remember visiting somebody's server farm somewhere in Nevada, and it really got brought home to me how not green clouds are: the physicality, the huge air conditioners, the amount of energy it takes. So cloud is only green to me in the sense of saving money, the color of money in the US anyway; not in Canada, where the bills are multicolored. The green aspect of cloud is really just about the savings. And I don't think many corporations, unfortunately, are paying enough attention to the genuinely green side of it. "Cloud" is one of those metaphors we tagged onto the marketing of these huge server farms, maybe not intentionally, but it makes them seem ethereal and lightweight, as if there is no "there" to the cloud when you upload your 1,000 pictures to iCloud. The tangibleness of it, and the green aspect, really should be brought home a little more. I wish it had come up in your survey, but I would not have expected it to.

No, people don't bring it up. In fact, the interesting thing is, I've said at quite a lot of conferences that data centers alone, setting aside all other energy uses in the tech industry, use 2% of the world's electricity, which makes them about the same in energy terms as the entire aviation industry. Whenever I say it, everyone is shocked. Nobody in our industry has any idea, but that is the case.

That's all true. And I remember, I think it was the SUPERNAP facility in Nevada that we went into, and it was beautiful. It was like walking into a Terminator movie, blue lights flashing everywhere, and you're walking down a corridor with armed guards.
But it was really cool, sort of like what we were talking about earlier about getting an MRI: walking in felt like walking into an amazing technology place. On the outside, though, they had huge Mack trucks with air conditioners pulling up to the sides, so that when an air conditioner broke they could just ship in another one, because of the sheer heat they were producing. I'm not saying there's anything wrong with SUPERNAP or anything; it's just the myth we've bought into, that the clouds are green and ethereal. It's quite interesting. And to bring it back a little to the mystery of cloud native: the CNCF is doing a great job of promoting a lot of the projects and the tooling that we need to make these applications fly in the cloud and to host them. It's been a very good thing to have them bringing all of that together and trying to get the message out about cloud native. And I'm really looking forward to it: when is the book you're doing getting published?

Oh, well, it's all written, so I'm hoping it will go out soon. It's going to be available for free as a PDF on the Container Solutions website, and then we'll probably have a physical book as well, almost certainly for sale on Amazon. But free PDF.

Let's get the PDF and read it. It's much greener than printing it. I think that's a wonderful thing. And hopefully it'll be available when we meet again, hopefully at KubeCon in Austin, Texas, in December of this year, which is coming sooner than we think. That was another interesting thing: I think you were invited to be one of the reviewers for the CFP process for the Berlin event in February, and I just finished being a CFP, that's call for papers, reviewer for the KubeCon entries.
And it went from, let's say, 400 submissions, I think, in Berlin to just under a thousand. It was probably the most eye-opening thing I've done in a long time, getting to see all the different topics people were bringing up as cloud native. I'm sure Container Solutions submitted some; there were tons from Red Hatters and tons from all other walks of life. There were so many new projects and so many new tools out there. I'm wondering if there's anything on your radar that has come up recently, especially in your conversations, where people are pushing the frontiers of cloud native, or some other use case that you're curious about?

Well, there's an awful lot to talk about. Security, really, absolutely. My old business partner Liz now works for Aqua, who do container security, and I hear an awful lot of conversations now about how to make containers secure, but also how to make microservice architectures secure, because you are opening up a lot of surface area that needs to be protected. It's actually really quite difficult to secure a very large-scale microservice architecture. Everything in that area is interesting. Project Calico is interesting; the Istio project is interesting. The only very new project I heard about, well, I think they're in stealth mode, so I probably shouldn't talk about them, but there's another project basically building on the concepts we were chasing at Microscaling Systems, which is about using clever scheduling on top of the orchestrators to use resources more efficiently and cut hosting bills. I still really like that idea. I think the problem at the moment is that people are more interested in going fast than in cutting their hosting costs, because developers cost more than machines.
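As a rough sketch of that "clever scheduling" idea, a scaling loop can right-size each service so you pay for just enough replicas to handle current load. Everything below is my own illustration with made-up numbers and names; nothing comes from the stealth project mentioned above.

```python
import math

def target_replicas(load, capacity_per_replica, min_replicas=1):
    """Smallest replica count keeping per-replica load within capacity."""
    return max(min_replicas, math.ceil(load / capacity_per_replica))

# Hypothetical requests/sec per service; assume each replica comfortably
# handles about 100 req/s.
loads = {"checkout": 950, "search": 120, "emails": 5}
plan = {svc: target_replicas(load, capacity_per_replica=100)
        for svc, load in loads.items()}
# plan -> {"checkout": 10, "search": 2, "emails": 1}
```

A real microscaling system would re-run a loop like this continuously against live metrics and hand the resulting counts to the orchestrator, which is where the hosting-cost savings come from.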
I think they hit a certain point. And I think that's always an interesting arc: people use the mammoths, AWS, Microsoft, Google, whatever cloud they're on, or even the smaller ones, and then they hit a certain point where the larger bills start coming in, and what I've been seeing is a bit of a trend towards a more hybrid approach, bringing some things back onto the private cloud. We've been exploring that a lot with OpenShift and OpenStack on-prem, with bursting to cloud, for quite a few customers. So that's an interesting aspect of it. The other thing, aside from security, that's really been catching my attention is the OpenTracing, distributed tracing stuff. There are things from our friends at Weave, we've worked with Weave Scope, but also the Jaeger project, which as of this recording I think is now officially an incubation project in the CNCF, and which builds on some of the Zipkin work. In terms of understanding the surface area of your applications, some of the visualization being surfaced by tooling like Jaeger and others is really very interesting to me, so I've been watching that space and seeing where it's going. Also, in the CFP process, there were probably 20 projects in there that I had no idea existed until I read their submissions. Some of them are under the Kubernetes project, some of them are CNCF, but a whole lot of them are just independent projects, like the one Liz Rice is working on, kube-bench, and 20 other security benchmarking things. There's just so much going on; there's something new every day on GitHub out there.

An area that interests me, one I know I used to rely on absolutely: I used to do massive amounts of distributed systems work in the 90s, when things were quite different.
But we used to rely really, really heavily on API tracing and API replay, API capture: you capture all of the messages that go across an API and replay them. Well, they weren't microservices back then, they were basically giant modules all talking to one another in a distributed fashion, but that was just so incredibly useful. We couldn't have kept those systems going without good API playback. And I don't hear much about it now. There's SpectoLabs in the UK, but other than that, I don't hear a lot about it, I don't think.

Maybe I'm going to have to go grab somebody like Alexis at Weaveworks, because I thought I saw some interesting playback stuff in a demo recently. Maybe we'll have to get Alexis and some of the folks at Weave to show us what they're doing, because I know there's a little video floating around the internet talking about that replay aspect, which would be really cool. But it's all the tooling around managing that surface area, understanding all the relationships between all the microservices and all of the containers out there. Going back to why I really loved Platform as a Service when it first came out: it made deployment very auto-magic, very simple. But now we're adding in all of this other complexity too, so my fear is that we need to keep it simple. My fear is that in order to keep that joy of coding and programming, the tooling has to come along as well as the deployment processes. And it is there to a certain extent; there are lots of beautiful registries with beautiful WordPress containers, which I hate, but which are amazing and frozen in time. But I still think we're at the beginning of this revolution in a lot of ways, and it's just going to keep evolving.
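The API capture-and-replay idea discussed above can be sketched in a few lines of Python. This is a hypothetical toy, not any vendor's tool: wrap a handler so every request/response pair crossing the API boundary is recorded, then replay the captured requests against a new implementation and flag divergences.

```python
def make_recorder(handler, log):
    """Wrap a handler so each request/response pair is captured in `log`."""
    def recording_handler(request):
        response = handler(request)
        log.append((request, response))  # capture the message pair
        return response
    return recording_handler

def replay(log, candidate):
    """Re-send captured requests to `candidate`; return the mismatches."""
    return [(request, expected, candidate(request))
            for request, expected in log
            if candidate(request) != expected]

# Capture traffic against v1 of a service, then replay it against a
# subtly different v2. v1 doubles its input; v2 only doubles positives.
log = []
v1 = make_recorder(lambda x: x * 2, log)
for request in [1, -3, 5]:
    v1(request)
diffs = replay(log, lambda x: x * 2 if x > 0 else 0)
# diffs -> [(-3, -6, 0)]: the one captured request where v2 disagrees
```

Real capture/replay tooling works at the network boundary rather than wrapping functions, but the contract-checking idea is the same.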
So, the one thing I always ask people who come on Tech and Talk, and that's my coffee maker telling me it's turning itself off in the background, is: who would you recommend we talk to next? Is there someone you're dying to hear from on some other aspect of the cloud native conversation that you'd like to put forward and have me bring on board?

Yeah, there is somebody who's very interesting, actually. She is the person I interviewed at the FT, and I was really, really impressed with her thinking on real-world cloud native infrastructure and architecture. I've also seen her speak, I think it was at QCon last year, and she gave a very good talk on monitoring and the dangers of having too much data and too much information coming out of your system. That's Sarah Wells; she's one of the chief architects at the FT.

Absolutely. Cool, that's a great one. There was someone recommended from IT at AM Max recently too, so I'm going to reach out to them and try to get them on as well. So once again, Anne Currie, thank you very much for coming on today. As soon as that PDF is ready and the book is there, let me know, and we will post the link to it with this blog post; I'll go back in and edit it in at a future time. I'm looking forward to seeing you again, hopefully at KubeCon or somewhere before then at one of the upcoming conferences, or again virtually, if you have another wonderful project to share with us. So thanks again, Anne.

Thank you very much for having me. It's been a delightful experience.