So welcome everyone to the Emerging Trends in Cloud Engineering panel, which will focus on platform as a product, platform APIs, developer self-service and more. We have a fantastic lineup of panelists, and I'm genuinely privileged to be amongst these folks, including Alex Jones, Brian Gracely, Katie Gamanji and Viktor Farcic. I'll let them introduce themselves in just a second. I'm your host, Daniel Bryant. I'm currently director of DevRel at Ambassador Labs, where I focus on helping engineers build platforms and deliver applications on Kubernetes. So could we go around the room, please, for a quick introduction of who you are? Katie, shall I start with you? Sure. Hello everyone. My name is Katie Gamanji, and currently I am the ecosystem advocate at CNCF, or the Cloud Native Computing Foundation. I have been in this role for almost half a year, so I'm having my half anniversary there. And my responsibility within this role is pretty much to lead the end user community. These are the vendor-neutral organizations that use cloud native tooling, but at the same time to bridge the gap between these practitioners and the projects within the ecosystem. And yeah, I'm going to finish off with this one. Perfect. Brian, you want to go next? Sure. Hi everybody. My name is Brian Gracely. I'm senior director of product strategy at Red Hat, primarily for the OpenShift platform, so our Kubernetes platform. And on the side I host the Cloudcast podcast. Brilliant. Viktor, you're next in my box lineup. Okay. So I work at Shipa. I'm a developer advocate there. I do not yet know what I'm going to do there because I'm about to join tomorrow. So, something at Shipa. I have a YouTube channel and, I don't know, I published a few books. That's all. Super. Alex, last but not least. Hi, I'm Alex. I'm a principal engineer at Civo. Civo is a cloud computing provider built on K3s.
I build hyperconverged infrastructure all the way up to operators, all the way up to tenant application architecture. I also work a little bit with SIG App Delivery and observability. Excellent. I appreciate all that. That sounds perfect. So we'll start off by asking the question of what does a modern platform, or as I often say, PaaS, platform as a service, look like? We've had the rise of infrastructure as code, containers, Kubernetes, and many of us are harking back to the days of Heroku, of Cloud Foundry, where it was git push master and we were good to go, right? What do you all think a modern PaaS encompasses? I think we do not have it yet. We will soon. I believe that that's the next step in the evolution of Kubernetes. I mean, we know that the modern platform is going to be Kubernetes based. That's certain today. And I think that we are just starting to build platforms like that. And when I say platforms like that, I mean Heroku-like, right? Because I think that there is some kind of misconception that developers will adopt Kubernetes. I think that will never happen. It's too complex for the vast majority of people. And 2021 will probably be a year where we will see the emergence of platforms that layer on top of Kubernetes. Actually, I am of the same opinion here. I think there is always an evolution cycle we go around when it comes to deploying applications. We had this cycle when it came to VMs: pretty much, how can we make the best use of our compute at the time? We had configuration as code that was heavily dominated by tools such as Ansible or Salt at the time. And I think we are kind of undergoing the same patterns, but with different technology. That's where we have Kubernetes coming around. And now, again, making the best usage of our space, but now we're using containers, which are pretty much Linux kernel primitives, with cgroups and namespaces. We have different ways to describe our configuration, from plain YAML.
We transitioned to templating with Helm charts. Now we have CRDs to further customize the experience of our developers. And now I think we are at a moment where we have the ecosystem there, we have the technology, but now we're trying to make the best of it to really enrich the developer experience: how can we deploy more easily to have that competitive edge as a technology or business? It's interesting, just to follow on from Katie's statement, that Kelsey Hightower came out with a really great sort of sentence around that: most people want a PaaS, but the requirement is it has to be built by them, right? So taking all these commodity tools and wrapping them into your own kind of flavor to give you your own PaaS. And I thought that was kind of a good take on it, right? I would build on top of that. I think that most companies do that. And after a couple of years, they realize that they failed. And then they're looking for a real PaaS. Yeah, we always kind of come back and forth between this. You know, I think James Governor the other day was joking on Twitter. He said everybody that's building platforms is trying to replicate Heroku. I think it's a little different. I think the spectrum is a little bigger. I think on one end, there is a set of developers or a set of use cases that are looking for really highly opinionated, even maybe more so than Heroku, like what Netlify does or, you know, Cloud Run or Lambda or something. But then there's the other part of it, which is, do we want to sort of proliferate opinionated platforms? And so we see things like, you know, with Kubernetes, we see operators letting us run databases and AI and ML workloads. And so to me that's more the question: are you looking for highly opinionated sort of one-off platforms, and those will fit certain use cases, or do you want something where, you know, Kubernetes gives us a foundation to do a lot of things?
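As an aside, the evolution Katie describes, from plain YAML manifests through Helm-style templating to CRD-backed abstractions, can be sketched very roughly like this. The `AppConfig` kind and `example.com` API group are hypothetical, and the string templating here is only a stand-in for Helm's Go templates:

```python
# Three stages of configuration, expressed as Python data for illustration.

# 1) A plain manifest: every field is spelled out by hand.
plain_manifest = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {"replicas": 2},
}

# 2) Helm-style templating: values are substituted into a shared template.
template = """
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {name}
spec:
  replicas: {replicas}
"""

def render(values: dict) -> str:
    """Substitute values into the template, much as a Helm chart would."""
    return template.format(**values)

# 3) A custom resource: a CRD lets the platform team expose a smaller,
# domain-specific schema instead of raw Deployment fields.
# (Group, version and kind below are made up for this sketch.)
custom_resource = {
    "apiVersion": "example.com/v1alpha1",
    "kind": "AppConfig",
    "metadata": {"name": "web"},
    "spec": {"image": "nginx:1.25", "replicas": 2},
}
```

The point of the progression is that each step moves more repetition out of the developer's hands: first into shared templates, then into an API the platform itself validates and reconciles.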
Is that a better way to solve, you know, the breadth of problems that most companies have? I like that, Brian. And that actually leads nicely on. Thank you, everyone. That leads nicely on to the next question: we frequently now talk about platform as a product. It should be designed as a product, managed as a product, and released and controlled as a product, which I totally buy into in my day job. How does the implementation of this differ from the traditional approach, as many of you sort of mentioned? I think from my own experience, there's a bias towards reuse and proliferation these days. Platform as a product is essentially a way of mandating and ensuring your survival within the ecosystem, and that you are wrapping up all of the domain-specific knowledge and the components that you need to be successful. And, you know, I think of some of these aforementioned products that are really popular in the ecosystem, with Heroku really being kind of the nostalgic gold standard. But if you were to rebuild Heroku in a Kubernetes-based environment, it would look very, very different. And I think that it's really imperative that we understand that there is no panacea currently. And really, when we think about platform as a product currently, it's something that is extensible, right? So that there can be community reuse as well as building on top of it. And we've seen many of those already today. The place where I think, or one of the main reasons why, many platforms failed in the past is, I believe, that they all tended to go to extremes: either be extremely opinionated and satisfy, let's say, developers and completely leave operators and sysadmins and SREs, you know, the people who are making sure that systems are running properly, unsatisfied, or, like Kubernetes today, satisfy one group and not satisfy the other, right?
So it usually goes to an extreme: extremely opinionated doesn't work for sysadmins; works for sysadmins, developers cannot use it. And that's why I believe that we are now in a good spot: we have Kubernetes, it does satisfy sysadmins, it does what it needs to do. And what we are missing is that layer on top. That's why I believe Heroku in a way failed, or Docker Swarm failed as well. They started from the other direction. Let's start with making it easy for people, instead of actually making it have all the levers that it needs to have before we build that structured layer that will satisfy the other group. And nobody did that in the past, really, or at least very few. Yeah, I think it's interesting, as we get into thinking about it as a product, that we've sort of moved past the idea of, is the decision sort of buy versus build something yourself, right, taking a lot of the parts together. And now it's more a matter of, if I have a platform, whether you consume it as a managed service or you consume it as software, does the way that that thing is delivered as a platform meet your needs? So as an example, there was a thread on, I think, Hacker News yesterday or the day before, that was, you know, Kubernetes is really complicated because it comes out so frequently, how do I keep up with updates? And one side of the argument was, well, who cares, just spin up another cluster and move your stateless applications over there. And when you put that sort of response in front of anybody who does anything stateful, you know, it's, you know, kind of a face palm. And I think we need to realize, like, delivering the platform is really important: delivering updates and making it simple for people to consume.
But there's even parts of that that are hard for people, right? Like the three-or-four-month release cycle of Kubernetes is really hard for an organization that has an application they don't touch nearly as quickly, right? There is an aspect of that that's, you know, lift and shift or modernize. So again, I think there's sort of these shades of gray in between stuff, that if you make it all one kind of definition, it makes it hard; like Viktor mentioned, you know, it applies to one side, but not the other necessarily. I think on that point, I'll skip a few of the questions I'd initially talked about and look at the Open Application Model, the OAM spec. One thing I've just heard you all talk about, and one thing that really stood out to me in the OAM spec, is this clear definition of personas. So it's gone through a few iterations, but now it's focused on component owners, which for me are kind of the developers; application operators, kind of SRE-like, site reliability engineering-like; and infrastructure operators, which for me are the kind of classic platform folks, the sysadmin folks. You know, I do like the three distinct personas rather than the classic, perhaps, Dev and Ops thing. Do you all see that in your day jobs? Do you see perhaps the three personas there? Do you like that part of the OAM spec? I'd love to get your thoughts on that. So I can speak a little from a financial perspective, where there's a highly regulated set of protocols which you have to follow, and it very much is reflected in there being a clear distinction between operational staff who can touch clusters, who can deal with infrastructure and debug; then the SRE team, who may well be observing telemetry and making iterations on application code; and then you've got their counterparts in the development cycle team. And if you were to draw sort of a Venn diagram, these all touch slightly, you know, between the three groups.
And so it is nice seeing a recognition there. And the thing that I really like about the spec is that it looks to sort of unite the users and the platform builders, right, within a single specification. So it's very much reflective of what we're seeing in the industry. I can actually bring an example from my past roles as well. I think this kind of segregation between different roles and different personas happened organically for us, which was quite lucky. It initially started as one team which provisioned infrastructure as a service, and if we're looking at Team Topologies, that's going to be a type three. And at that point, we would just pretty much build for the demand we had from our developers straight away. However, there was not too much upskilling going on; there was not too much support that we could provide continuously. And that kind of transitioned: our team divided, where we had our platform team, which just focused on the infrastructure provisioning, and then we had our cloud management team, which would focus on that ops kind of culture. And towards the end, we moved towards the SRE model, where we have an SRE team included as well. And then we have this kind of collaboration where we don't just provision the platform, we upskill, but at the same time we can push back on future development, and based on the SLOs, we can bring things that would really empower our teams further. Now, what I'm saying is that with all of these three teams that we have, it was an organic kind of development of all of these personas that we see enforced by the Open Application Model as well. And at the time, we had to provision all of these components, we had to create our own methodologies and pretty much build the tooling, or bring the tooling in-house, which would allow us to deliver, or to deploy, our applications using these models. Now with this spec, it's already there.
So it's kind of nice to see this further confirmed within the community. But at the same time, it brings a benchmark and a standard, which means that now we have the fundamentals, we just need to build on top of it. And this truly enables more extensibility and pretty much is a good model that every team can build on top of. Yeah. I think what Katie highlights, in reality, is that middle tier, that middle identity, is the one that's going to end up being the most fluid. At any given time, they may be more infrastructure oriented, especially if, let's say, you're moving an application from on-premises into the cloud, for example, or even cloud to cloud. And then at other times, they're going to have to be sort of more the application-centered SRE. But I think the thing I really love about it is, it is very distinct to say it's not sort of one way or the other; there's some flexibility in there. And, like you said, the fact that we're sort of writing it down, we're sort of making it a structured model, allows you to think within a certain framework, and it's not just completely nebulous. What I believe is kind of missing from that description is describing who the users of each of those groups are, which is often overlooked. They tend to be in silos, right? I'm in charge of infrastructure, I'm in charge of this or that. Well, I believe that the real move forward is to acknowledge that for SREs, application operators, infrastructure operators, their users are developers. They're supposed to build services that can be consumed by developers. They're not supposed to actually really create infrastructure. They're not supposed to deploy any applications. They're supposed to create tools, patterns, means for developers to be self-sufficient. I don't see it as much right now in the industry, but I believe that's the direction we need to go. Very interesting, very interesting.
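To make the three personas concrete, here is a minimal sketch of an OAM-style application, assuming the `core.oam.dev/v1beta1` Application shape popularized by KubeVela's OAM implementation; the component, image and trait values are illustrative only:

```python
# An OAM Application: the component body belongs to the developer, traits
# are typically attached by application operators, and infrastructure
# operators define which component/trait *types* the platform offers.
application = {
    "apiVersion": "core.oam.dev/v1beta1",
    "kind": "Application",
    "metadata": {"name": "example-app"},
    "spec": {
        "components": [
            {
                "name": "frontend",
                "type": "webservice",  # type defined by the platform team
                "properties": {"image": "example/frontend:1.0"},
                "traits": [
                    {"type": "scaler", "properties": {"replicas": 3}},
                ],
            }
        ]
    },
}

def personas_touching(app: dict) -> dict:
    """Map each part of the spec to the persona that typically owns it."""
    comp = app["spec"]["components"][0]
    return {
        "developer": comp["properties"],            # component owner
        "application_operator": comp["traits"],     # operational traits
        "infrastructure_operator": [comp["type"]],  # platform capabilities
    }
```

The useful property, echoing the panel, is that all three personas edit the same document without stepping on each other's fields.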
So I jumped ahead and I probably should have introduced the OAM spec, but dialing back a second, and I think you've all touched on these ideas as well: how important do you believe open standards are to creating a platform? I can definitely answer this one, mainly because I've been focused on this particular topic for a while now, and I have been trying to really pinpoint what was the impact of the emergence of interfaces within the Kubernetes ecosystem. But now I'm focused on open standards when it comes not just to application delivery, but to the observability side, because we have projects such as OpenMetrics and OpenTracing, which really try to bring, again, fundamentals to how exactly we would collect metrics or how we deploy our application. And there are three main things that I usually identify when it comes to open standards, or pretty much the fundamentals: they impact the vendors, the end users, and of course the community. For the vendors, it's going to be pretty much innovation, because as a vendor, you don't have to concern yourself with how you can merge your components with the platform. The standards are already going to be there. So as a vendor, you can focus on how to deliver value to the customers with minimal latency. You really focus on your customers and what you can bring to them straightaway. For the end users, on the other side, when you're talking about open standards, it means extensibility. Now, what it actually means here is that as an end user, you can choose multiple tools that are, well, kind of trying to solve the same problem, and you can benchmark between them. And it's easier now, because if you have that one platform with the standards integrated, or the interfaces available, you can just switch between them.
And of course, when you're looking into the entire ecosystem, that pretty much translates into interoperability, which means that we have an ecosystem which is quite colorful in terms of the technologies out there. But more importantly, again, we have multiple solutions for the same problem. And this is really kind of the driving force of pretty much the extensibility, the kind of organic growth, within the cloud native ecosystem we have now. Just to add to Katie's points there, around OpenTracing and OpenCensus merging into OpenTelemetry: for end users like myself, it means that we can use the OTel collector and have a guarantee that there is a sort of longevity to this project, and that vendors are coming together to design a solution that will benefit the users and also allow us a degree of choice, in that if vendors are compliant and they conform to these protocols and these specifications, then we know that we have this path to migration, whatever we should choose to do in terms of our business direction. So it's wonderful to see that kind of collaboration, and that's only made possible through these kinds of standards that are being built out. Yeah, I think both of you point out something that's sort of indirectly important. We used to do standards, we'd have these standards bodies, the IEEE or W3C or whatever. The nice thing about them being community based now, and especially open source based, is you not only get standards that come with code, but to a certain extent sort of the economic viability of those standards works itself out. So OpenTelemetry is a great example, but there have been others, and for people that are customers or users of it, they want to know that as much as they want to know that it's a standard, right? They're betting on a technology; they want to know, is it going to be around for a while? Are there going to be people that support this?
I don't want it to be fragmented, because it's not good for Alex's business model, it's not good for anybody else's business model. So I think we've reached a point now where it's evolved, where it's not only the standard, which is great, but we get code, and there's also a certain amount of sort of economic Darwinism, if you will. So you know that the thing that you ultimately pick is going to be more viable than just the paper specs we used to have back in the day. Very nice. Being from a Java background, I can definitely recognize the paper specs, JSRs and so forth. So yeah, very good, very good. I think it's a sort of wrapping-up question now. We've covered sort of some of the APIs and the benefits of specs. I'd love to get your thoughts on whether Kubernetes is somewhat becoming a centralized control plane. These days it's almost a universal control plane, I guess, with the rise of custom resources, operators, which I think everybody mentioned. Brian, what do you think in relation to that? Do you think Kubernetes is becoming a universal control plane for modern platforms? You know, for some, yeah. Having done this for a while with Kubernetes, I think it is. I think of it as more of sort of a foundation for what's going to build on those control planes. So whether that control plane is at the service mesh level, whether it's sort of this cross-cluster networking thing like Submariner and other projects, I think it's a really, really good foundation. And what's going to get interesting with it is, you know, is the control plane multi-cloud? Is it sort of cloud to edge? That's where I think we've got a lot of flexibility, but the nice piece is we're not kind of shifting the underlying sand. So I feel really good about that. And I think, you know, we're coming up on KubeCon, and we used to call it KubeCon because that was the dominant technology.
Now, Kube is sort of the safe and boring piece of the pieces, and, you know, it's control-plane-con, and what will that evolve to, which is really exciting. I can only echo that, to be honest. I think the most powerful characteristic when it comes to Kubernetes is the fact that it's not opinionated. It has some assertiveness when it comes, for example, to the networking model, like every single pod has an IP. However, it is not assertive when it comes to the underlying technology that Kubernetes runs on top of. And this has been quite powerful, because you have a methodology that allows you to lift and shift your application pretty much anywhere. And over time, there has been this built-in integration where you can deploy this easily on the edge nowadays; it came first with the public cloud and on-premises, and more prominently now towards the edge, as I mentioned. And this, again, has been very much empowered by the fact that Kubernetes provides a very good set of primitives. And based on top of that, you can have these building-block principles, where you already have components that are working, that are stable, and you can build on top of them. And I think it has been mentioned many times that Kubernetes, again, is going to become the basics; it's going to be the boring bit, as I mentioned. And this, again, can be seen in the fact that the Kubernetes source code has changed over time, because at the beginning it would have had everything integrated within it. So, for example, the runtime component would be very deeply integrated there, same with the storage; but now these are components which develop completely independently. They have their own landscape and their own, pretty much, vendors and community around them. So just based on that, we see that Kubernetes is getting slimmer, like the binaries for Kubernetes. And I think this is the way to go forward.
Becoming slimmer means it's out there, it's stable, it's reliable, and people can use it for anything that they can build on top of. And I think, I think Kelsey said it, I thought I'd mention this: Kubernetes is a platform for building other platforms. So I think this kind of encapsulates that quite nicely. And this also reminds me, Katie, of one of your keynotes around the interoperability of the components of Kubernetes and around the CNI, for example, where it's essentially like a train station where people can come and build out their ideas and figure out where they want to go from it. And I think that's the beauty of it: now we see, especially with the latest generation of operator mechanisms being brought into that ecosystem, people are doing all sorts of provisioning. And so yeah, it's a great meeting place to start building out that platform, as you said. Any other ideas from anyone there? Viktor, you've been quiet on that one? Oh, that one. No, I'm quiet mostly because I completely agree. Usually, people associate Kubernetes with the machine that runs containers. To me, that's definitely not the main benefit. It's all about its API. It's about the scheduler. And we can see that in action already. People are running Mac farms based on the Kubernetes API, right, or control plane. We've seen Crossplane managing your infrastructure with, again, an API and so on and so forth. I mean, we've been scheduling VMs as if they're containers as well, right? So I would even make a prediction that the Kubernetes control plane will outlive containers, even. That's the real power of Kubernetes: it's in the control plane, not in the fact that it can run containers in your cluster. Great insight there. Great insight. So we'll definitely put the reference links to OAM and Crossplane in the show notes. And there's KubeVela, which is a reference implementation of OAM.
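Viktor's point that the power is in the API and the scheduler boils down to the reconcile loop at the heart of every controller: compare declared desired state with observed state and act on the difference. This standalone sketch, with made-up resource names, shows what an operator or a Crossplane provider conceptually does:

```python
# A toy reconcile loop: the pattern behind operators and Crossplane
# providers. Resource names and specs here are hypothetical.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to drive actual state toward desired."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

# The same loop works whether the "resources" are containers, VMs, or
# cloud infrastructure, which is why the API can outlive any one of them.
desired = {"db": {"size": "small"}, "vm": {"cpus": 2}}
actual = {"db": {"size": "large"}}
plan = reconcile(desired, actual)
```

In a real controller this loop runs continuously against the Kubernetes API server, which is exactly the "control plane" role the panel is describing.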
I'm not sure how we pronounce that one yet. But yeah, I'm definitely with you, Viktor. I believe that a good API often outlives the implementation, right? As a final outro question, as we're getting dangerously close to time now, I'd like to get the tweet-sized version, right, so 280 characters, of where you think the most innovation in software delivery is going to happen in the next five years. Is it languages, like we saw with Darklang, things like that? Is it architectures, the dreaded nanoservices? Or is it platforms, things like serverless platforms and other abstractions built on top of, say, the Kubernetes control plane? Is it, yeah, perhaps platforms that are going to be the most exciting? So: languages, architectures, or platforms? Where's the most innovation going to come from? Katie, I'm going to pick you. I was actually hoping you would, because I think all of these areas are going to be quite in development within the next years. But I'm actually quite biased in this perspective, because I've been following the platform space for a while now, and quite closely. I think here is where there's a lot of dynamics. But again, I'm seeing this from my own kind of bubble of technologists around this area. So when it comes to the platforms, I think there's definitely an improvement coming in how we use the platforms, how we deliver that developer excellence, operational excellence, simplifying it. We're still hanging around YAML manifests, which we shouldn't, and we even try to push this to our developers; it's still the case. I think moving away from that is definitely going to be a step forward. I'm personally quite excited to see how we can use some of these platforms to deliver these applications to the edge. This is something, again, which I'm following quite closely.
But I would like to see how this can be achieved seamlessly, while at the same time ensuring that kind of abstracted propagation of applications to areas closer to the users, pretty much at the edge. So yeah, from my perspective, it's going to be the platforms. But again, my view is biased here. Brian, can I ask you? Yeah, I think I'm going to place my vote in the architecture space. I think the biggest changes I've seen in the Kubernetes space, or even just the cloud native community, over the last five years have all been people pushing the boundaries on what you can run on these environments, what you can distribute, how big or how small. And I think we're going to keep seeing that. So whether it's architectures to make apps simpler or architectures to sort of deliver applications across multiple environments, I think we're going to keep seeing that expand quickly. Super. Viktor, you're next in the box. That's all right. I would go with platforms and languages over architecture. I somehow feel that architecture always follows those two. If you look at, let's say, microservices: nobody really, I mean, very few did them until we got containers that enabled it. So platforms and languages are usually enablers for different architectures. I mean, we will definitely see improvements in all three of those. But if I would need to place them in order, I would say that architecture follows platforms. A bit of disagreement. Fascinating. I like that. This is good; it's been quite an agreeable panel so far, we need some disagreement. Alex, I'd love to get your thoughts on this. I think that there is a quiet rebellion happening in the platform space, where the price point of commodity infrastructure, you know, QEMU and KVM virtualization mixed with things like KubeVirt with Kubernetes and other platforms, means that people can now assemble their own cloud providers, combined with the fact that there's, you know, really a hunger for developer experience offerings.
We're going to see in the next five years that there are competitors to Google and to AWS. And we're going to see that we're now starting to get developer-centric platforms being built with K8s just as a pure API and no more than that. So I think that it's going to be the community and the engineers that are guiding the next generation of innovation here. And that is a fantastic way to end the panel. Thank you to all of you. It's been super interesting. We'll do some Q&A, hopefully in the live event as well. But thank you very much for your time. Thank you for coming, guys. Thank you so much. Thank you for having us.