focused on hardening the perimeter, hardening the edge of our networks. But as we all know from Star Wars, all it takes is a womp rat-sized hole and you've got a couple of torpedoes in your network and it's game over. So I'm not saying hardening the edge is a bad thing. Definitely it's all part of a defense in depth strategy, but you've got to think beyond the edge as well, about the internals of the system. And that's what we're gonna be focusing on today. One of the key things we want you to take away is that the UX of security matters a lot. Yeah, we as engineers, we're inherently a bit lazy. If something confuses us, we kind of sneak around it. Yeah, and I think this analogy with the door code, I've seen way too many times with security groups, with ACLs, with a bunch of other stuff. So security is super, super important. You've got to think about it throughout the stack and we've got to make the UX good. And that's basically, if you're only gonna take away a few things from the talk today, that's what Nick and I would really like you to take away. But security is everyone's responsibility, and it doesn't matter whether you're dev, ops, QA, architect, PM, everyone is involved in this kind of thing. A lot of us are modernizing our stacks as well. If you're into CNCF tech, you're looking at things like Kubernetes, like Envoy. And as we're making these network environments more heterogeneous, it can lead to new challenges around security. And we think defense in depth is vital. Edge and service communication is one part of the picture; in an hour, we're only gonna focus on this very small part. We're gonna do a couple of reminders around defense in depth in general throughout your application development lifecycle, but it's really important to have multiple layers of defense. When it comes to networking, Nick and I have bounced this around quite a bit. We were at KubeCon in Barcelona earlier in the year.
We had some really interesting chats with folks after our talk there. And kind of riffing off the slogan from the London Tube, our London subway: mind the gap is really important. It's very easy to have accidental encryption gaps or accidental permissions and so forth within your system. So we're gonna focus on making sure you're secure from end to end. And I've said it already, but I'll say it again multiple times: all security must have good user experience, or good developer experience, because if your whole team is not capable of using the system easily, they will skirt around it. Yeah, for better or worse, people will work around it. So, I think I skipped a slide there. This is me on the right, Daniel Bryant. I'm Product Architect at Datawire. We're behind open source tools like Ambassador and Telepresence. We specialize a lot in Kubernetes and cloud-native workflows. And my colleague today will be Nick from HashiCorp. I'll let you introduce yourself there, Nick. Hi, so I'm Nick Jackson. I'm a developer advocate at HashiCorp. And Daniel and I used to work together, which is how we first met, working on solutions very, very much similar to this. So I have had a real job as well. But, if I skip on: security is everyone's responsibility. And I'll hand it to Nick now. As we were putting this presentation together, there were a bunch of facts, Nick, that we saw, that we were like, whoa, weren't we? Yeah, and it's kind of interesting, I think, because you kind of know that the problem's big, because it's pretty widespread in the news. But when you actually look at the numbers: 214 records containing personal data are exploited every second. That's every second. That's a huge, huge, huge amount. And it doesn't just stop there, because you then look at the cost, right? So the average cost of a breach of personal information can be $3.8 million.
Now, the thing about that is that it doesn't account for mega breaches. For a mega breach, 50 million records, it can cost a company $350 million. And you say, well, where do these numbers come from? Well, the numbers come from accredited sources, but look at this. So here's the evidential proof. British Airways had a very unfortunate leak a couple of years ago. And under European legislation, GDPR, which protects the personal data of European citizens, and which doesn't just affect European companies, it affects any company globally which holds data on European citizens, they were fined 183 million pounds, or $229 million, which is absolutely staggering. And that's not even accounting for their own costs of putting the problem right. And then look at Equifax. So Equifax probably affected, I think the numbers were something staggering like 40% of individuals living in the United States, which is just crazy. But again, Equifax has been fined $700 million. So these are not just made up numbers, these are real numbers. We did actually take them from very credible sources as well. So, a 72% increase, right? It's not going away. I think Equifax and British Airways were back in 2017, and since then there's been a 72% increase in reported incidents. The numbers, which I really, really encourage everybody to dig into and just have a read because they're super interesting: the Gemalto Breach Level Index is a great report. You can download that free from breachlevelindex.com. And that will go into some of the organizations and some of the different methods which are being used to acquire this data. There's also a really interesting study produced by IBM: IBM's Cost of a Data Breach. And in this paper, what they've done is they've broken down all of the different constituent parts, such as internal investigation, external fines, loss of business.
And they've broken down what it actually costs an organization. So huge, huge numbers, a huge amount of money. And we're gonna help you solve that problem with open source, which is cool, right? Awesome. So we mentioned at the start about app modernization. It's a popular topic. Anyone doing a digital transformation, this kind of thing, you've probably heard the buzzword. What it's often about is embracing the cloud native technologies that we all know and love, things like Kubernetes here. And you've got your users, they're obviously consuming applications, and you're trying to, say, move things to Kubernetes, containerized workloads. But let's be honest, you know, as much as we all know and love Kubernetes, we can't do a big bang overnight from the existing tech. We jokingly like to say heritage or legacy or, you know, pick your name, but it's the money-making apps. Yeah, it's the apps that have allowed businesses to get where they are. You know, you've built firm foundations using existing technology, racking, stacking, bare metal, this kind of stuff, but you wanna embrace both these things. You wanna stand up new things on Kubernetes, things on cloud, but keep the existing tech working nicely as well. You probably also, if you've been around for say 10 or 15 years, you've probably got mainframes in the mix as well. Maybe you've got some kind of REST API on the top now, some way of consuming that data. You've probably also got some random, you know, computer under Bob or Jane's desk that does payroll, and we might have networked that in. And as much as we're trying to bring all these things together, a lot of these stacks, a lot of the technologies are quite heterogeneous. And you know, I'm talking from an infra level, from an OS level, from an app level, and putting these things all together is hard enough, let alone thinking about security.
We're gonna look now a little bit at defense in depth. So I mentioned up front, you know, defense in depth is vital. And Nick and myself, we really recommend these three books. If you're getting into security in general, and it can be quite a journey, being honest, it was for me and I'm not claiming I'm an expert, but I've really learned a lot from Adam Shostack in particular there. Great books on threat modeling, great online videos. And the other two books there are Zero Trust Networks and Agile Application Security. They've really helped me understand the challenges and the depth of security that I need to think about when I'm working with teams. Shameless plug: I actually wrote a book last year. If you're in the Java stack, like myself, Abraham and I put the book together, and it has a whole chapter just on thinking about security, things like that, if you're trying to build a pipeline in the Java world. But enough shameless plugs for the moment. Those are the three main books that we recommend. But you need to think about things like hardening and scanning infrastructure. You need to think about things like scanning your code, scanning any libraries you bring in, dependencies, SDKs, and you need to think about scanning the packages, be it VMs, be it debs, be it containers. Whatever you're packaging and deploying, you need to scan that now, because there's typically an OS with a bunch of things with an attack surface that we maybe didn't have to deal with as developers in the past. We handed that off to ops. Now it's much more visible in the threat landscape. We also have to think about things like encrypting data at rest. Should be a no-brainer. In most of the cloud services these days it's a simple flag in the console to encrypt data, and you can choose if you wanna manage your keys, and hat tip to HashiCorp Vault for key management and so forth. There's plenty of solutions out there that help you do that kind of thing.
We're mainly focused today on encrypting data in transit, so literally from end user all the way through to service, and also thinking about the principle of least privilege. So not only in terms of what people can do, both end users, say, and internal ops users, but also in regard to services. Can the web tier talk to the backend and the database, or can the web tier only talk to the backend? Does it need to access the database, for example? We'll definitely focus on these things using a couple of open source techs, and Nick will run you through a comprehensive demo at the end to hopefully make some of the things we're talking about over the next, say, 20 minutes a bit more concrete. That's the idea. But if you're thinking about exploring end-to-end comms, imagine some kind of system where you've got, say, Kubernetes and VMs in the mix, here on the screen. Many of us, and Nick mentioned actually, Nick and I worked in high street retail in the UK together about four or five years ago now, didn't we, Nick? And we used Consul at the time. We had some VMs. We were actually using Mesos back then, because Kubernetes wasn't really GA. And we were using Consul as a service discovery mechanism. Not only for, say, ingress, so if you're making requests against an API gateway, you can use Consul to direct the requests to the relevant services, but also internally. If a microservice needs to talk to other microservices to do its work, you can use Consul's distributed key-value store to understand, from a service discovery point of view, where you need to route the traffic to. Now, the reason we're talking about Ambassador and Consul is they're both using Envoy, Envoy Proxy, which is a CNCF technology. And we'll break that down a bit more later on. But there are many other solutions in this space, and plenty of other great open source things that you can go and have a look at as well. But I mainly work with Ambassador at Datawire.
Nick works a lot with Consul, obviously. So we're most familiar with these. But if you think about a user making requests to an API gateway, if that API gateway can link up to the service discovery mechanism, then it can forward on traffic appropriately. So we're mapping, say, a prefix like slash shop, and then the gateway can forward that request on to the shop front service. And that service may then need to reach out to other services, say in a Kubernetes cluster, or may need to reach out to something that's more in the heritage stack. So it might be going across networks, maybe a flat network, maybe a bit more complicated than that, different VPCs, that kind of thing. But that's fundamentally what we're gonna be focusing on today, this journey from a user making a request to the actual request being fulfilled by talking with multiple services in a kind of microservicey landscape. If we focus a little bit on the API gateway now, there are so many names in this space: edge proxy, ingress, things like application delivery controller. They encompass more or less things depending on the terminology used, but primarily it's about exposing internal services to end users, sometimes via multiple domains. So, you know, www dot google dot com, google dot internal, something like that, for example. The user shouldn't care, or shouldn't even know, what is serving the traffic on the backend. So the edge gateway, the API gateway, should really hide the fact that you're running stuff on Kubernetes or VMs or bare metal or whatever. We need to know as engineers, but definitely our customers don't. From a security perspective, things like TLS termination are really important at the edge. Things like enforcing minimum TLS versions. A common attack is to try and downgrade a protocol at the edge. So, you know, if you can get back to, say, TLS 1.0, where there are known issues, you can try and attack from that perspective.
You also want to do things like user authentication and authorization. You can use things like, say, IdPs, like Keycloak, Okta, you know, GitHub, social logins, these kinds of things. And often, if you're using, say, things like OAuth, you can get scopes and roles, and you can add these to tokens that you can then pass down through your stack. Obviously, the applications do need to be aware of the tokens they're passing down, but you want to centralize your authentication at the edge. You don't want to be doing it multiple times. And ideally, you want to have just one, or a very small number of, authentication solutions. I know many organizations we work with don't, they have multiple authentication solutions, but you want to try and consolidate them for ease of management if you can. Something that often gets forgotten about, and we see quite a bit working with folks at Datawire, is that you need to do things like rate limiting, because it's very easy to be attacked in ways that are not always intuitive. Things like denial of service, or trying to fuzz APIs, or trying to brute force the authentication are really quite common. So you definitely want to be thinking about this kind of thing at the edge as part of your entire solution. It's not just authentication and authorization, it's securing the transit with TLS, and it's using things like rate limiting and timeouts and things like that. Just using Ambassador as an example, it's a Kubernetes-native open source framework and the way we configure it is using CRDs. Again, many other frameworks exist and the CRDs are obviously different. We've used annotations in the past to configure Ambassador, but you can get a flavor for how you configure your edge gateway. Ideally, you want this to be loosely coupled config. You want individual service teams to be able to define a mapping for their service, exposing how end users consume it.
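To make that concrete, here's a rough sketch of what such a per-team Ambassador Mapping CRD might look like, using the slash-shop example from the talk. The names and timeout value are illustrative, not taken from the demo; check the Ambassador docs for the exact schema in your version.

```yaml
# Hypothetical Ambassador Mapping: a service team exposes its own
# service under a URL prefix, independently of other teams' config.
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: shop-front-mapping
spec:
  prefix: /shop/       # requests hitting /shop/ at the edge...
  service: shop-front  # ...are routed to this internal service
  timeout_ms: 5000     # per-route timeout, tunable by the owning team
```

Because each Mapping is its own resource, a team can ship and evolve its route config alongside its service without touching any shared gateway config.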
Maybe one team is routing via Consul, one team is routing via the Kubernetes API, and maybe there are different timeouts required. You want to decouple this mapping of endpoint, of web prefix, from the actual service. You obviously want to think about, probably more from an operational standpoint, centralizing some of the config though. So we have Modules in Ambassador, and other tools have very similar things, where you define, say, TLS and apply it globally. So we're saying in this config we're redirecting cleartext from 8080 to make sure everything's going over TLS. You might also set minimum TLS versions here, this kind of thing. TLS is really important to set up at the edge; even if you're using CDNs, and we'll cover that in a minute, it's really important to make sure your edge is secured well. This is the config I've already sort of walked through there. One thing I would say, and I've chatted with some folks at HashiCorp recently about this: friends don't let friends manually issue TLS certificates. There are fantastic services and tools out there these days like Let's Encrypt; you can get free TLS certificates from them. And hat tip to the Jetstack folks, a fellow UK company, who've done fantastic work with cert-manager for integrating renewal of TLS certificates with Let's Encrypt via Kubernetes. We use it a whole bunch, and other solutions of course do exist, but I'm a massive fan of cert-manager. So please don't be hand-rolling your TLS certificates; it's very easy to let them expire, and then you've got all manner of problems. On that note, I did mention CDNs, and Nick and I have chatted about this quite a bit, and Nick schooled me on some of this stuff with his use of Cloudflare.
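Going back to that global TLS config mentioned a moment ago, here's a sketch of what an Ambassador tls Module along those lines might look like. The secret name is illustrative, and field names can vary by Ambassador version, so treat this as a shape rather than a drop-in config.

```yaml
# Sketch of centralized TLS config via Ambassador's tls Module:
# terminate TLS using a cert held in a Kubernetes secret, and
# redirect any cleartext traffic arriving on 8080 up to HTTPS.
apiVersion: getambassador.io/v1
kind: Module
metadata:
  name: tls
spec:
  config:
    server:
      enabled: true
      secret: ambassador-certs      # Kubernetes TLS secret (assumed name)
      redirect_cleartext_from: 8080 # force everything over TLS
```

Minimum TLS versions can also be enforced at this layer, depending on the version of Ambassador you're running.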
Again, many CDNs exist; they are a very useful tool in your arsenal for things like DDoS protection. Cloudflare, Akamai, these have WAFs, web application firewalls, built in, and the whole point of a content delivery network is they can cache traffic and they have points of presence close to where your users are. So there are many benefits to using CDNs, but do take time to learn about the security config in your CDN, and Nick's gonna break down a bit more later on about, say, using origin certificates, so encrypting traffic between the CDN and, in our case, Ambassador, the edge gateway. You wanna be forcing HTTPS at the edge. You can use things like HSTS, just below, which is HTTP Strict Transport Security, which is interesting and worth checking out. You can even do kind of mTLS-style authentication between the CDN and the edge gateway, and again you can enforce minimum TLS versions on the CDN as well as on the actual origin. These are things that are totally worth thinking about, because I bumped into this paper a while ago, definitely worth a read if you're using CDNs. Some folks sometimes treat CDNs as a kind of magic security blanket that they throw over their application, thinking it's gonna completely make it rock solid, but as this academic research shows, it's very easy to have a badly configured origin which people can find and attack, and then they completely bypass the CDN and the WAF functionality. So this stuff is, as we keep saying, really important: think about the end-to-end and mind the gaps. Moving on a little bit more towards the service mesh space now. Service mesh is very popular, very buzzwordy. Istio has driven demand there. Fantastic work by Linkerd, fantastic work by the Consul team. This is a really active area of interest, we find, and the three pillars, as Nick's talked about before, of service mesh are observability, reliability and security.
And Nick and I, in that project we worked on a few years ago now, we wanted to do TLS between services, we were doing microservices, we wanted to have really good observability, but we found it really hard with a mixed tech stack. We had Java, we had Ruby, we had Go, and we were doing SDKs that were language specific. Pulling some of this stuff out into what's now called a service mesh makes it much easier to do. And really, a service mesh is fundamentally about exposing internal services to internal consumers. It's not really focused on the end users per se, it's more internal traffic within your cluster, within your data center, within your network. And you can even segment the networks up, as Nick will describe in a bit more detail. Kind of as I mentioned with the API gateways, a service mesh encapsulates the infrastructure within the DC, within your Kubernetes cluster, within your VMs, these kinds of things. You shouldn't really know or care where you're routing to. The service mesh should do service discovery and point you in the right direction. You need to care from a mechanical sympathy point of view, understanding at a tech level where you're routing to, but you ultimately wanna defer a lot of this to the service mesh itself. And the service mesh can handle things then like service identity. If there's a proxy, a sidecar process, sitting very close to your application, and if you can guarantee secure transport, say going over localhost, between your app and the sidecar, then the sidecar can do a lot of functionality in terms of generating identity, upgrading protocols, doing observability, mTLS, and your application doesn't have to know about it. So it can be a Java app, it can be a Ruby app, it can be a Go app, providing you've got a sidecar that is plugged into the rest of the mesh. The sidecar takes on the responsibility of doing things like service identity, and awesome stuff like access control lists.
Consul's got a concept called intentions, which Nick will demo. Really, I found this useful. You can segment the network and define what can talk to what at a service level, not at an IP and port level. So it's much more intuitive to reason about, and easier for me as a developer to understand: I'm allowing web to access middle tier, middle tier to access data store, for example, rather than subnets and IPs and all these kinds of things. You can also do things like enforce metadata, so make sure that what you're passing down through the stack is correct. Envoy has a lot of capabilities in this space. I know Consul is adding more and more support for this, but Envoy has some really rich functionality for enforcing what we're passing down through the microservice call chain. I've mentioned Envoy several times. Envoy is an amazing piece of kit. There are many other proxies out there, so, you know, shout out to NGINX and HAProxy, for example. Why I like Envoy so much is it was built in the cloud native era. Matt Klein, the whole Lyft team, and everyone supporting from Google and IBM have done fantastic work on this proxy. And to be honest, Ambassador is literally a control plane for Envoy. It's a specialized, you know, edge kind of control plane, but we provide simple instructions that compile down into Envoy config. Envoy config is easy to understand, but it's very cumbersome to manually generate. That's why we created Ambassador as an open source project. Consul is now leveraging the same thing. With Consul Connect, you can run Envoy sidecars next to your services, and Consul will manage things like the TLS certificate issuing. So you can have identities associated with each service, and it will manage a whole bunch of other things like service discovery for you, and you can collect telemetry, which Nick will break down more later. So Envoy, massive hat tip to the whole Envoy community.
They are a fantastic community, and I really think it's valuable to have Envoy throughout your stack, at the edge and down in the services as well. A little bit of Consul config. I'm sure Nick will be breaking this down a bit more, but if you're using Kubernetes and you're setting up, say, Consul with Helm, which is often what I do, there's a real nice Helm chart, consul-helm, on the HashiCorp GitHub. It's kind of like a three-liner, using a mutating webhook, to inject a configured Envoy proxy as a sidecar to your app, and then you can do some cool things, which Nick will talk about, with the protocols and specifying your upstreams and metric collection. Like, literally, for me as an engineer, me as a developer, it's this simple to plug my service mesh into the existing apps I'm deploying to Kubernetes. We'll talk a little bit more in depth now about network segmentation and about minding the gap. So there are a bunch of gaps as we look through this kind of stack, because you've got to think from the end user to the CDN, from the CDN to the edge, from the edge across to the first service, so edge gateway across into the mesh, and then to other services in the network, and maybe other services across networks as well. And Nick's going to talk about identity and network segmentation and the challenges you have there. So, the benefits that we get out of highly distributed architecture: I mean, we get the availability, we get redundancy. We get a lot of wonderful things by using modern schedulers like Kubernetes, but one of the things that causes us problems is that we need to change the way that we think about security and network security, because many of the traditional ways that we'd go about managing this don't apply anymore. So what do I mean? Well, the first thing is that a lot of trust used to be placed in the perimeter.
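The "three-liner" with the mutating webhook looks roughly like this in practice. These are real Consul Connect injector annotation names, but the service names, port, and image are illustrative, so treat this as a sketch of the shape rather than a working deployment.

```yaml
# Sketch: annotating a pod template so the Consul Connect mutating
# webhook injects a preconfigured Envoy sidecar into the pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        "consul.hashicorp.com/connect-inject": "true"           # inject the Envoy sidecar
        "consul.hashicorp.com/connect-service-protocol": "http" # enable L7 metrics/routing
        "consul.hashicorp.com/connect-service-upstreams": "middle-tier:9091" # local port 9091 -> middle-tier
    spec:
      containers:
      - name: web
        image: nginx:alpine
```

With the upstreams annotation, the app just talks to localhost:9091 and the sidecar handles discovery and mTLS to the middle tier.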
So in external firewalls, in external routing, and things like that. But one of the common factors in pretty much all of the recent attacks, certainly all of the big ones, is that the attack has actually come from within. And it's come from within because there's been, let's say, a vulnerability in an application framework, and that allowed an attacker to access the network, bypass the perimeter, execute some remote code, and then move laterally throughout the network. And that lateral movement has been the thing which has done the damage, not the initial attack. So we need to think about this. We need to think about how we isolate our networks. How do we stop somebody from going between services? How do we stop them from moving laterally? Well, we need to use internal network isolation, and again, this is not a new concept. Network segmentation has been around pretty much forever; it's like a 30- or 40-year-old concept. But the idea behind network segmentation is that you break up your network into areas of high and low risk, or into different areas of risk. And once you've got those different areas built, you strictly control the traffic which is allowed to flow between them. So you're in effect partitioning your network. For example, here I've got my front-end services, which are pretty much public, and then I've got my back-end services. These might have my financial data, my personally identifiable data. What I want to be able to do is strictly control the traffic that can flow into the back-end, because that remote code execution vulnerability will happen in the front-end segment. But we're not dealing with just virtual machines anymore. We are running multi-tenant nodes. We're running multiple pods, multiple containers on the same node. So we need to start thinking about how we do service-level segmentation.
And yeah, you can go so far with iptables and IPsec and things like this, but as Daniel mentioned earlier, really, when you're dealing with security, you want a good UX. You want it to be easy. It has to be actionable. Otherwise, the chances are you just don't do it. And the service mesh really tries to solve this problem for those dynamic environments. Now, we've defined what network and service segmentation is, but why does it not necessarily apply in dynamic environments? It mainly doesn't apply because it's really, really complicated, because network locations are not known. Even within your Kubernetes cluster, you've got NATing going on. You don't necessarily know which node a pod is running on. You don't know what the IP address of the pod is, or what the IP address of the node is. The nodes are changing all of the time because you're running them in an auto-scale group. Kubernetes is horizontally auto-scaling things. Everything is changing. So the traditional approach, where you may say, hey, I'm gonna go for clearly defined routing rules and firewalls, you just can't do it anymore, because you don't know what the locations are. So you need to rethink that concept. And you need to say, well, network location is not the thing that I wanna deal with anymore. What I want to deal with is network identity. Service identity should be the thing that I'm concerned with, not location. And again, this is the premise on which service mesh security is built. It's all about network identity, enforced through mTLS certificates. Yes, and I'll wrap up before we get to the demo there, Nick, because one of the key things we've said is about being able to identify various things within the stack. So the edge gateway, proving that the edge gateway is what it is when it's speaking to services, and when services are speaking to other services, being able to say, I am the web server.
I am the backend. I am whatever it is. Being able to prove, using things like X.509 certificates and a trusted CA, being able to prove, for this mTLS, this mutual TLS, in addition to just encrypting the actual network traffic, being able to prove identity between services is very powerful. And we showed you some Consul config earlier on, and Nick's gonna do a bit of a live demo in a second. But if you're looking to create intentions, you're able to say, in this case using the command line interface with consul intention create, we're creating a deny rule so that the web cannot talk to the database. So I mentioned earlier the web, middle tier, database example. It makes sense that the web can talk to the middle tier and the middle tier can talk to the database, but web shouldn't really be able to talk to the database, because the web may be in the DMZ and you may have credentials in the database, for example. So using this kind of service-level breakdown of what can talk to what, I personally found it very intuitive. Now, you don't have to use the Consul CLI or API. If you're a bit of a hipster, and Nick totally introduced this to me, SMI, the Service Mesh Interface, was announced at the recent KubeCon in Barcelona. Microsoft, HashiCorp, a bunch of good folks are involved with this new spec. And it's a way of defining, at kind of an abstract level over all the service meshes, concepts like intentions, concepts like access control lists. And kudos to Nick: in his Emojify demo he's already created an SMI example, using Consul underneath, but defining CRDs based on SMI. So there's a TrafficTarget CRD, which allows you to map to various routes within your cluster, and you can specify what can access what. And it's default deny. So you add on these traffic targets saying, from a source of Ambassador, and we're identifying it using the Kubernetes service account, Ambassador can talk to the destination of the Emojify website, again also identified by a service account.
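As a sketch, the CLI intention described above is roughly `consul intention create -deny web db`, and the SMI equivalent of the Ambassador-to-website rule might look like the following TrafficTarget. The API version, namespace, and names are illustrative; SMI was still an alpha spec at the time, so check the current spec for the exact schema.

```yaml
# Illustrative SMI TrafficTarget: with default deny in place,
# explicitly allow Ambassador (identified by its Kubernetes service
# account) to call the Emojify website service.
apiVersion: access.smi-spec.io/v1alpha1
kind: TrafficTarget
metadata:
  name: ambassador-to-website
  namespace: default
destination:
  kind: ServiceAccount
  name: emojify-website
  namespace: default
sources:
- kind: ServiceAccount
  name: ambassador
  namespace: default
```

Because it's just a CRD in YAML, this rule can live in Git next to the rest of your manifests, which is exactly the GitOps angle discussed next.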
So I've really enjoyed playing around with the SMI stuff. Nick's walked me through a bunch of this, and it's a real nice way of defining these things. Using GitOps, and we all know and love good GitOps, hat tip to the Weaveworks folks, if you're using a GitOps pipeline you can define all these rules in YAML, in Kubernetes CRDs, like the rest of your workflow, like the rest of your pipeline. So it just makes it easier to do the right thing. I'm a massive fan of making it easy to do the right thing and making it hard to do the wrong thing. So this for me was a real nice find, and kudos to everyone involved in the SMI. Right, Nick, I shall hand over to you now. I hope the demo gods are smiling on us. You are the man in charge of this one. Okay, I will just share my screen. There we go. So let's have a look. So what we're gonna do is we're just gonna look at how we can set up Ambassador, and it's pretty easy. And we'll look at some of the configuration and, fingers crossed, it'll all work. But we'll look at the full process anyway. All right, so what have I got? So I do have my Kubernetes cluster running and I've got some stuff already installed. I do have a couple of applications, and I've got Consul deployed onto Kubernetes there. Consul, I actually deployed with the Helm chart, so it was literally a case of helm install, which was super nice. Lots of sane defaults there to get you up and running, but it's pretty straightforward, and you can find that on the HashiCorp GitHub, and it's consul-helm. We are working on getting that pushed into the official Helm repos, but for now you can grab it from there. So we've got Consul installed. We've just got a couple of apps; again, this is just to save a little bit of time. Let's take a quick look at Consul running here. I'm not exposing Consul or any of my UI to the general public, because that would be a bad thing.
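Reaching an in-cluster UI like this without exposing it publicly can be sketched as follows. The service name and port name here are assumptions, since consul-helm prefixes resources with the Helm release name, so adjust to match your install.

```shell
# Start the Kubernetes API-server proxy locally...
kubectl proxy &
# ...then reach the Consul UI via the service proxy path, e.g.:
# http://127.0.0.1:8001/api/v1/namespaces/default/services/consul-ui:http/proxy/
```

The point is that only someone who can already authenticate to the Kubernetes API can reach the UI this way.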
I'm just gonna access it through kubectl proxy. So this will just open up my Consul UI. Come on, thank you. And there we go. So that's all up and running, and we can see some services in there. Consul will also sync the Kubernetes services that you've got defined, so that you can do that two-way traffic stuff, but I've just got a bunch of stuff there. So let's begin. So the first thing that we need to do is we need to install Ambassador. And installing Ambassador is super, super easy. We have the configuration, and I've literally just downloaded this straight from the getambassador.io website. Really easy to get this set up and installed, so everybody should check that out, please. It's like so good. So we have the standard sort of setup. We do need some RBAC in order for Ambassador to work, because Ambassador is gonna need to listen to some Kube APIs in order for its CRDs to work. So we do have those CRDs in there, and then we've got the pod spec and things. It's fairly straightforward Kubernetes stuff. Ambassador works with Linkerd and Istio and Consul, so you can work with a bunch of different service meshes — you leverage the various plugins. So I'm gonna install as well the Consul Connect plugin. Again, this is just stuff that I've downloaded from the Ambassador website. There's nothing really clever going on there. It's just standard Kubernetes services, RBAC and deployments. And I'm gonna need a service, because I wanna be able to direct my SSL and my standard HTTP traffic to Ambassador. So one thing that we also need to do is use the Ambassador annotation here, because we need to configure Ambassador to tell it that it should be using the Consul resolver. So it's gonna use the Consul resolver for its service discovery.
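The resolver configuration mentioned here might look something like the following — a hedged sketch, since the exact names (`consul-dc1`, the Consul server address) are assumptions, and the annotation style follows the Ambassador v1-era `getambassador.io/config` convention:

```yaml
# Sketch: telling Ambassador to use Consul for service discovery,
# via an annotation on the Ambassador Service. Names are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: ConsulResolver
      name: consul-dc1
      address: consul-server.default.svc.cluster.local:8500
      datacenter: dc1
spec:
  type: LoadBalancer          # receives the HTTP/HTTPS edge traffic
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
  selector:
    service: ambassador
```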
So let's get that up and running, because it can take a minute or so to configure the load balancer. So, kubectl apply -f, and I can run everything there. Right, cool. Now what we need to do is configure our edge with TLS, and Daniel spoke about why that's really important: we don't wanna leave any gaps. So the setup that we have for you is that we are using Cloudflare, and we're using Cloudflare's WAF, and that's at the very edge. So our HTTPS certificate at the edge is provided by Cloudflare, and the Cloudflare WAF is gonna be protecting the origin. What we then need to do is have Cloudflare speak to our origin and, again, we need that to happen over HTTPS. We don't want any gaps. And what you can do with Cloudflare is that you've got the capability of accessing origin certificates. So with an origin certificate, when Cloudflare makes a request to the origin — which is Ambassador — it's gonna validate that the certificate is correct. So it's gonna make it really difficult for anybody to spoof that origin. And this is all available. So I've already downloaded this certificate; I've got it sitting here on my computer. So if I just take a quick look at that — what did I call it? I have it called origin.cert. Standard TLS certificate, so nothing clever going on there. And what I'm going to do is use one of the Ambassador CRDs, which is this Module here. And this is what I'm going to use to configure my Ambassador to use the Cloudflare origin certificate. It's gonna just use a Kubernetes secret. So what I need to do is load my certificate and my key into a Kubernetes secret. So I can do this. There we go. Just using, again, the standard Kubernetes workflow, kubectl create secret, and I'm going to give it the origin certificate and the key there. That's gonna create that and add it to Kubernetes, and Cloud— sorry, Ambassador is now gonna be protected with TLS.
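Putting the origin certificate into a secret and pointing Ambassador's TLS module at it might look roughly like this. The secret name `ambassador-certs` and the file names are assumptions, and the Module shape follows the Ambassador v1-era convention:

```yaml
# Created beforehand with something along the lines of:
#   kubectl create secret tls ambassador-certs \
#     --cert=origin.cert --key=origin.key
# Then the Ambassador tls Module references that secret:
apiVersion: getambassador.io/v1
kind: Module
metadata:
  name: tls
spec:
  config:
    server:
      enabled: true
      secret: ambassador-certs   # Cloudflare origin cert + key
```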
So let me just double check that that Module's been applied. Yeah, that's unchanged. All right, so we're good. So let's see if our load balancer has come up. So our load balancer there — I've got this stack running in DigitalOcean, who I absolutely love; I think they're a great setup. And I do have a different IP address, so let's now configure my DNS there in Cloudflare. So, right, just setting my domain name up. So what's next? So I've got Ambassador running, I've got my edge set up. What I need to do now is look at some Ambassador configuration, because at the moment, even though everything's kind of set up, there's nothing routing. So let's take a look at that configuration. For the Ambassador configuration, I'm going to use, again, the CRDs. So I've got the mappings here, and this is my first mapping. So I'm just gonna say anything for slash goes to my Emojify website sidecar proxy. So I'm sending any of my root traffic to the Envoy proxy for my Emojify website. What I'm also doing is using host header matching, and that's gonna give me the capability of using the same load balancer for multiple domains. I have my API. So I'm running the website as a React.js website — the website has no backend; the backend is a separate JSON-based RESTful API which is accessed directly. And I've got that mapped on a path of /v2/api. So again, it's on the emojify.today domain. So I'm using the prefix this time, again with a host, and I'm gonna map it through to my Emojify API service. Setting up configuration elements — I'm using default stuff here, but I can configure round robin, different load balancer types, timeouts, retry policy, all of those wonderful reliability patterns I can do in there. So I was using that host mapping because I have two domains: I've got grafana.emojify.today and I've just got my main website. Because I can do this with Ambassador, it means that I can secure both of those with HTTPS.
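The two mappings described here — root traffic to the website, /v2/api to the API, both matched on host — might be sketched like this; the service and resolver names are assumptions, not the exact demo values:

```yaml
# Sketch of Ambassador Mapping CRDs with host-header matching,
# resolving upstream services through Consul.
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: emojify-website
spec:
  host: emojify.today
  prefix: /
  service: emojify-website-sidecar-proxy  # Envoy sidecar, via Consul
  resolver: consul-dc1
---
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: emojify-api
spec:
  host: emojify.today
  prefix: /v2/api
  service: emojify-api-sidecar-proxy
  resolver: consul-dc1
```

Because both mappings match on `host`, a third mapping for `grafana.emojify.today` could share the same load balancer, which is the point being made here.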
I can secure the traffic right through the stack, but it also means I can use a single load balancer, which is kind of nice. So let's apply that. Oops. I'm gonna apply those Ambassador routes, and what's gonna happen is Ambassador is going to reconfigure itself, and that'll happen pretty much automatically because of the CRDs there. Let me see if I can just get the Ambassador dashboard up. Again, none of this is exposed — I'm using kubectl proxy, which is a super nice way to just get into that. But here's my Ambassador dashboard. You can see that I've got my routes. You can also see that these are not healthy, and that's because we haven't configured the security on those yet. So if I go over here and I refresh, it's still not gonna work. And the reason that it's not going to work is that I haven't configured the intentions between the services — I want to be explicit about the intention that one service can talk to another. Now, I could do that through the UI. That's not particularly great infrastructure as code, but I can also do it through SMI, and one of the things I love about SMI is that it gives you that consistent workflow no matter which service mesh you're using. And I really do think it's a great initiative. So let's see how easy it is to configure the security. So Daniel showed you a snippet before, but this is an SMI specification. Again, more CRDs — that great workflow that you're used to. And I can explicitly configure the allowed routing between sources and destinations. Let me just apply that. So they're all being created. So while those are being applied — and that's gonna be pretty quick — what I wanna just quickly show you is: how do you actually service-mesh-enable your application? And you can do that really easily. So with Consul, we use annotations; other meshes might use CRDs, but either way, it's still pretty straightforward.
I don't have to define the actual Envoy sidecar. I'm just configuring these annotations to say that I want it to happen, and when I run my pod, that sidecar's automatically gonna get injected. So I've only got two containers here: I've got my API and I've got StatsD, because my API is using StatsD, my Envoy is using Prometheus — metrics, all the things. Like, we're going all in. But again, really, really straightforward, I hope. And refreshing my intentions there, you can now see that that SMI configuration I applied has now clearly defined those permissions. So the important one down the bottom here: all services are not allowed unless there's a clearly defined override. So Ambassador is allowed to talk to my Emojify API, Ambassador is allowed to talk to my website and to my Grafana instance, my API is allowed to talk to my cache, and my API is also allowed to talk to my face-detect service. Now, the moment of truth. And it works. So we've now got everything up and running. Let me just give that a quick test. If you're on the webinar and you wanna try this, it's public — it's https://emojify.today, and I'm probably gonna regret saying that, aren't I? Because it's not a great big server. But that's working. So Emojify — soon to be a CNCF project, it's incredible — basically allows you to replace faces with emoji. And if I do a search for Daniel Bryant, let's have a quick— this could be embarrassing. There's a lot of wrestlers out there called Daniel Bryant, I think. Well, I think we'll grab this one here. Copy image address. All right, there we go. And I'm just gonna paste that into there, and hope that that is a JPEG image, and wait. And there we go — that's worked. So we've literally just configured that end-to-end process. I mean, we didn't install Consul, but honestly, give it a try. Like, it takes a couple of minutes.
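The annotation-driven injection referred to here might look like this on the API deployment. The service names, image and upstream port are assumptions for illustration, but the `consul.hashicorp.com/connect-inject` annotation is the documented mechanism:

```yaml
# Sketch: opting a pod into the Consul Connect sidecar injector
# so the Envoy proxy is added automatically at deploy time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: emojify-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emojify-api
  template:
    metadata:
      labels:
        app: emojify-api
      annotations:
        consul.hashicorp.com/connect-inject: "true"
        # Declare upstreams so the sidecar exposes them on localhost
        # (service name and port are illustrative):
        consul.hashicorp.com/connect-service-upstreams: "emojify-cache:6379"
    spec:
      containers:
      - name: emojify-api
        image: emojify-api:latest   # illustrative image reference
```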
It's basically just the amount of time it takes for the container to pull onto your Kube cluster and for the health checks to start passing. There's my Grafana dashboard. So we've got traffic running through there — all of the observability which we get out of Ambassador and Envoy, which is pretty nice to see in just a super simple dashboard. And I think — have I missed anything? But I think I'm pretty much done there, Daniel. Looks good, thank you very much. Yes, I just want to wrap up with a quick conclusion slide, then we'll grab some questions if we've got any. I can steal the screen share back. And this is literally just referencing what we said at the start. So hopefully that's given you an idea of the kind of end-to-end flow, with both the high level and the actual code itself. All the code that Nick demoed is available on the interwebs — the Emojify app; I've got the links in the next slide, actually. So you can download the code and run through the whole thing yourself. I've done that a few times, and I find it really useful just for understanding all the components involved and, as Nick says, how to adapt your workflow. With things like SMI and so forth, you don't really have to adapt your workflow: if you're using Kubernetes, you're using YAML, it's pretty much the same process. But I find that's really nice. The Emojify app is a really nice way to play with all this stuff together. But in conclusion — I mentioned it at the beginning, and Nick and I really think this is true; we've worked in a variety of roles, some together and obviously many apart — security is everyone's responsibility. You really can't underestimate that. This is really, really important. As you're bringing in more stuff to your stack — by all means, Kubernetes, VMs, bare metal and these things — bear in mind that the heterogeneous nature of this means, as Armon from HashiCorp says, it's a multi-cloud, multi-platform, multi-service world.
And we have to respect that from an operational standpoint, but definitely from a security standpoint too. We have to make it easy to do the right thing. Defense in depth is really important. We've only really focused on edge and inter-service comms today. Clearly there's a whole bunch of other things involved with security, which we're not trying to minimize — they are super, super important. We've only got an hour, so we've only focused on so many things, but we think this is really important, this security all the way through. And it's now much easier than it was, say, a year, two or three ago, because things like service mesh and things like SMI are making it much easier for us as engineers to configure this stuff. Do be careful at minding the gaps, though. It's all too easy to misconfigure an origin, or not have TLS on the first hop from gateway to service mesh. I've seen a bunch of these things over the last few months. We really need to pause, think about your threat model, think about your whole flow end to end, and mind the gaps. And we've said it before, but make it easy to do the right thing. All security must have a good user experience, otherwise engineers are gonna work around these things. And the only thing worse than having no security is having security that no one uses, because you think you're secure and you're actually not. So this stuff is really important. That's the final slide — a bunch of references. We've done some stuff on InfoQ, we've talked on the blogs, and you can find this on the Kubernetes blog as well. And hat tip to Todd Radel and the team at HashiCorp — they put together an Instruqt online tutorial where you can play around with Ambassador, play around with Consul. I think SMI might be there; if it's not, we can soon add that, I'm just thinking.
But yeah, hat tip to Todd and the team — a fantastic way to have a playground to safely try this stuff out without actually having to install anything on your local machine. On that note — and the code examples, of course, are in the Emojify app, courtesy of Nick there — I think we're ready to take any questions, Nick, Sheila. So thanks for everyone's attention. Wonderful, thank you Daniel and Nick for a great presentation. We do have some time for some questions — I think we have about six minutes. If you have a question that you'd like to ask, please drop it in the Q&A tab at the bottom of your screen, and we will get to as many as we have time for. And I think there are two right now, Nick and Daniel, if you wanna take a look. Yeah, they're very, very similar. I can probably answer both of those with the same answer. So, when we built Consul — Consul is a cloud-native product; it predates modern schedulers like Kubernetes, but it was definitely designed for the cloud-native world. Now, when we introduced the service mesh features, we had to consider that a lot of folks were still using Consul with virtual machines, that they were running it in EC2 instances in AWS, in GCP and in Azure, and that they needed the ability to extend the service mesh beyond their greenfield Kubernetes cluster to actually encompass the entirety of their estate. So Consul was designed to be incredibly performant, to handle the large workloads that we had existing. And from a service discovery perspective, Consul's always had service discovery baked in. We leveraged that kind of neutral service discovery so that pods would be able to talk to virtual machines, virtual machines would be able to talk to pods, and Kubernetes clusters which are completely unfederated would be able to communicate together.
So it was designed predominantly for performance, for large environments. I do want to caveat that and say that you don't need to be running 20,000 nodes to be able to use Consul and Ambassador. It scales regardless of what you're using. I've got like four pretty generous nodes in DigitalOcean there. There are folks using it with hundreds of nodes and, as I say, scaling it up to the tens of thousands. So it really scales right across. I saw someone asking about sharing the links again. We'll share the deck afterwards, everyone, but I'll just put it on screen now if people do want to make a quick note — in particular the Emojify code. Thank you. And do we have any other questions? Going once, going twice. I don't see any more listed. You can always find Nick and me on Twitter, so you can always ask us on there. You can find us at all the KubeCons — hopefully KubeCon NA coming up, or other conferences too. So please do come and approach us; we like to answer questions in the real world as much as in the Q&A here. Thank you, Sheila. Great. Well, thank you, Daniel and Nick for a wonderful presentation again. And that is all the questions we have time for today. Thank you all for joining us. The webinar recording and slides will be online later today. We're looking forward to seeing you at a future CNCF webinar. Have a great day, everyone. Thanks, everyone. Thanks very much. Bye.