All right, I'm going to take it like this, I'm going to do better. Okay, let's go. Hi, everyone. We now have Gérald Croës talking about using Traefik as a Kubernetes ingress controller. Thank you. Just before I start, I have stickers, if you want cool-looking stickers for your computers. So, first of all, thanks for being here with me, despite the fact that it's getting late. It's always nice to have the opportunity to show developers how much fun, how much unexpected fun, you can have with a reverse proxy. And I know that it is customary to let speakers talk about their accomplishments, even though nobody really cares about them. The good news is that I don't have anything that truly qualifies as an accomplishment, so it'll be over soon. My name is Gérald Croës. I'm a senior developer at Containous. I publish stuff on Medium, dev.to, and Twitter about once a year, so if you like the presentation, you can follow me there. I do my best to write comprehensive articles on different topics. Note that one of my latest articles on Medium is more or less what we will see here today, so you might want to check it out later; it's a ten-minute read. Before working for Containous, I used to have a more traditional job, also known as not working on an open-source project. I worked for an insurance company where I led the transition from a legacy monolith to a microservices architecture, with success. The project involved 80 people using our ever-evolving architecture and needed a little more than two years of development to reach the point where we were somewhat satisfied with the result. To give you some numbers, we ended up with a cluster of 40 machines running a little less than a thousand microservices. Forty machines doesn't sound that big, but it represented five terabytes of RAM and 320 processors, which all of a sudden looks a little bigger. A word about Containous, which is my current company. We deliver Traefik, in the sense that we actively develop the product and we do all the tedious work: sorting issues, prioritizing features, writing documentation. Who loves documentation? It takes a lot of our time, but it's quite fun, because working on this project is a fantastic adventure. Important fact: we're hiring, so feel free to check out the available positions and to, literally, check us out. You might see that later. Anyway, today I'm here to talk about Traefik, a modern reverse proxy slash load balancer slash cloud native edge router, to use trendy vocabulary. If you've never heard about Traefik, a little catch-up: it's a popular open-source project with, to this day, more than 280 contributors. It might be 300; yes, it's 300 today. We have more than 150 million, actually 160 million, downloads and a little more than 17,000 stars on GitHub. So right after the presentation, nothing prevents you from using it, or making it your own by bringing new features with pull requests. It's really an open-source project. In the unlikely event that you stumble onto a bug, know that there is a helpful command line to auto-fill a bug report with your environment and configuration file, so it will give us all the information we need to help you out. Now that I'm done with the formalities, it's time we talk about Traefik. A reverse proxy, okay? Nothing that sounds too exciting, right? Even two years ago, when the project was born, as you probably know, there were already many production-proven reverse proxies available.
So you could legitimately ask the question: why did you bring us yet another reverse proxy? Why? And the answer lies in the same place the product was born. It was a regular office, with software architects at the ready to transform business challenges into regular software bugs. At the time, we were involved in passionate debates around microservices. To be honest, at the time we weren't even aware of the term microservices. We just knew that, to survive, we had to transform this sluggish monolith, which had become a software contraption nobody wanted to work on anymore. We had to go from point A, the big ball of mud on the left, to point C, the neat ecosystem of empowered components, each aware of its responsibilities. The promises of this new world were appealing: scalability for deployment and development, simplicity, security, efficiency. And the more we studied this architecture, the more we realized that we weren't alone. The vanguard of people trying to keep their sanity in an ever-evolving world was already there, and some small companies like Netflix had already worked on microservices and had open-sourced frameworks to make it happen. But unfortunately, at first sight, all these components seemed to go against our simplicity principle. All of a sudden, we were facing multiple components, and we weren't ready to deal with them. So our decision at the time was to steer clear of them, at least until their necessity was obvious. Little by little, our new infrastructure started to welcome additional elements. To decide which service would be hosted on which server, we used an orchestrator, which at the time was Marathon. To know where the services were, we used a service registry, Eureka. To make the services callable over HTTP, we used a reverse proxy that knew how to configure itself from the service registry; that was Zuul. To further customize the URLs to our microservices, we used a front reverse proxy, HAProxy. To configure this reverse proxy, we wrote custom scripts. To know when to rewrite the 2,000-line configuration file, we had to define timers and watchers to detect when services were deployed or removed. To ease the developers' life, we wrote a Java framework that was responsible for registering the services in the service registry. And because there is life beyond Java, we wrote wrappers to enable developers to use other languages like PHP, Python, R, or others. So in the end, it was a well-oiled machine, but not a flawless one. We had to struggle with synchronization problems between the service registry and the orchestrator. We had to fight against the delays of the multiple refreshes involved here and there. We had to ensure that no connection would be lost during an update. And then other issues arrived: health checks, readiness. So it did the job, but we kept thinking there had to be a better way. A way that would get rid of the multiple configuration files spread across the system, that would get rid of all the glue components, that would replace all the tooling. And unfortunately, there was none. So, luckily for us, someone didn't give up. He knew he could do better, and he started to work on a very small Go script that would answer the one important question that had not yet found its answer: where is my service? Because in the end, all the tooling involved around microservices is dedicated to this simple question. Maybe you can find Charlie while I talk.
Anyway, the reverse proxy's job, this edge router's job, is just this: to route the request to the corresponding available server. So, where is my service? The one incredible thing, when you think about it, is that all the components needed to answer that specific question are already there. One of the great advantages of microservices is that you can look at your service as a black box that does some job on its own. And since it is on its own, it is packaged to be deployed anywhere, anytime, on premise, on your infrastructure or on the cloud, as a container. And to handle this serious deployment task, we have many solutions at our disposal, one of them being orchestrators: Kubernetes, but also Swarm, Marathon, Service Fabric, Amazon, and so forth. These tools, responsible for deploying your service to the right place, already know the answer to that question: your service is there, I deployed it. So in the end, why would any of us ever write a configuration file that is merely a translation of information we already have? Because orchestrators know services by their names only, and they don't know, nor do they care, what those services can do. Services are workers that need space, power and network to do their job. Reverse proxies, on the other hand, need to accept requests and take the requests to the matching service. So not only do they care about where the services are, but they need to know what they can do. The configuration files for proxies are the glue components that declare the service capabilities and the rules, so requests will eventually end up on the right service. But what if I told you that you don't need these configuration files to join the two? What if your reverse proxy could use information already available? It may sound crazy, but Traefik was born from the simple idea that adding information to an existing database is better than duplicating that data into a specific configuration file. So, why Traefik? To keep things simple. Instead of writing a separate configuration file, you attach information to your services and containers, information that will be used by Traefik to create the routes in real time. This diagram tries to sum up what the product is about. On the right is your infrastructure: multiple clusters, multiple services. On the left are the incoming requests from the outside, with no knowledge of your IT, just plain requests. And in the middle, automatically connecting everything together, stands Traefik. To create the routes from the requests to the services, Traefik doesn't need to read the information from a configuration file; it knows how to deal with every major technology provider thanks to their APIs. And one of these technologies, again, is Kubernetes. And that's it. At its core, Traefik is really an open-source project that was born from this idea. So let's talk about how one would configure Traefik along with Kubernetes. For those of you who aren't familiar with the concepts of Kubernetes, let's do a very quick recap. Kubernetes is a cluster technology, which shouldn't come as a surprise. It makes a cluster of computers act as one, whatever their location is, whether they are virtual machines or bare-metal computers. Machines in a Kubernetes cluster are called nodes. They come and go. They can break down, they can be removed from the cluster, and new nodes can be deployed to handle peak loads. They are volatile. So we have places where we can deploy stuff, and now we want to deploy containers. Kubernetes adds a layer around containers: the pods.
Usually a pod will contain only one container, but just know that you can deploy more than one. You will deploy the pods telling Kubernetes how many of them you want at a specific time. Basically, you'll say: hey, Kube, I'd like to have two replicas for my web app and four for my REST server, and it will be Kubernetes' job to ensure that at any time you have this live on your cluster. So at this point, we have our... oh, sorry. Because there are multiple pods, you add the notion of services, which basically say: a service is any available pod that handles this specific task. So at this point in time we have our application on our cluster, but unfortunately, nobody can access the cluster. We have the option to expose each service directly, but it can prove costly because you'll need extra components from your provider, or we can choose to use ingresses and an ingress controller. This option consists of opening a door to your cluster for the outside. You first define rules: when the request looks like this, then go talk to service A. And to implement those rules, you use an ingress controller, a reverse proxy, an edge router, and Traefik is a great one. And among all its cool features, Traefik is super easy to set up. My recommendation for people not familiar with Kubernetes is to use Helm, which is a package manager for Kubernetes. Basically you do helm install traefik and you're good to go. Here is the actual command line I used to install and configure Traefik on my Mac, for Kubernetes on Docker for Mac, or whatever the name is (a sketch of it is reproduced a little further down). I asked it to enable the dashboard and to publish the dashboard on dashboard.localhost; this is the line right there. And when I go to dashboard.localhost, I can see the Traefik dashboard. It works; Traefik is my ingress controller. So now, just so you know, you can see that there is one service here, which is the dashboard itself, and one rule to redirect the traffic to the dashboard, which is: check if the host is dashboard.localhost. Before going further, I wanted you to have all the tools to keep using Traefik after the presentation, if you want, so I'll talk about the concepts you'll need later. And don't worry, there are only three or four concepts you'll need to understand. You remember the diagram. We said that on the left were the requests, on the right our infrastructure, and in the middle Traefik. So let's make it simpler and focus on a smaller chunk: a single request, a single service, on a single cluster. Let's see how the request gets handled by the service on the right. First things first, when you start Traefik, you need to tell it what infrastructure components you have. In our case, Kubernetes. Basically you start Traefik: hi Traefik, I'm using Kubernetes; and Traefik will answer: okay, got it, I will keep searching for new services. These infrastructure components are what we call providers, because they will provide all the information Traefik needs to route the incoming requests. Of course, as we saw in the full diagram, you can have Traefik configured to listen to multiple providers at the same time, but we will stick with Kubernetes. Then, because you don't want to waste computer time analyzing data you don't care about, you tell Traefik to open doors to the outside in the form of entry points. In their most basic form, entry points are just ports you will accept connections from. But you can go further and add authentication, SSL, and so on.
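To make that setup concrete, here is a minimal sketch of the install command. The exact chart values are an assumption on my part (they match the stable/traefik Helm chart of that era), and dashboard.localhost is simply the hostname chosen for the dashboard:

    # Install Traefik as the ingress controller and expose its dashboard
    helm install stable/traefik --name traefik --namespace kube-system \
      --set dashboard.enabled=true \
      --set dashboard.domain=dashboard.localhost

Under the hood, this boils down to the two concepts just described, a provider and an entry point, which in a hand-written Traefik 1.x configuration file would look roughly like this:

    # traefik.toml (sketch)
    [entryPoints]
      [entryPoints.http]
      address = ":80"     # the door we accept connections on

    [kubernetes]          # the provider: watch the Kubernetes API for ingresses and services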
Then, at some point, Traefik will detect new containers, and when it detects a new container, it will create what is called a backend: the configuration for a capability that exists on your infrastructure. Ultimately, the backend's responsibility will be to forward the request to a container capable of handling it. Backends, in their simplest form, are just IPs; they answer the question: where is the container? You can add more to them, like load-balancing rules, circuit-breaker rules, health-check endpoints, and so on. And now that we have defined backends, the only thing left is to define which requests should be routed to them. This is what frontends are about. They define the characteristics of the requests that have an impact on the way they should be handled. For instance, you will have rules on the hostname, to which you can add rules for paths, headers, and so on. And these rules will be attached directly to your containers, so you don't need a separate configuration file: all the relevant information for a component is attached to the component itself. So, to sum everything up: providers provide information so Traefik can configure itself dynamically. Entry points listen for incoming data and set apart the requests we want to analyze from the requests we don't even consider. Frontends contain the rules that will eventually route requests to the corresponding backend. And backends represent the available capabilities in your system, along with how to invoke them. In practice, we have this incoming request. It arrives on port 80, so we handle it. The frontend says: is it backoffice.domain.com? Yes, it is. So it routes the request to the corresponding backend, which holds the container IP, and the request goes to the container. These are all the concepts you need to understand how Traefik works. Finally, halfway through the presentation, it's time to deploy things on our cluster. So let's do an example. This is the story of a pod named whoami that is defined by this YAML (a sketch of the manifests follows a little further down). We have a deployment; it deploys the whoami application, which is a container. And then we have a service for the corresponding container. There, this pod meets an ingress. This ingress wants requests for whoami.localhost to go to the whoami service. So we define the ingress here; we add an annotation to tell Traefik: Traefik, use this ingress, process it. And the ingress says: if the host equals whoami.localhost, then go to the whoami service. You kubectl apply all the files, and in the end they have a route together: you have the new backend and the new route to your backend, dynamically and automatically. And this is an example of invoking the service. So it was definitely a cute example. For the sake of repetition, we will do the same example, this time with two replicas. So we define the who-are-you deployment, this time with two replicas of the who-are-you application. We have a service, this time pointing to who-are-you, and we have an ingress that will route requests for whoareyou.localhost to the who-are-you service. We kubectl apply all the files, and all of a sudden, Traefik detects the new route once again. But this time, since we have two replicas for our service, it creates two servers for the same backend, with one route. And it will be Traefik's responsibility to load-balance between the two available instances. And of course, if one of them breaks down, Traefik will update the route, and so on. It is completely automatic and dynamic.
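To make the whoami story concrete, here is a minimal sketch of the three manifests involved. It assumes Traefik 1.x, the containous/whoami demo image, and the extensions/v1beta1 Ingress API that was current at the time; the names and the hostname are simply the ones from the demo:

    # whoami.yaml (sketch)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: whoami
    spec:
      replicas: 1               # bump to 2 for the who-are-you variant
      selector:
        matchLabels:
          app: whoami
      template:
        metadata:
          labels:
            app: whoami
        spec:
          containers:
            - name: whoami
              image: containous/whoami   # tiny web server that echoes request information
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: whoami
    spec:
      selector:
        app: whoami
      ports:
        - port: 80
    ---
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: whoami
      annotations:
        kubernetes.io/ingress.class: traefik   # "Traefik, use this ingress, process it"
    spec:
      rules:
        - host: whoami.localhost
          http:
            paths:
              - backend:
                  serviceName: whoami
                  servicePort: 80

A kubectl apply -f whoami.yaml is enough for Traefik to create the backend and the route; with two replicas, it creates two servers behind the same backend and load-balances between them, exactly as described above.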
So this is the whole picture of what we did. We have incoming requests that arrive on the ingress controller. I will remove the mouse there. To route the request, Traefik uses the ingress and redirects the request accordingly. That's all there is to know. I don't know how much time I still have left. Ten minutes, okay. I will use the last five minutes there. There is much more to Traefik than just being able to detect services by itself. It has many features, and right before I take your questions, I wanted to show you a very popular one that deals with security. I was very proud of my Captain Traefik America there. If you're like me and you happen to have struggled with HTTPS the classic way, you probably hate the certificate mess. I used to accept the mess because I thought it was the price to pay for security. But that was until the day I discovered Traefik and its Let's Encrypt feature. Basically, with Traefik, all you have to do, with a regular cluster of plain old HTTP services that are not aware of anything related to HTTPS, is to configure your email address, and Traefik will talk to Let's Encrypt to get the certificates, to generate the certificates, and everything will be exposed through HTTPS. I almost lied, but this is the actual configuration file you would use for Traefik to configure HTTPS (a sketch of such a file is reproduced at the end, after the questions). There. Of course, there are other cool features. These are the different ways you can add basic authentication. But there are many, many, many other features in Traefik. We could have talked about the other providers, because everything that works for Kubernetes works for Docker, Swarm, Mesos/Marathon, Consul Catalog, Rancher, and so on. We have many reverse proxy features: rate limiting, circuit breakers, gRPC. We have security features, HTTP/2, WebSockets. It is compatible with many tracing and metrics tools, and other features will come, hopefully very soon. So, thank you, I hope, there. If you have any questions, I will be happy to answer them. For massive DDoS attacks? I can't hear you, I'm sorry. Massive distributed denial-of-service requests, do you handle those? I'm trying to listen very closely. Do you have facilities for distributed denial-of-service attacks? I didn't get the question at first. We definitely... it might not be the best front tool for DDoS, of course, but we have rate limiting and features like that which can help mitigate DDoS. But I might not recommend Traefik as the tool for protecting your IT from DDoS, and actually, I don't think it is the reverse proxy's job to do that. Are there any plans to add support for Kubernetes Secrets as a KV store for the ACME support? Because what you showed there will only work with single nodes. I don't want to say something silly, but I think that we support Kubernetes Secrets. Or maybe not, or if we don't... You support it for a different thing, for TLS keys, but not for the ACME keys. Oh, then there might be a pull request with that feature, or... I'm not the Kubernetes expert. I remember that there is a recommendation about it, but I can't tell you; I will look into it and answer your question right after the presentation, if you will. Any more questions? Okay, thank you. Thank you. We have the prize distribution at the keynote place, the one from this morning; it'll begin at three. And I'm very sorry for my English.
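As mentioned above, the whole Let's Encrypt setup fits in a small file. Here is a minimal sketch of that configuration in the Traefik 1.x TOML format; the email address, the storage file, and the choice of the HTTP challenge are assumptions to adapt to your own setup:

    # traefik.toml (sketch): expose everything over HTTPS with Let's Encrypt
    defaultEntryPoints = ["http", "https"]

    [entryPoints]
      [entryPoints.http]
      address = ":80"
      [entryPoints.https]
      address = ":443"
        [entryPoints.https.tls]

    [acme]
    email = "you@example.com"   # the one thing you really have to configure
    storage = "acme.json"       # where the generated certificates are kept
    entryPoint = "https"
    onHostRule = true           # request a certificate for every host rule Traefik knows about
      [acme.httpChallenge]
      entryPoint = "http"

The services behind Traefik keep speaking plain HTTP; Traefik terminates TLS at the edge and renews the certificates. As the audience question points out, a local acme.json file only works for a single Traefik instance; a shared store is needed when running several.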