Welcome to my presentation, titled "Tools I Wish Existed Three Years Ago to Create a SaaS Platform". My name is Mauricio Salatino and I will be talking about a bunch of open source projects. I'm @salaboy on Twitter, and I'm a staff engineer at VMware working 100% of my time on a project called Knative. I'm a Kubernetes addict and I really enjoy open source, so I enjoy building frameworks and abstractions that help you build better applications, faster. I share my learnings on my blog, salaboy.com, so feel free to check it out if you're interested. Before I start the presentation, I wanted to share that I'm writing a book for Manning titled Continuous Delivery for Kubernetes, which just entered the Early Access program; you can find all the details about the book and the topics I'm covering at this link, and if you're interested, feel free to join the program.

During today's presentation I will be talking about the anatomy of a software-as-a-service platform from a developer perspective. I consider myself a developer, so I'm looking through that lens, and I'll go through building and packaging pretty quickly. This presentation isn't long, so I will just show some projects you can use for building and packaging, then talk about application infrastructure, which is a little longer but still rushed given the time I have, and finally talk about making your developers' lives easier by installing tools like Knative.

Let's start by defining what I think a software-as-a-service platform is, and why it needs to be self-service and multi-cloud enabled. When I think about building a software-as-a-service platform, I feel you need a self-service approach: whoever your customers are, you need to give them a way to parameterize your platform, click a button, and have components or applications provisioned so they can just go and use them. "Customers" in quotes, because they can be internal or external: you can have internal teams from different departments that need components provisioned so they can do their work, or they can be external, real customers. If they are internal, sometimes we can decide where to run the software and sometimes we can't, but if we can, our life is definitely easier. If they are external, paying customers, we need to figure out whether they are okay with us choosing where to run our services; for some industries that's fine, while other, more regulated industries require you to run on cloud providers your customers are allowed to use by regulation. You also need to figure out how you are going to charge for the service, which is a completely different topic that I'm not covering here but is usually a very important question to answer. A more advanced topic for software providers, which is where my experience tends to be, is that you may also need to let your customers run your software-as-a-service platform in their own on-premises or cloud environments, again because of regulations or the nature of the software you are providing. That's much more technical, but I've seen it happen a lot in the tech industry.
So, as you already saw, being able to choose between different cloud providers and having a multi-cloud strategy in place for building your platform is really important, but having clear definitions is even more important, and this blog post from HashiCorp's CTO, defining the different dimensions of what multi-cloud means, is pretty spot on. When I read it, I realized that what I'm talking about in this presentation is all about the first two categories: workload portability and workflow portability. The main reason I'm a hundred percent sure that what I will be showing you gives you that is that we are using the Kubernetes APIs as abstractions over different cloud providers, and as long as we rely on those APIs we should be fine using the same tools across providers and moving our workloads from one provider to another. I will not be covering data portability or traffic portability; I'll tackle those topics when I build more complex examples than the ones I'm showing today.

The scenario for this presentation is a conference platform: you can create instances of it and organize your conference on top of it. It has different sections and different services implementing that functionality: an agenda service, a speakers service, a proposals service, and so on. All these examples can be found in the repository linked below, salaboy/from-monolith-to-k8s, where you will find step-by-step guides and tutorials for all the tools and examples I'll be showing during this presentation. I'm building these conference instances out of different services that all talk to each other using REST calls, and of course in real life you will have more and more services, so you need to prepare your cloud-native application to grow, have the right tools for doing that, and automate as much as possible.

What I really want to build is something more like this: a self-service conference platform where your customers can go, fill in some parameters, and create different instances for different events. You have a conference in Argentina, one in the UK, one in Denmark; your customer goes to a form, enters some information, clicks create, and a new instance of the platform is created for them. When they don't need it anymore, for example after the conference has happened, they can just shut that instance down, because nobody is going to use it again. That's what we are aiming to build, and all the tools I'll show you next are related to that.

Now that we know what we're building, let's talk about building and packaging with Tekton and Helm. Since the application is composed of four services, we need a way to automate building all the artifacts, which includes testing them and publishing them to different repositories. For that, I usually think you need a pipeline engine; in this case I'm suggesting you take a look at Tekton, but the concept I want to share is the idea of service pipelines: for every service that we want to build, we should have a pipeline that automates the whole build process for all of its artifacts.
That service pipeline also watches the Git repository to see when changes happen. If you follow a trunk-based development approach, every time you merge something into the main branch you create a release of your artifacts, and you want all of that automated for you. The pipeline engine running these pipelines needs to be configured correctly, with all the infrastructure required to publish those artifacts at the end of the day.

A different kind of pipeline is the environment pipeline, which syncs a Git repository containing the configuration of the services you want running in a Kubernetes cluster with a real, live cluster. This is more commonly known as GitOps, but here I'm calling them environment pipelines because for every environment you want to have, for your developers, for QA, for test customers, or for other scenarios, you will need one of these pipelines that takes the configuration from a Git repository and applies it to a live cluster. That way you avoid going to the cluster and manually making changes, which can cause a lot of problems. You can implement this with tools other than Tekton, but I'm suggesting Tekton because it brings a declarative approach where you define these pipelines as Kubernetes resources and they run inside your Kubernetes cluster as well.

Let's take a quick look at a demo of how Tekton works and how you define these pipelines using Kubernetes resources. When you use Tekton, you install it inside your Kubernetes cluster; in my from-monolith-to-k8s repository you can find the instructions to install Tekton and to configure it to run the pipelines I'll be showing, which build the services of my application and sync environments in live clusters. If you go to those instructions, you'll see that Tekton also comes with an optional component called the Tekton Dashboard that you can install. What I have here are two pipeline definitions: one for a service called api-gateway, which contains all the steps to build that service and publish its artifacts, and the staging environment pipeline, which looks into a Git repository holding the configuration of what I want deployed in the staging environment and syncs it to a configured Kubernetes cluster. If I want to run these pipelines, I do it by sending a command to Kubernetes (you can also find it in the repository) that creates a new instance of a pipeline, a PipelineRun in this case, with a bunch of parameters: the branch you want to build, where the Dockerfile is, the user you want to use to publish container images, the version you want to generate, and so on. As soon as you enter all the parameters, you get a new instance of that pipeline running, which you can monitor here under PipelineRuns. So I have a new instance, and I created it by defining a Kubernetes object.
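To make that concrete, here is a minimal sketch of roughly what one of those PipelineRun objects can look like. The pipeline name, parameter names, and repository URL below are hypothetical placeholders; the real definitions live in the from-monolith-to-k8s repository.

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: api-gateway-service-pipeline-run-
spec:
  pipelineRef:
    name: service-pipeline            # the reusable service pipeline template
  params:
    - name: gitRepository             # hypothetical parameter names
      value: https://github.com/example/api-gateway
    - name: gitBranch
      value: main
    - name: dockerfile
      value: Dockerfile
    - name: imageRegistry
      value: docker.io/example
    - name: version
      value: 0.1.0
  workspaces:
    - name: sources                   # shared workspace holding the cloned sources
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
```

Creating an object like this (for example with kubectl create -f) is what starts the new run that you can then watch in the dashboard.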
The pipeline itself is also a Kubernetes object that I can show you here, under resources, service-pipeline: it's a Kubernetes resource that you apply to your cluster, and then Tekton has that definition available. Its kind is Pipeline and it comes from Tekton, and the important part, besides all the parameters, are the tasks, which are the things you want this pipeline to run. In this case I'm cloning a repository using a bundle from the Tekton catalog; cloning a Git repository is such a common need that they already include that functionality for you. It's also pretty common to build things in the Java world with Maven, so there is already a task that does that for you, and it's pretty common to build and publish container images, which you can do with Kaniko. So it's pretty straightforward to configure these, and the main idea is that you define a pipeline template once and then use the same template for different services. After running this pipeline, I expect to have all the artifacts required to deploy this new service version into an environment, and for that I can then trigger the staging environment pipeline, which syncs the repository where the new version of my service is specified to a live Kubernetes cluster, so I can go and see my service up and running. That's pretty much what Tekton is: it allows you to declaratively define pipelines using Kubernetes resources and run them inside Kubernetes clusters, so you can automate building and deploying all the services your application needs.

When building things it's also important to recognize that we need some kind of package manager for our Kubernetes resources. The pipeline I showed you before was generating a container image and pushing it to a registry, but I also need to package and version all the Kubernetes manifests required to deploy those container images into my clusters, and Helm is one of the projects you can use for that. Helm uses the idea of a chart that is easy to version and distribute, so all teams can install the same thing. The usual recommendation, and I don't want to spend too much time on this, is to keep the Helm chart definition close to your source code; if you have a service, it's good to have its chart in the same repository so you can version everything together, and when you run the pipelines, the last step should be to create that chart and publish it to a chart repository. In Helm you can use composition and dependencies, but you need to be careful with that: you don't want to include too many dependencies in your charts, which would make the chart very difficult to configure. Also remember that when you're using Helm there is no need to install the Kubernetes manifests with helm install; you can use helm template to print out all the manifests and then use Kustomize to change some of those YAML files depending on where you want to apply them.
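Going back to that composition and dependencies point for a second: an "umbrella" chart for the conference application could declare the four services as dependencies, something like the sketch below. The chart names, versions, and repository URL are made up for this example.

```yaml
# Chart.yaml of a hypothetical umbrella chart for the conference platform
apiVersion: v2
name: conference-app
description: Installs all the services of the conference platform
version: 0.1.0
dependencies:
  - name: api-gateway
    version: 0.1.0
    repository: https://example.com/charts   # hypothetical chart repository
  - name: agenda-service
    version: 0.1.0
    repository: https://example.com/charts
  - name: proposals-service
    version: 0.1.0
    repository: https://example.com/charts
  - name: speakers-service
    version: 0.1.0
    repository: https://example.com/charts
```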
So let's take a super quick look at what you can do, which is installing the whole application by running a single command. I've created a chart with all the services required for my application, and I can run helm install with the name of the release I want to create and the name of the chart I want to install, which is published in my own Helm repository. Helm fetches the chart and installs it into the Kubernetes cluster configured in my terminal, so after a bit I can see that the chart was installed, which means I can now list all the services of my application. Here I can see the services starting up: I have four services, and those four services were created by the definitions included in this chart. The chart has a version, 0.1.0, so I know exactly what's in there and which versions of my services are in that specific chart version, and I can trace back and figure out what went wrong when things go bad. That's pretty much Helm; I totally recommend you check it out. I'm sharing it because I believe Helm is a good project, but there are alternatives as well, and as soon as you have some kind of package manager to deal with versions and easily distribute your applications, you are good to go.

Once you can build your services and your environments, the next step is application infrastructure, and for that I will be talking about Crossplane. In this example some of my services require databases: the proposals service requires a PostgreSQL database and the agenda service requires a Redis database to be provisioned and available for the service to work. You might also end up needing something like Kafka for sending messages or events, or an email server, or anything else that must be configured for your service to work. How do you usually do that? You can use Helm, as I mentioned before: go to Bitnami, for example, find the Redis, PostgreSQL, and Kafka charts, and install them into your Kubernetes cluster. But that becomes problematic quite soon, because you need to maintain those components inside your clusters, and you need DBAs and people trained in both Kubernetes and the databases to keep those components running over time. So unless you are doing development, you tend not to install Helm charts into Kubernetes clusters for application infrastructure. You can also use cloud-provider-specific tools: you can go to Google Cloud, for example, create a PostgreSQL instance and a Redis instance, and connect your services to those managed databases. That's okay, but the problem is that you need to learn the Google Cloud specifics and use their interfaces or their APIs to provision these, and if you want to provision lots of them you need to automate that somehow. You can use tools like Terraform for this, or you can use Crossplane, which brings it closer to our Kubernetes world.

Crossplane is basically multi-cloud application infrastructure made easy. How does it work? You install Crossplane into a Kubernetes cluster, then configure one or more providers, like Google Cloud, Azure, or AWS, and give those providers the right permissions so they can provision components in their infrastructure. Then you just create Kubernetes resources: for example, you can create a CloudSQLInstance resource to create a PostgreSQL database inside Google Cloud.
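Roughly, such a managed resource looks like the sketch below when you are using the Crossplane GCP provider; the exact API group, version, and field names depend on the provider release you have installed, so treat this as an approximation rather than a copy-paste recipe.

```yaml
apiVersion: database.gcp.crossplane.io/v1beta1   # group/version may differ per provider release
kind: CloudSQLInstance
metadata:
  name: conference-postgresql
spec:
  forProvider:
    databaseVersion: POSTGRES_11
    region: us-central1
    settings:
      tier: db-custom-1-3840
      dataDiskSizeGb: 20
  writeConnectionSecretToRef:        # Crossplane writes the URL/username/password here
    name: conference-postgresql-conn
    namespace: crossplane-system
```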
You create the object, and Crossplane is in charge of going and creating that resource for you. The cool thing about Crossplane is that it not only creates the resource; it also creates, for example, a Secret with the URL, username, and password that your services inside Kubernetes will need to connect to that specific instance. At the same time, it keeps monitoring and reconciling the definition you created as a Kubernetes object against the actual GCP, Azure, or AWS infrastructure, making sure that if somebody deletes that database, it's automatically recreated so your application keeps working.

Let's take a look at that. As I mentioned, I've installed Crossplane into my cluster. I can go and create a database here in Google Cloud, in Cloud SQL: I don't have any instances, but I can choose between MySQL, PostgreSQL, and SQL Server, and if I choose PostgreSQL I need to define a name for the database, a password, a version, a region, and some other parameters; if I click create, I get one of these instances. With Crossplane installed and the GCP provider configured, I can do this in a more Kubernetes-native way instead: I create a Kubernetes resource called CloudSQLInstance, which comes from the database package in the GCP provider, set some parameters for the database I want created, and just send that object to Kubernetes. It gets picked up by Crossplane and the GCP provider, which understands CloudSQLInstances, and if I go back to my list of instances I can see that my PostgreSQL database is being provisioned. When this is done, I also get a Secret inside the namespace containing all the information about how to connect to that specific instance: the username, password, and URL that I need to give to the services that want to connect to it.

That's pretty useful, but we can do better than that, so let's go back to the presentation. What I just showed you is that we transformed a form in Google Cloud into a type in Kubernetes, and Crossplane can provision infrastructure for us by just reading the objects we send to the Kubernetes APIs. But we can do much more, because Crossplane also comes with the concept of compositions, which are abstractions we can build for our own domain, based on the users we have. So I can create an object type, and notice that this is my own type, called PostgreSQLInstance, where I've already defined which version of PostgreSQL we want to use, and the only thing I'm asking the user to enter is the size of the database. I'm also setting the name of the secret that Crossplane will use to expose the URLs, usernames, and passwords for connecting to that specific instance. So in this case I provide a higher-level abstraction of how to provision a PostgreSQL instance that is not tied to a cloud provider, and I can plug in different configuration packages behind it for different providers. I don't know in advance where this PostgreSQLInstance is going to be provisioned; that depends on the provider I have installed, and the definition doesn't change depending on which cloud provider you're running on.
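A claim against such a composition might look roughly like this; the group, kind, and parameter names are whatever your CompositeResourceDefinition declares, so the ones below are just illustrative.

```yaml
apiVersion: db.example.org/v1alpha1   # group defined by my own CompositeResourceDefinition
kind: PostgreSQLInstance
metadata:
  name: conference-db
  namespace: default
spec:
  parameters:
    storageGB: 10                     # the only decision left to the user
  compositionSelector:
    matchLabels:
      provider: gcp                   # could equally select an aws or azure composition
  writeConnectionSecretToRef:
    name: conference-db-conn          # services read host/username/password from this secret
```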
Again, that gives you an abstraction that moves you away from the actual cloud provider you're using to run this infrastructure, while always providing the same interface: you define an object, and then you connect to it by looking into a secret. Nothing stops you from going one step higher and creating an abstraction, a composition, for what a conference is. In this case the interface just sets up the details of the infrastructure you need for that conference, but you can also specify other parameters to configure the services that compose your application. I showed you before how to install a Helm chart that contains all the services of the application; here I'm doing the same thing, but with Crossplane. Instead of manually installing a Helm chart, I define a high-level abstraction of what my conference instance is, which also includes the infrastructure I need to run my services, and I tell Crossplane to go create a new namespace, install a Helm chart, provision the infrastructure, and create the secrets so my services can connect to it. That's pretty cool, because I've managed to create one big package that lets me create new instances on demand just by creating Kubernetes objects.

Let's take a look at that. I've deleted the instance I created before, so I don't have any PostgreSQL instances, and we can switch back to the terminal to look at the conference YAML file I created. I have this definition that just sets the sizes for the Redis database and the PostgreSQL database, and what I expect is that as soon as I send this conference instance to Crossplane, it will run through the composition, which installs the application, provisions the infrastructure, connects everything together, and hands me that instance to access. So let's apply this conference YAML file; Crossplane just created it, and I can now list conferences, my conference instances, which are my own abstractions that I can package and distribute as well. You can see that it's not ready yet, because of course the infrastructure needs to be provisioned, and if I refresh here in Google Cloud I can see that one PostgreSQL instance is being created; if I check Redis, there is a Redis instance being provisioned as well. And if I get the releases from Helm (I have the Helm provider installed in Crossplane), it has created and deployed a release, which means I have a new namespace called conference with my instance in it. If I get the pods inside the namespace, I can see that my application pods are already up and running, which is pretty nice in terms of automation; of course the pods will connect to the databases when the databases are ready, and they will keep checking until they are fully functional. Again, I can check the status by listing the conference instances and wait for everything to be ready so I can give access to my users. In this way we have abstracted how to create conference instances, and we are now at a point where, if we get a call that we have a new customer, the only thing I need to do is create that conference object.
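So the whole request boils down to an object roughly like the one below. I'm sketching it from memory, so take the group, kind, and parameter names as hypothetical; the actual composition is in the from-monolith-to-k8s repository.

```yaml
apiVersion: conferences.example.org/v1alpha1   # hypothetical group defined by the composition
kind: Conference
metadata:
  name: conference-argentina
spec:
  parameters:
    postgresSizeGB: 10    # size of the PostgreSQL instance to provision
    redisSizeGB: 1        # size of the Redis instance to provision
```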
At the end of the day I end up with a namespace plus all the infrastructure I need to run my services, already wired up and provisioned in the right cloud provider. This demo can be extended to create the infrastructure in AWS or Azure, for example; not much change is required, only the provider-specific configuration that defines which resources inside each cloud provider need to be created, which just means implementing these compositions against different resources. So let's go back to the presentation.

Okay, let's talk about making your developers' lives easier with Knative. Knative is a very difficult project to explain, and I think Kelsey Hightower put it the right way: if Kubernetes is the electrical grid, then Knative is the light switch. That's perfectly aligned with the idea of a set of abstractions and tools that make your life easier when you are developing cloud-native applications. Knative comes in two flavors, two different components that you can use independently: Knative Serving, which provides advanced traffic management and autoscalers, and Knative Eventing, which brings the concepts of producers and consumers of events to Kubernetes.

Let's take a look at Knative Serving. Knative Serving allows you to create Knative Services: instead of creating Kubernetes Services, Ingresses, and Deployments, you just create a Knative Service, and it creates all those dependencies for you. What you can do, for example, and what the example here is showing, is header-based routing. You have the main revision of your application, the version you want to expose to all your users, and your users are accessing that version, but you want to introduce another version for developers to debug certain scenarios and issues that can appear. In a Knative Service you can define traffic rules; in this case we are saying that 100% of the traffic goes to version one, but if we include a specific HTTP header in the request, we are routed to the second version, which is the debug version.

Let me show you that in action with a simple example. I have installed Knative Serving in my cluster, and I've installed the same application I showed you before, but now using Knative Services instead of regular Kubernetes Services, Deployments, and Ingresses. If I list my Knative Services, the first thing you will notice is that you get a URL to access the application straight away, without the need for an Ingress, so I can just copy that into my browser and access the application I showed you before. As I mentioned, one of the features Knative lets you implement quite easily is this header-based routing. If I list all the pods, you can see that I have two revisions of my application: one is the normal version that the users are accessing, and the other is the debug version that is already running for developers to troubleshoot the application. In order to route traffic to that debug version, I can use something like ModHeader, a Chrome extension that lets me set a specific header on the request; in this case I'm setting it to debug, which is the name of the tag I'm using in the traffic rules.
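The traffic block of that Knative Service looks roughly like the sketch below; the service and image names are placeholders, and if I remember correctly the header trick requires enabling the tag-header-based-routing flag in the Knative networking configuration, after which requests carrying a Knative-Serving-Tag header with the tag name are routed to the tagged revision.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: frontend                        # placeholder service name
spec:
  template:
    spec:
      containers:
        - image: docker.io/example/frontend:debug   # latest revision: the debug build
  traffic:
    - revisionName: frontend-00001      # stable revision serving all regular users
      percent: 100
    - latestRevision: true              # debug revision gets no regular traffic...
      percent: 0
      tag: debug                        # ...but is reachable via its tag (and the tag header)
```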
If I refresh the application, I can access this debug-enabled version, which is running in the same cluster. This allows me to run different configurations of the same service at the same time and then create traffic rules that direct traffic to different places based on different policies. Here you can also see where the services are running, the node in the Kubernetes cluster where they run, so you can troubleshoot things, and all the services are green, which means the application is working correctly. Hopefully that showed you a simple thing that is pretty hard to do with plain Kubernetes resources alone; you would need to rely on something else to implement this kind of functionality, so I totally recommend you check out Knative Serving for that.

Knative Eventing is a little bit different: there we are talking about producers and consumers of events. What I did in the application is enable my services to emit events to a Knative broker, which is a kind of router, and that broker can have different implementations: it can be Kafka, it can be Google Pub/Sub, depending on where you're running and which implementation you prefer. Then you define Knative triggers, which are subscriptions for your events. This is how a Knative trigger looks: a simple Kubernetes resource where you define which broker to look for events in (in this case my broker is called default), and then I'm saying that I want to send all the events that happen in the broker to this URL, which is another application that consumes those events. You can also add filters in the specification if you're only interested in certain kinds of events. Let's take a look at how that works. I've already configured the application to emit events; I have a broker, and I've defined a trigger that sends the events to an external application, in this case Sockeye, which is just an event viewer.
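The trigger itself is a small resource along these lines; the commented-out filter shows how you would narrow it down to a specific CloudEvent type (the type name here is made up).

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: sockeye-trigger
spec:
  broker: default                # the broker to subscribe to
  # filter:
  #   attributes:
  #     type: conference.proposal.received   # hypothetical CloudEvent type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: sockeye              # forward every matching event to Sockeye
```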
Okay, so if I go to the terminal and list all the brokers, I can see that I have one broker, called default, and it provides a URL, which means my services inside the application can send a POST request with a CloudEvent to the broker, and the broker will receive that event and forward it to all the subscribers, basically all the triggers. You can also see that I have this new application called Sockeye at this URL, and that's where I'm sending all the events happening inside the application. If I go to Sockeye, it doesn't show any CloudEvents yet, and in order to generate CloudEvents I need to interact with the application, because different services will send different events. For example, if I submit a new proposal to the conference, with my name, my email, and a test description, one service receives that request and generates an event: you can see the proposal-received event from the proposals service, and it carries the information I just submitted in the form. I can keep interacting with the application, for example going to the back office and accepting a proposal, and I can see that more events are generated. Imagine how you can use this: emit events from your applications, connect them to a broker, and then define the consumers of those events in a declarative way in Kubernetes. You do this with configuration, per cluster, which gives you the flexibility of wiring things up depending on what you have available and what kind of integrations you want to implement. Knative Eventing provides these concepts of producers and consumers of events, which is pretty important for integrations and for building event-driven architectures, so I totally recommend you check that out as well. Of course, all these examples are in the from-monolith-to-k8s repository, under the Knative section, with step-by-step guides on how to run them in your own clusters if you're interested, so check that out.

So after all the things I've shown you, what do I want for Christmas? First of all, I want a place where I can just build and package my software. That pretty much means a Kubernetes cluster dedicated to those tasks, with something like Tekton installed and all the infrastructure needed to build and publish artifacts already wired up for me; I don't want to spend time configuring a container registry or a chart repository, I just want that configured for me already. I also want service pipeline templates and environment pipeline templates, so that every time I need to add a new service to my application I don't have to think about creating a new pipeline; I can start from a template that does most of the work for me. The same goes for environments: if I want a new testing environment, I don't want to think about how to build an environment pipeline for it; I need a template I can just use to get things going quickly. If you haven't checked out Jenkins X, that's pretty much what it does: it creates a Kubernetes cluster, installs Tekton in it, on different cloud providers, configures and wires up all the infrastructure for you, and provides service pipeline templates and environment pipeline templates that are pretty handy to get you started. You just install it and start adding services and creating environments, so you should check it out.

On the other hand, I want a place where I can create my platform abstractions, and that basically means another Kubernetes cluster with Crossplane installed and all the cloud providers I'm planning to support already configured, with the right credentials in place, so I can provision components into those providers. On the right-hand side here I'm showing the compositions and configurations that my infrastructure team can create based on the cloud providers we plan to support, and in yellow my customer instance abstraction, which represents what my customers will be able to parameterize and send to Crossplane, so Crossplane can provision all the components required for the specific instance they want to start using. If you haven't checked out Upbound Cloud, the company behind Crossplane, they provide a managed service for Crossplane.
That is basically their own Kubernetes cluster with Crossplane installed, where you can go and configure your credentials for different cloud providers so it can provision infrastructure for you; you should check that out, because it's pretty cool.

Finally, what I want for my users is this self-service approach: a portal where they can configure the instances they want to create and provision, which gets sent to Crossplane, so Crossplane can go and create a dedicated Kubernetes cluster for them with all the projects and tools required for the application to run. Something I haven't shown you before is Crossplane creating a dedicated cluster for each customer; earlier I was only creating a namespace. The other missing piece is making sure that when we create a new cluster, we install the right tools in it, like Knative, or projects like Velero, which are pretty useful to have in every one of these clusters. At the end of the day it should be the entire journey: the customer defines what they want for their instance, Crossplane provisions a new cluster, installs the right tools inside that cluster for running the application (all tools whose purpose is to make your developers' lives easier, so that while developing the application they can rely on things like Knative), and then installs the application that our customers' customers are going to access. That should be the main goal here. As you can see, this requires a bunch of Kubernetes clusters, clusters that will be created dynamically, but the good thing is that it relies one hundred percent on the Kubernetes APIs; it's multi-cloud enabled, so you can run all these tools across multiple cloud providers without restrictions; it's extensible by design, since all these projects provide extension hooks where you can add your own abstractions for your users; and it's open source and community backed, which is pretty important to guarantee that these projects keep moving forward and making things easier for us.

That's pretty much it. Thank you very much for joining my session. I recommend you check out the Continuous Delivery for Kubernetes book I'm writing and join the Early Access program. If you have any other questions, please feel free to reach out on Twitter at @salaboy, that's my Twitter handle. Thank you very much, and see you next time.