So, good morning everybody. My name is Sebastien Goasguen. Is it too loud? No? Okay. This is the last talk before lunch, I guess, and only the second talk of the serverless track. So thanks to all the members of the serverless working group; I think they've done a great job, and thanks for setting this up so we can talk about serverless. I work at Bitnami. I'm the Kubernetes lead there for everything we do related to Kubernetes. We work a lot upstream and we contribute quite a bit to the open source ecosystem. I was actually at the first KubeCon in San Francisco, and it was a little bit smaller than this, in a hotel, maybe 400 people. So it's actually very overwhelming to see 4,000 people in the keynote these last few days. But I want to talk about building serverless application pipelines. Okay. And I'm not going to go deep into introducing what I've done before, but stop by the booth at 4:30. I actually co-wrote the Kubernetes Cookbook for O'Reilly. We have 25 unedited galley copies, so stop by the booth and get your copy of the Kubernetes Cookbook. Okay. And that will be the free swag. So, building application pipelines. Everybody has their own definition of serverless and their own reason for wanting to use serverless. For me, the main reason is really about building apps, building distributed apps. So that's what I want to talk about today, and also explain how we use Kubeless to build those application pipelines. So, what is serverless? Well, the working group has done an amazing job; you should definitely go read the white paper. The CNCF serverless working group has defined serverless computing as the concept of building and running apps that do not require server management, basically.
And it's really a fine-grained way of deploying business logic, then monitoring that little piece of business logic and paying for just the calls of that business logic. So it's a very nice way to get really fine-grained in writing your app, and then in paying for it and deploying it. Of course, there are servers. That's an old tweet, actually, from the serverless conference in Austin last year: "There's no serverless. It's just someone else's fully managed execution environment that I only pay a fraction of a cent for whenever my function is run." Now, I know some of the AWS folks were quite vocal about what is serverless, what is not serverless, and what is FaaS. And their point was: if you're in a public cloud, you're a customer of the public cloud, and you're deploying functions, you don't have to manage those servers, so it's serverless. If you're using something like Kubeless or Fission, one of the solutions that you can deploy on-prem, you still have to manage those servers, so it's not serverless; it's FaaS, okay? But personally, I think if we spend a lot of time on the definition, we lose time we could spend figuring out this new paradigm for actually building apps. So let's figure out how we build apps. I totally stole these little diagrams from Lambda, because Lambda is arguably, these days, again, the leader in serverless out there in the public cloud. The way they explain Lambda is basically: you have your little piece of business logic, your little bit of code, you upload it through the CLI, and then that bit of code is triggered by something. That's a major point, I think, when we are talking about those application pipelines: we have those event sources that were mentioned in the previous talk.
So we have those event sources somewhere in our system, and it's those event sources that are going to trigger those methods, right, those functions. So in Lambda you have those little functions, you bundle them in a zip file, you upload them to AWS, you define what triggers the calls, and then basically they are run on demand and you pay for the number of times you call them. I'll just bounce back on a question about whether serverless is actually autoscaling, because there is this time to start the app and so on. At the beginning of serverless, I think there were lots of people asking, well, what's your cold start? What's your warm start? And then, about six months ago, we actually started seeing users of Lambda trying to optimize this time, and they were faking calls to functions so that their Lambdas would stay up. They were doing pre-warm-up, okay, keeping them warm. So you're kind of defeating the purpose of saying my function is going to go up and down really quickly and I'm only paying for what I use. So we have to take all of this with a grain of salt when we ask, are you shutting down the function, and how fast are you provisioning it? Because depending on your app and the traffic of your events, you may actually have to keep them warm... keep them warm. Yeah, French accent. So the AWS CLI looks like this: lambda create-function, a region, a function name, a zip file, a runtime, which is the language, and so on. Google Cloud Functions is very similar. Azure I actually haven't tried, but Microsoft has Azure Functions; OpenWhisk also. So all those solutions have a CLI, of course, and I'll get back to that, because Kelsey this morning totally inspired me after two minutes. He said something like, we don't want to use the CLI, and I was like, ah, he's right.
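For reference, that long Lambda invocation looks roughly like this (the function name, role ARN, and file are placeholders, so treat this as an illustration of the shape of the command rather than something to copy verbatim):

```shell
aws lambda create-function \
  --region us-east-1 \
  --function-name hello \
  --runtime python2.7 \
  --handler hello.handler \
  --role arn:aws:iam::123456789012:role/lambda-exec \
  --zip-file fileb://hello.zip
```

You can see why typing that live on stage is not much fun.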
We don't want to use the CLI. I was preparing demos for this and I was like, this CLI is really long; there is no way I'm going to demo this, right? Even with Kubeless, if I have to type all of this, it's going to be super boring. But that's basically AWS. So if you're trying to build those apps, what serverless solutions do you have out there? You have Lambda, you have Google Cloud Functions, Azure Functions, IBM OpenWhisk, I should say Apache OpenWhisk, actually. You have Fn from Oracle, OpenFaaS, and then at Bitnami we're pushing Kubeless, which is fully open source. So I'm going to talk about Kubeless and show it to you. So who is using Kubeless? I'm super excited today to say that SAP is evaluating and planning to contribute to Kubeless. That's in bold, and that's exactly the sentence the lawyers allowed us to put on the slide. I hope that by the next KubeCon we'll have some more. And then I'm also super excited because BlackRock is here today and they are users of Kubeless. That's also what was allowed by the corporate communications team, but we have Arjun Rao here, and he's going to join me at the end of the talk to talk a little bit about how they use Kubeless at BlackRock in their Kubernetes cluster. So I'm pretty stoked about what's happening with those real enterprise users. So, we saw Lambda, and Kubeless started last year after KubeCon, where people were talking a lot about serverless, and I was like, well, we have Kubernetes. Kubernetes is the perfect platform to build systems on top of, okay? It has a super-rich API, it's extensible. If we are going serverless anyway, we should actually be able to build a serverless solution on top of Kubernetes, okay? So that's what Kubeless is. It's a really Kubernetes-native serverless solution. What I mean by Kubernetes-native is that we actually extend Kubernetes.
It's not a solution that we have containerized and that we deploy on top of Kubernetes; it's an extension of Kubernetes. And Chen Goldberg mentioned it as an extension earlier in the keynote. Who is familiar with custom resource definitions? There you go. So Kubeless uses a custom resource definition plus a controller. It's actually pretty simple when you think about it. But we use the Kubernetes API server; we don't have our own API server, we reuse the Kubernetes one. So when you have this CLI where you're saying, hey, deploy a function, you're actually talking to the Kubernetes API server directly. We reuse Kubernetes API objects. So everything we do with a function: it's a Deployment, we're using Services, we're using ConfigMaps, we reuse Ingress to basically have some sort of API gateway to the functions. For the famous autoscaling of the functions, we use the Kubernetes horizontal pod autoscaler automatically. For monitoring of the functions, we use Prometheus, a CNCF project. And then there was also a good question earlier; I didn't want to cut the conversation. In the future, the next six months, we're going to use Istio, or Envoy directly, to basically provide traffic encryption between functions. It's a natural addition to Kubeless so that we have this very nice traffic shaping and routing between functions, plus encryption and also distributed tracing. So all of those CNCF projects marry very well, especially with Kubeless, which is an extension of Kubernetes. What does it mean to extend Kubernetes? It means that we're using this object called a custom resource definition, a CRD. It used to be called a third-party resource, but since 1.8 it's a CRD. With a CRD, you can actually define a new custom object, and the Kubernetes API server is going to create a new REST endpoint for it. So it's actually magic. The first time I heard about it, I was like, what is this? It was a bit weird, but you try it.
It takes really two minutes to try. You create a new custom resource definition. Let's say you want Kubernetes to manage bananas: you create a custom resource definition for bananas, and then automatically your kubectl knows bananas. So you do kubectl get banana, and it returns things. So CRDs are super powerful. That's what we've done with Kubeless. We do a custom resource definition, we've defined a Function object, and now, suddenly, Kubernetes is aware of functions. kubectl get functions: you get functions back. Of course, defining a CRD in itself in Kubernetes is a five-minute exercise. You have to think about your spec, but it's actually quite easy. The hard part is writing the controller. You need to write some code that's going to watch this new REST endpoint, and when there are updates on this endpoint, you actually do things. So we create the function, and then we have the controller that says, oh, there's a new function being created, now what do I do? Well, what do you do? You create a Deployment, which will then create pods, and those pods will get the function injected into them. That's where there is a little bit of magic: injecting the function dynamically inside a pod. How do we do it? We use a ConfigMap. To be honest, it's a little bit of a hack. I mean, you have the little script, your business logic; the controller is going to stick that business logic in a ConfigMap, and then you just do a volume mount. So it's a little bit of a hack. We are working on having a much cleaner build pipeline so that we can also handle dependencies of the functions properly. Right now we handle dependencies, but in a slightly suboptimal way. Now, if you're trying to write a controller, there is now lots of good documentation out there, some of it from Bitnami. But pay attention to the Google project kube-metacontroller.
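To make the banana example concrete, a CRD manifest for it would look roughly like this (the group and names here are made up for illustration, of course, and the API version shown is the v1beta1 one from that era of Kubernetes):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # Name must be <plural>.<group>
  name: bananas.fruits.example.com
spec:
  group: fruits.example.com
  version: v1
  scope: Namespaced
  names:
    plural: bananas
    singular: banana
    kind: Banana
```

Once you `kubectl create -f` that, `kubectl get bananas` works, and the API server serves a REST endpoint for Banana objects with no extra code.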
It's still totally beta, but it's supposed to be a framework to help you write controllers. So that's just a little shout-out: keep an eye on kube-metacontroller, and definitely on the concept of extending Kubernetes with CRDs and writing controllers. How do we do monitoring? That's where I like Prometheus, because Prometheus has a client library, so you can instrument all your business code, all your code, directly with the Prometheus client. We're not monitoring the function from the outside; we're actually monitoring the function itself. So if you peek at the code of our runtimes, you'll see that we use the Prometheus client in Python, in Ruby, or whatever, to measure directly at the pod level the number of function calls, the number of failures, the latency of the function calls, and so on. Once we do this, Prometheus is able to scrape all of those metrics and return everything back. So we built some cute dashboards, and you end up with a dashboard like this. That's just a basic example: you get the rate of function calls, you get failures (of course, zero failures), and then the duration of the execution. But the point is, dashboards are cute and the managers love dashboards, but the real reason we did this instrumentation of the runtime is to be able to give you autoscaling, because that's key for serverless. We need a way to scale those functions automatically. And of course, Kubernetes has autoscaling built in. Again, Kubeless is Kubernetes native, so we reuse everything. So how do you set up autoscaling in Kubernetes? By default, it's going to autoscale based on CPU, but you can do autoscaling based on custom metrics. So since we have instrumented our runtime, our controller can now create an HPA automatically, which uses the number of requests per second, to scale based on QPS. So you get autoscaling out of it.
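The real runtimes use the official Prometheus client libraries, but the idea of what gets measured can be sketched in plain Python. This is an illustration, not the Kubeless code; the `metrics` dict stands in for what would really be Prometheus counters and histograms exposed on a `/metrics` endpoint:

```python
import time

# Toy stand-ins for Prometheus metrics; a real runtime would use
# prometheus_client Counter/Histogram objects scraped by Prometheus.
metrics = {"calls": 0, "failures": 0, "durations": []}

def instrumented(func):
    """Wrap a handler to record call count, failure count, and latency."""
    def wrapper(*args, **kwargs):
        metrics["calls"] += 1
        start = time.time()
        try:
            return func(*args, **kwargs)
        except Exception:
            metrics["failures"] += 1
            raise
        finally:
            # Record the call duration whether it succeeded or failed.
            metrics["durations"].append(time.time() - start)
    return wrapper

@instrumented
def handler(event):
    # The business logic; here it just echoes the event.
    return event

handler({"hello": "world"})
print(metrics["calls"])  # 1
```

With call rate, failures, and latency recorded per pod, the controller has exactly the numbers it needs to drive a QPS-based HPA.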
It's a little bit tricky, because configuring Kubernetes to use custom metrics is, right now, a little bit of work. And then the last bit that we wanted to show is that serverless is not a fad. If you go to a serverless conference, you will see that Lambda and Google Cloud Functions and so on actually have lots of users, already companies in production with serverless. Interestingly, most of those users use this framework called the serverless framework. It's very confusing: Serverless.com is a startup that created the serverless framework. It's written in Node.js, and their Twitter handle is @goserverless. But they've done a really amazing job basically creating a spec common to all the... well, it's not exactly a spec, it's an interface common to all the cloud function providers. So most of the users doing Lambda and Cloud Functions and so on use this framework to deploy their functions. So with Kubeless, we've created the fifth provider of the serverless framework. Now, I haven't really talked about application pipelines yet. Okay, I've got time. But I felt it was important to explain the system, how Kubeless is built, so that you understand it. And the strength of being Kubernetes native is that, since you are already operating your Kubernetes clusters, you install Kubeless, and if something goes wrong or you need to manage it, those are just pods. They're Deployments, they're Services, Ingress rules, HPAs; you know how to take care of those. But now let's talk about applications. For me, the key with serverless was to build, to compose, an application made of multiple services. It's almost like a mashup, you know, back in, when was that, '98 or whatever, when there were mashups. But here we are trying to build a mashup of a database service, an archive service, and an event streaming service.
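For a sense of what that common interface looks like, a serverless framework service file targeting the Kubeless provider is roughly this (sketched from memory of the `serverless-kubeless` plugin, so double-check names against its README):

```yaml
# serverless.yml -- deploy a function to Kubeless via the serverless framework
service: hello
provider:
  name: kubeless
  runtime: python2.7
plugins:
  - serverless-kubeless
functions:
  hello:
    handler: handler.hello
```

A user who already deploys to Lambda with `serverless deploy` can point the same workflow at a Kubernetes cluster just by swapping the provider block.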
And we are trying to compose all those services together to build a bigger app, okay? Scalable, distributed, and so on. So how can you build this app? If you try to go from scratch, it's going to be super complicated to make it happen. We need a way to basically say, hey, provision this database, provision this streaming service, this archiving, this object store, and so on, and then write the business logic to tie everything together, okay? So in some ways, serverless is about stitching services together. So again, we go back to Lambda, and the examples they have are fairly straightforward, but they have quite a few of them. And the philosophy we have adopted with Kubeless is to actually try to enable all those pipelines that Lambda is talking about. So here's a fairly simple one: you have pictures, you put them in S3 buckets, and as soon as the image is in the bucket, it triggers a function that generates a thumbnail, a thumbnail creator, okay? So it's pretty easy; that type of app. Another type of app: you have a Kinesis stream, so you have a Twitter stream or events coming from your enterprise, and then you're calling Lambda when there are events in those streams and sticking everything into a database, okay? Those are fairly simple, but if you were trying to build them from scratch, it could actually be quite daunting. So Kubeless allows you to do this. And the vision was: in Kubernetes, if we need to deploy something that looks like Kinesis, or something that looks like DynamoDB, we can probably reuse an application that's been created by somebody else, okay? And right now, there's something that's quite useful for this. It may evolve, but right now it's quite useful: we have Helm charts, okay?
So if you want a database, or if you want an Elasticsearch cluster, or if you want a Kafka cluster, you can actually say helm install elasticsearch, and you'll get your Elasticsearch cluster. helm install mysql, you'll get MySQL, okay? So imagine this app, and you have services, services, services, and by service I mean a big app. You helm install those bits; you could deploy them a different way, but I'm just using Helm as an example. So you install all of those separately with Helm, and now you write your business logic with Kubeless, okay? So Kinesis, DynamoDB: helm install, and then the lambda in the middle: Kubeless. Those are actually quite hard to demo, I have to be honest, because they get kind of involved, doing all the deployment in five minutes and so on. So I invite you to go to our function store, which is on GitHub, github.com/kubeless/functions, and you'll see some interesting pipelines there. One that I like is an OCR pipeline, Optical Character Recognition. You drop an image in a Minio object store. The image is in your Minio object store; Minio can send events to a Kafka message broker, okay? You receive that event, send the image to Apache Tika, which does the OCR, and once the OCR is done, you store the result in MongoDB, okay? So three services: the Minio object store, MongoDB, and Apache Tika. Those three services you can deploy with Helm or another way; that's what we show in this repo. And then you write a function to do the OCR pipeline, okay? It actually works very well, and it's quite impressive. The other example you can find there, and that I really love, is being able to stream database events. So let's say you have a MySQL database and you do inserts, deletes, whatever, and you want to get a message every time there is an insert or a delete or an update, right?
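Standing up the backing services for a pipeline like that is mostly a matter of a few chart installs. Roughly (the chart names below reflect the stable/incubator repos of the time and are given as an illustration, so check what's current before copying):

```shell
# Backing services for the OCR pipeline, each from a community Helm chart.
helm install stable/minio        # object store that can emit bucket events
helm install stable/mongodb      # document store for the OCR results
helm install incubator/kafka     # message broker between Minio and the function
```

The only piece you then write yourself is the Kubeless function that glues the three together.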
So you can use a project from Red Hat called Debezium; look at Debezium. Basically, it installs MySQL connected to Kafka, okay? So when you do an insert, it will send a message to a Kafka topic, and then that message on the Kafka topic can trigger your function, okay? So we have an example where you do an insert, a delete, whatever, and it streams that to Slack, okay? So, I mean, probably not mega useful to stream that to Slack, but you get the idea of the stream. And the reason why, from the start, we went with... so, Kubeless underneath also has a little Kafka broker for development. So when you install Kubeless, you get a working Kafka setup. The reason we went with Kafka is that Kafka has a set of plugins, Kafka Connect, so you can basically stream from a database or stream from other types of services, which is kind of handy. And also because the enterprise is using Kafka a lot. Okay, so those are a little bit complicated to demo, but I thought I couldn't do this talk without giving you a demo. So you guys want a demo? Yeah, okay, cool. So a shout-out to these guys: Katacoda. You know Katacoda? Yeah? No? You don't know Katacoda? Okay, you have to check them out. With Katacoda, you get lots of online scenarios to learn things, so it's kind of handy. I'm going to try to make that bigger. So we just worked on... can you see that in the back? I don't think I can make it bigger. Okay. So it's all online, you can get to it: katacoda.com slash kubeless slash scenarios slash getting-started. So here what's happening is that there is a VM somewhere in a cloud being created, or that has already been created; it runs this script automatically when you start the scenario, and it's just setting up Kubernetes. There you go. So we have a Kubernetes cluster, and it's in my browser, but I basically have a shell.
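The Slack end of that Debezium example is just a function that turns the database-change event into a chat message and posts it to an incoming webhook. Here's a hedged sketch of that shape, not the actual example code; the event fields and the webhook handling are simplified assumptions:

```python
import json
import urllib.request

def format_event(event):
    """Turn a (simplified) Debezium-style change event into a Slack message."""
    op = {"c": "insert", "u": "update", "d": "delete"}.get(event.get("op"), "change")
    table = event.get("table", "unknown")
    return "Database %s on table %s: %s" % (op, table, json.dumps(event.get("after", {})))

def post_to_slack(webhook_url, text):
    """POST the message to a Slack incoming webhook (network call, not run here)."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(webhook_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

print(format_event({"op": "c", "table": "users", "after": {"id": 1}}))
```

In the real pipeline, the Kafka message carrying the change event is what invokes the function; the function body only has to do the formatting and the POST.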
kubectl get nodes: I have one node, basically. Can I clear the screen? Here you go. So to install Kubeless, you create a kubeless namespace and then you just kubectl create this stuff here. What's happening here is that we're just getting a manifest from GitHub. When we release Kubeless, we create a manifest that contains everything Kubeless needs to run, and then we pipe that through kubectl create. It's actually a very nice way to deploy stuff. And then if I look at my pods after having done this kubectl create, you should see that I have a controller here that's pending, and you see my little Kafka setup with ZooKeeper. This is not meant for production. This is just so that, by default, when you install Kubeless you have an event message broker, so that if you deploy services that can send events to Kafka, you can start building those pipelines. So Minio, the object store, for example, you can configure it to push notifications to Kafka. So you see that now my controller is running, so I click continue. And here is the CLI that I didn't want to type; that's why I'm showing you the Katacoda stuff. So: kubeless function deploy, dash dash runtime; we're deploying a Python function, a handler, from a file. That's where you actually have your file. So if I more the file, toy.py, I have the really silly function; it's just doing an echo of the event. And this CLI, we started by mimicking the Google Cloud Functions CLI one hundred percent, and then we actually made a mistake: we started mixing it with a little bit of AWS, so we need to clean that up at some point. So here, let me do this. I deploy; it's deployed. Now I do kubeless function ls and it's telling me... okay, so I need to kubeless function ls. There you go.
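That silly echo function is only a few lines. Note that the exact handler signature has changed across Kubeless versions; this sketch follows the newer (event, context) convention as an illustration:

```python
# toy.py -- a minimal Kubeless-style echo function.
# The handler signature here is the newer (event, context) convention;
# older Kubeless runtimes used a single context argument instead.

def handler(event, context):
    # Echo back whatever payload triggered the function.
    return event["data"]

# Local smoke test; in the cluster the runtime calls handler() for us.
print(handler({"data": {"hello": "world"}}, None))
```

Deploying it is one CLI call, along the lines of `kubeless function deploy toy --runtime python2.7 --handler toy.handler --from-file toy.py` (flag names as I remember them from that release).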
So it's telling me that the function is deployed, and what has happened underneath is that the controller has detected the function and then created a Deployment, created a Service, created the ConfigMap, injected the script inside it. If we want to look: kubectl get customresourcedefinitions. Yeah, so you see functions, and you could actually do everything through kubectl, because kubectl is now aware of functions and Kubernetes now has a Function object, so you could write a manifest for your function. So if you don't like the Kubeless CLI, okay, no offense taken, just use kubectl. Okay, so let's look at it. Now it's running, great. And now we can call the function. So great, hello world is returned. So: kubeless function call toy, you pass a little JSON, and it returns hello world, okay? Great, that's the very basic. And that's fun, right? It's triggered by HTTP, it's cool, but actually the most interesting part is to trigger this function through events, through Kafka, or potentially things like NATS or RabbitMQ and things like this. You can get the logs of the function, kubeless function logs; that's basically kubectl logs. You can describe the function; you get the function object. You can also update on the fly, of course. So I'm going to skip the update. And you could do Node.js, of course; we support Ruby, and .NET Core, which was contributed by Microsoft. So you could deploy Ruby if you wanted to. So I'm going... oh no. I'm going to skip ahead, because it looks like you guys are following and you get the idea. One point I want to make is that we can also deploy custom runtimes. Okay, that means that we have our built-in runtimes, but if you bring your own runtime, with your function already packaged in your Docker image, we can run that runtime. And it's an interesting discussion to have.
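If you do go the kubectl route, a Function manifest for the toy function looks roughly like this (the exact schema has evolved across Kubeless releases, so treat the field names as indicative):

```yaml
apiVersion: k8s.io/v1
kind: Function
metadata:
  name: toy
spec:
  runtime: python2.7
  handler: toy.handler
  type: HTTP          # or PubSub, with a topic, for event-triggered functions
  function: |
    def handler(context):
        return context.json
```

`kubectl apply -f toy.yaml` then does exactly what the Kubeless CLI does, because both just talk to the same Function endpoint on the API server.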
We may not have time here, but basically, as long as you have a Docker container that exposes a function over HTTP on port 8080, then you have a function. And that's more or less what OpenFaaS does. So here, if you look at this line, kubeless function deploy: I'm going to actually launch an OpenFaaS function through Kubeless. I'm just specifying one of their functions, the markdown renderer, right? So I deploy this. If I look at my pods, my pod is going to... okay, so now the function is running. And now I call that OpenFaaS function through my Kubeless CLI. And it's just, like they call it, a markdown renderer. It doesn't matter really what the function does for this purpose, but the point is that any OpenFaaS function runs in this, because an OpenFaaS function is really just a container. Okay? One demo that I wanted to show you is actually about those events, right? Because that's the key for building those pipelines. And I was talking with one of the AWS heroes, Ben Kehoe from iRobot. I was like, wow, we really need to integrate with SQS, the AWS queuing system. So here I'm on my AWS console. I've created a FIFO queue on SQS, okay? And I'm going to publish events into that queue, okay? And I'm going to go to my terminal and show you the manifest of a function. So here is the manifest of a function. You see my function, my business logic, is actually here, in the middle, in the function field. But this is actually a Kubernetes manifest. This is a Function object: apiVersion k8s.io/v1, kind Function, metadata. I have some dependencies that are actually specified, so the pod is actually going to pre-install some dependencies. And then the code actually talks to the Kubernetes API to get some secrets. So here I'm going to get some secrets, some Twitter keys. So I'm going to basically try to publish a message in my AWS SQS console.
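That container contract, serve your function over HTTP on port 8080, is simple enough to sketch with nothing but the standard library. This is an illustration of the contract, not Kubeless or OpenFaaS code; to keep it self-testing it binds a free local port and makes one call instead of serving on 8080 forever:

```python
import json
import threading
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

def my_function(payload):
    # The "business logic": echo the payload back with a greeting.
    return {"message": "hello", "received": payload}

class FunctionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the request body, run the function, return JSON.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(my_function(payload)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo output quiet

# In a real image you would serve on port 8080 forever; here we bind an
# ephemeral port and make one call just to show the request/response shape.
server = HTTPServer(("127.0.0.1", 0), FunctionHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("POST", "/", body=json.dumps({"name": "kubecon"}),
             headers={"Content-Type": "application/json"})
resp = json.loads(conn.getresponse().read())
server.shutdown()
print(resp)
```

Package anything with this shape in a Docker image listening on 8080 and, by this definition, you have a function that either platform can run.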
And then this function is going to send that message to my Twitter stream, okay? Make sense? So to be able to publish to my Twitter stream, I need this function to be able to load my Twitter secrets. And then, obviously, that's the one I shouldn't have shown you, okay? Test. There you go. There is something really hackish about this, because I did this last week: to be able to talk to SQS, you need to have your AWS keys, which I put in as environment variables. This is bad, okay? So I'll have to... well, okay, I can just revoke the keys after this, that's fine. So this is bad, but it brings up an interesting point about authentication of the function, which was asked about before. What we want here is to be able to use kube2iam. We want to use an IAM role so that you have proper authentication. And we need authentication in different ways: we need to authenticate who calls the function, and we need to control what the function itself is allowed to do, okay? But Kubernetes can do this on Google and on AWS: we can use the instance metadata to get the IAM role and so on. So, long story. Let's do kubectl. I'm going to create the function now, not through the Kubeless CLI but through kubectl, because I've just written a function manifest, right? Okay. kubectl get pods. So you see that it's actually running an init container; that's something we need to change, to reduce the startup time. And then I'm going to go here, and from the console I can do send message: hey, KubeCon rocks. And then we need to put in a message deduplication ID, okay? So the message is ready to be sent. Is the pod running? The pod is running. So that pod should be receiving my message from my SQS queue, because it's listening to my SQS queue. Our runtime automatically does this: it can connect to SQS and get messages. It's basically an SQS trigger.
Okay, so we go here, and then somebody check my Twitter timeline. Okay, it should be "KubeCon rocks"; that's what I sent. Did it go through? "KubeCon rocks" sent. Woo! So that's just to give you an idea of what we can do, right? We can have all those event sources that trigger functions and then compose services. So it's a fun little exercise, but we can build much more complicated pipelines. I'm going to skip the... oh, what's going on? There you go. I'm going to skip the last... okay, I did the SQS demo. I'm going to skip this one, but I mean, I could show it to you, okay? The idea here is that Kelsey said this morning, we don't want to do things through a CLI, and I agree with him. We want to automate things. We want to do things directly in version control. And I came out and I was like, yes, that's totally right, but in Kubeless I can write a Kubernetes manifest. How do I keep updating that manifest when I change the code of my function, in a very declarative manner? Well, it's very simple. You stick your function... well, very simple, no, sorry. You stick your function in version control, and then you start doing pull requests and updating that code. And now, what's going to update the function? Well, I have a little cron job, a Kubernetes cron job, that every minute does a git pull of my repo with my function code, sees the new code, and then does a kubectl apply of the function manifest. It just works. So I have a CD pipeline for functions thanks to a Kubernetes cron job and Kubeless, okay? It works. It's super cool and super small. So now we have two, three minutes, and I'd like to have Arjun come up here, and he's going to tell us a little bit about how they use Kubeless at BlackRock. I'm just going to hang on to this. First off, thank you, Sebastien, for letting us speak here. This is an awesome audience and we are very excited to be here. My name is Arjun Rao and I am from BlackRock.
For those of you who do not know BlackRock, we are the world's largest asset manager, with nearly $6 trillion in assets under management. I am the engineering lead of serverless compute at BlackRock. At BlackRock, we have a very rigorous culture of questioning our assumptions and evaluating every line of code, in order to make sure that we are driving innovation on behalf of our clients, and our use of Kubeless is just another example of this. My team builds products for portfolio managers and investment research teams so that they are able to search and discover data sets they can use to power their investment research. Now, in order to have an effective search engine, it's essential to have a rich index that is easily searchable, and one of the ways you can get a rich index is by enriching the data that exists in that index. We build our index using the metadata associated with these data sets, so it's only natural that we find a way to enhance that metadata. In order to enhance this metadata, we need our developers to have a way to plug in scalable, customizable functions that they can use to transform it. This is where Kubeless comes in. What Kubeless lets our developers do is apply their business logic at any part of the data processing pipeline. The declarative mechanism of defining Kubeless functions frees them from having to worry about how to deploy functions, how to orchestrate functions, and any of those nightmares that developers have to face; they can focus solely on business logic, and on building a richer, more expressive index that can be used to evaluate the result sets sent back to the users who are looking for these data sets. We are very excited to be active users of Kubeless, and thank you so much for your time. Thanks, Arjun, yeah, pretty excited too. So thanks a lot.
Please reach out to us if you have any questions. We're just on time; we have a couple of minutes left. Check out Kubeless, and also Kubeapps from Bitnami, and at 4:30 come get a book, 25 books only. Thank you.