Okay, we are going to get started. Hello and happy Friday to everyone. Thank you so much for joining us today for today's CNCF webinar, 12-factor streaming data apps on Kubernetes. I am Jerry Fallon and I will be moderating today's webinar. We would like to welcome our presenters today, Andrew Stevenson, chief technology officer and co-founder at lenses.io, and Francisco Perez, senior back-end engineer at lenses.io.

We just have a few housekeeping items before we get started. During the webinar you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct, so please do not add anything to the chat or questions that is in violation of the code of conduct. Please be respectful of all your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io slash webinars. And with that, I will hand it over to Andrew and Francisco for today's presentation.

Thank you very much. It's a pleasure to be here. So we're going to talk about 12-factor streaming data apps on top of Kubernetes. So who am I? I'm Andrew Stevenson; as it says, I'm the CTO of lenses, on the left there with two pairs of glasses, not just one. And we also have Fran Perez, our back-end engineer, who's going to walk you through how we actually implemented lenses and its integration with Kubernetes.

First, the agenda. I'm going to give a very, very brief overview of lenses and a high-level overview of what we mean by a DataOps streaming platform. I'll follow that up with a quick demo just to show us deploying an application from lenses inside of Kubernetes. I'll then hand over to Fran, who will walk through how we actually built the framework that we use to deploy data-intensive applications.

So, to start with what we're doing at lenses.io: every organization is now trying to become data driven. To do that, they reach out to complicated open source technology. This could be Apache Kafka, it could be Apache Pulsar, it could be Kubernetes, but all these things are distributed and complex. So what we're trying to do at lenses.io is reduce the pain, the cost and the complexity of using this technology to build up a modern data streaming platform.

These are some examples of what we do and the customers we have. We have Babylon Health: Babylon Health is a unicorn startup in the UK whose goal is to provide affordable healthcare for everyone on the planet, using AI as well. They use lenses in their DataOps organization to focus on exactly that; indeed, there's nothing more important than your health, so they're using lenses in combination with Kafka and Kubernetes to provide that solution around the world. We also have, in the analytics space, Vortexa: they track seaborne freight, again over Kafka, and use lenses to get visibility into their application flows.
And we also have the Aduno Group, a card and transaction payment provider, and they all want to go real time; they're all looking towards this open source technology that is hard and painful to operate in some cases. So we do have a commercial offering, which is lenses, where we integrate with Kafka, and these are some of our other clients that we're allowed to speak about. But we also have open source contributions as well. We have a number of open source tools, mostly around Kafka. We have over a million downloads of our fast-data-dev Docker image, an all-in-one Docker that helps developers become productive, and we have a lot of open source connectors to bring data in and out of Kafka, as well as other tooling.

If you want to try lenses, you can go to lenses.io to start and get the Docker; everything is set up for you, you get the brokers in there, schema registries, lenses, other bits and pieces, so you can be productive and really get into building data-intensive applications. Or you can head on over to our GitHub and play around with our code. And if you want to contribute things, maybe connectors: awesome, we always welcome pull requests.

So, what do I mean by a data platform, at a very, very high level? You've got a source system (it could be your microservices deployed inside of Kubernetes, it could be a massive Oracle database, it could be a mainframe) and you want to get the data somewhere so you can build a data product. What you need is a buffer in between, which is where, in this case, Apache Kafka comes in. It's a buffer that allows you to transport data around to your target system for whatever use case you want; it's basically a central nervous system. But then you also have to deploy apps: you've got to deploy your intellectual property, the data landscape on top, to actually make use of it; otherwise it's just infrastructure that is burning dollars.

So what is Kafka, at a very, very high level? It's a distributed commit log. What does that mean? Well, when you write to Kafka, you write to topics; a topic is the data set inside of Kafka. Topics are then split into partitions and distributed across the cluster. As a producer, you produce data into a topic, and each message has an offset. Then, as a consumer, you can form consumer groups, and you can either just consume the latest records, or you can say: I want to consume all the records from this offset in this particular partition. So that's a very, very high-level overview of Kafka; I'm not trying to explain in depth what Kafka is, just to give a brief primer on it.

So where does lenses fit on top of this? It's at the part which I call data intensity. If I'm building my data platform and I want Kafka, I can go to AWS MSK, I can go to Azure HDInsight, I can go to Aiven, various different providers to get Kafka. The same with Kubernetes. That's what I call the tech intensity part. But to actually have a successful data platform, and to scale that data platform out and get more tenants onto it so you can build data products, you need a layer on top, and this is what lenses is providing. So we have self-service, so we can administer all the data sets inside the underlying systems such as Kafka or Elasticsearch. We have the discovery of data.
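To make that Kafka primer concrete, here is a minimal consumer sketch in Scala using the plain Apache Kafka client. This is generic Kafka code, not lenses code; the broker address, group id, topic name and offset are placeholders.

```scala
import java.time.Duration
import java.util.Properties
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import org.apache.kafka.common.TopicPartition
import scala.jdk.CollectionConverters._

object OffsetDemo extends App {
  val props = new Properties()
  props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker
  props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group")              // the consumer group
  props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.StringDeserializer")
  props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.StringDeserializer")

  val consumer = new KafkaConsumer[String, String](props)

  // Option 1: join the group and consume the latest records from every partition.
  consumer.subscribe(java.util.List.of("payments")) // placeholder topic

  // Option 2 (instead of subscribe): pin a partition and replay from offset 42,
  // the "consume from this offset in this particular partition" case above.
  // val p0 = new TopicPartition("payments", 0)
  // consumer.assign(java.util.List.of(p0))
  // consumer.seek(p0, 42L)

  val records = consumer.poll(Duration.ofSeconds(1))
  for (r <- records.asScala)
    println(s"partition=${r.partition()} offset=${r.offset()} value=${r.value()}")
  consumer.close()
}
```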
It's very important that you give the people with the business knowledge access and visibility to the data. Take COVID: I'm not a doctor, I'm a technologist, and if we can't get visibility of the data to the right people, then we don't win; we don't take advantage of all this great open source technology that's out there. You also need a way to process it. Yes, I've got Kafka, but where is my application going to go? That's where Kubernetes comes in. But then how do I deploy the app, and how do I deploy the app and monitor it and be compliant as well? I have to have security and role-based governance around all the data sets that I'm touching, especially if, like Babylon, I'm working in the healthcare industry. So Kafka from, let's say, MSK or HDInsight, and Kubernetes from AWS or Azure or Google: all great, all providing me tech intensity, but we're trying to step it up a level and work at the data plane.

Now, we're not going to talk about deploying Kafka here; how to deploy Kafka inside of Kubernetes is not what we want to focus on. What are we actually deploying? We have things like the data entities, so creation of topics, and, for example, data policies (maybe I need to mask data so I'm in compliance with GDPR). I also need my alerting rules as well. But then there are the applications, and that's what we're focusing on here and what Fran's going to go into in detail. How do we deploy that app? It's your intellectual property, the bit that's driving the business value, your data product. How do you deploy that? It doesn't matter about the infrastructure: I may want to swap out Kafka for Apache Pulsar, but my application, my data, is the protagonist; it remains constant.

So this is how we do it at a high level with lenses, and this is the great thing that Kubernetes actually gives us: if we have a Docker image and we have a config, and we have APIs and a UI (we can also do this via GitOps), then lenses will deploy that application and monitor it for you. Now, this talk is more focused towards Kubernetes, but we also deploy to other frameworks. For example, we have a lot of clients that are on premise and don't have Kubernetes yet, so we allow people to deploy to Kafka Connect. What's important here as well is the notion that we abstract away connections, because deploying applications is just one part of lenses. We also have the ability to fire SQL statements at lenses so you can debug and get that visibility into the systems. So connections are managed centrally and governed as well. Who's allowed to connect to this system is something you want to have above the technology, and manage for the data governance.

So this is a more concrete example, basically what I'm going to show you right now in the demo: lenses deploying our own SQL processor. In this case we can join, we can filter, we can aggregate, and we deploy and monitor them out of lenses, directly into Kubernetes. So what I'll do now is go to the demo. Let me just change my browser tab. So here it is: this is lenses; hopefully you can see it all right. Lenses is just a set of APIs, and the UI is one of the clients; we can automate all of this via GitOps as well if you want to. But what I want to build is actually my application landscape. This is my application landscape. This is important.
But whether it's on Kafka or on Pulsar, is that really the interesting part? This is running in Kubernetes; is it interesting to me, as, let's say, a non-technologist, where it's running? Not really. This is my intellectual property; this is the bit that is generating value. So what we have here are the different processors that are deployed. What I'm going to do now for the quick demo (I'm just going to refresh the browser, because my internet's a little bit flaky today) is go to SQL processors and deploy a new one. I want to give it a name, let's say Andrew demo. And here's my SQL. I'm going to deploy into this Kubernetes cluster; let's pick a namespace, let's put it in default. I create the processor and it's now being created; I just need to start it. Start the SQL processor, and we've deployed our real-time SQL processing inside of Kubernetes. You can see here's the pod name. And that's it. I can do this as a non-technical person, and I don't actually need to know anything about Kubernetes.

And this is what we're trying to do as well. For example, we could scale it up. Let's say we have a spike: we could scale it up, and now we're telling it to scale out this application. So that's it: I've gone from having nothing to, within five seconds, deploying an app inside of Kubernetes, all config-driven. You can do this all by the API, you can put it in version control, you can have the lenses CLI apply your desired state; that's GitOps as well. But more importantly, it's all bound by the role-based security that we have inside of lenses, and this is building that data intensity: how do you scale out your platform as well? I couldn't, for example, deploy this if I didn't have the rights. I'm logged in as admin here, so I can do everything, but otherwise I couldn't deploy a SQL processor if it touched data sets, maybe a topic in Kafka, that I'm not allowed to see. Because at the end of the day, it's all about building this topology. And at that point, that's my bit done; it was quite quick. There's my application: I just deployed that, Andrew demo. So I'll hand over to Fran to take you through how we actually built this. Fran, over to you.

Thank you, Andrew. I'm going to share my screen now. So, yeah, this one. Can you see the screen all right? So, when we are working with streaming data, we have a unique set of new challenges coming onto our plate. We have an unbounded set of data we need to manage. We have many different sources we need to interact with. But, more especially, our security model gets a lot more complicated, and we need to put governance over the access to our data; it's very important to know who is accessing our data. The main difference between real-time applications and, let's say, regular applications or microservices is that with microservices or regular applications, everything is self-contained: we have our MySQL instance deployed alongside our application, and we can access it, most probably, on localhost. But when we are talking about a streaming application, we need to interact with third-party services and we need to interact with the real world. Our data is out there; we need to bring it in.
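Looking back at Andrew's demo for a moment: everything the UI did went through the same APIs, so the deployment could equally be scripted. The sketch below is illustrative only; the endpoint path, the payload fields and the token are hypothetical stand-ins, not the documented lenses API.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object DeployProcessor extends App {
  // Hypothetical payload: name, SQL, target cluster and namespace, as in the demo.
  val body =
    """{
      |  "name": "andrew-demo",
      |  "sql": "INSERT INTO target SELECT STREAM * FROM source",
      |  "cluster": "my-k8s-cluster",
      |  "namespace": "default",
      |  "runners": 1
      |}""".stripMargin

  val request = HttpRequest.newBuilder()
    .uri(URI.create("https://lenses.example.com/api/v1/streams")) // hypothetical endpoint
    .header("Authorization", "Bearer <token>")  // auth required; role-based rules apply
    .header("Content-Type", "application/json")
    .POST(HttpRequest.BodyPublishers.ofString(body))
    .build()

  val response = HttpClient.newHttpClient()
    .send(request, HttpResponse.BodyHandlers.ofString())
  println(s"${response.statusCode()} ${response.body()}")
  // Scaling out, as shown in the demo, would be another call updating "runners".
}
```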
So, to add one more level of complexity to the game, we are talking about cloud native applications, and that was one of our first priorities when we started to talk about this new component: we need to think of the platform as a service. It's not that we are going to have full control over our platform; this is what platform-as-a-service is about. We think of the environment where our application is going to be running as an external service, and we cannot rely on having specific files or specific services running in that environment. And this concern goes beyond Kubernetes; as I said, this is a concern about cloud native applications more than about Kubernetes.

Starting to look into this and trying to bring in best practices from the industry, we started to dig deeper into the 12-factor principles, and we tried to bring some of those principles into our implementation. If you don't know what these 12 factors are, here is a very brief summary. Basically, they aim to cover all the different matters you should care about when you are implementing and deploying applications that will run in a PaaS environment. These principles were initially drafted by the Heroku team, and with time they have become a well-known pattern. For today's presentation we are going to focus on only four of them, even though we touch on some other principles. The main ones we will cover are config, backing services, processes, and, especially, dev/prod parity: the capacity to move applications across different environments.

So how are we approaching this in lenses? We split the problem into four different entities: secrets providers, connections, applications, and deployment targets. We are going to describe each of them in detail in the next slides. But before that, I want to introduce one more concept besides these four entities, and this is the template, a templating system. When we started to design the solution, we realized that there were some common concerns around those four entities; at the end of the day, they were basically the same thing: descriptors, but with different meanings. In addition, we were also influenced by other technologies, like Helm, that are already using templates. But we decided to build our own templating system, most importantly because we at lenses are also users of those entities, and we wanted to give governance and control over them to the lenses users, so the user could control those entities and bring security and governance to them.

So we ended up having three different template categories. I'm saying three instead of four because, at this moment, we are not modeling secrets providers with templates; we are considering moving them, but that's not where we are at this moment. What we currently have are templates for connections, for applications, and for deployments. Having these three categories, we can now model a connection with a template, and we can model an application with a template. But most important is that we can model an application with a template that has an explicit reference to another template, which in this case could be a connection.
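To picture the templating system Fran is describing, here is a minimal Scala sketch of how templates and their three categories could be modeled. This is a reconstruction for illustration; all of the names and fields are invented, not the actual lenses data model.

```scala
object TemplateModel {
  // A template is a descriptor: a named set of property slots.
  sealed trait PropertyKind
  case object Direct    extends PropertyKind // value supplied when instantiating
  case object Reference extends PropertyKind // value resolved from another template's instance

  final case class PropertySlot(key: String, kind: PropertyKind, secret: Boolean = false)

  sealed trait Category
  case object Connections  extends Category
  case object Applications extends Category
  case object Deployments  extends Category

  final case class Template(
    name: String,
    category: Category,
    slots: List[PropertySlot],
    references: List[String] = Nil // templates this one explicitly points at
  )

  // A connection template: every property is treated as sensitive, as explained later.
  val kafkaConnection: Template = Template(
    "kafka-connection", Connections,
    List(
      PropertySlot("bootstrap.servers", Direct, secret = true),
      PropertySlot("security.protocol", Direct, secret = true)))

  // An application template referencing the connection template: this explicit
  // link is what lets governance restrict which connections an app may use.
  val sqlProcessor: Template = Template(
    "sql-processor", Applications,
    List(
      PropertySlot("sql", Direct),
      PropertySlot("kafka", Reference)),
    references = List("kafka-connection"))
}
```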
And in that way we can apply governance and control over the connections that those applications can use. One more detail: we are also using these connection entities to build our data catalog.

So, moving forward, let's focus now on the secrets provider. I'm going to use the same diagram that Andrew showed at the beginning, but focusing on each of the different parts that our entities are modeling. As I said, let's start with the secrets provider. Security was, from the very beginning, one of our top priorities, if not the very first one. We were wondering: where am I going to store my sensitive data? Why not allow cloud secrets providers to store it? Take Azure Key Vault, for example: if our clients are already on Azure, why wouldn't they be interested in storing their sensitive information there? So we ended up deciding that we wanted to allow this. Ideally, we would inject references to the secrets, and those references would be resolved by the application at runtime to create the real connection. That way, we wouldn't be exposing the secrets to anything other than the real application.

To resolve these secret properties at runtime, we implemented the reference as a custom protocol containing information about the secrets provider managing that specific secret, information about the template key, and some other details, like the data type and whether it is mounted or not. We still need to create a trusted relationship between the application and the secrets provider so the application can fetch those details. But the good thing about this model is that, whether we inject specific credentials for the secrets provider or rely on a service account, we are able to revoke access to any secrets in any of those services just by restricting a single account.

So that was everything about secrets providers. Jumping into connections, we are now going to see what connections are and how connections and secrets providers are linked. When we talk about connections, what we are referring to is the actual details we need to open a connection between the third-party service and our application. For instance, if we are talking about Kafka, we need to know the bootstrap servers we have to connect to. And we consider each of these properties a sensitive piece of information. We treat them as sensitive basically because they are details that you need to access your data, and anything related to accessing your data we consider a secret: it's not only about the user and password, but also about the sort of security protocol you will be using. That's why we treat them that way.

Well, as we said before, we are modeling these entities with templates. Connections were the first entity we modeled with templates, and since it's a category in itself, we can now have multiple templates for connections: a Kafka connection template, a Redis connection template, a MySQL connection template, and so on. And, as we can see in the diagram, we can then have different instances of the same Kafka or Redis connection.
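The runtime reference resolution Fran describes can be pictured like this. The secret:// scheme, the URI layout and the provider interface below are invented for illustration; the real lenses reference format is internal and not shown in the talk.

```scala
import java.net.URI

// Hypothetical reference format: secret://<provider>/<connection>/<key>?mounted=true
trait SecretsProvider {
  def fetch(path: String, key: String): String // trusted via credentials or a service account
}

object SecretResolver {
  def resolve(value: String, providers: Map[String, SecretsProvider]): String =
    if (!value.startsWith("secret://")) value // plain value, pass it through
    else {
      val uri      = URI.create(value)
      val provider = providers(uri.getHost)                  // which secrets manager holds it
      val parts    = uri.getPath.stripPrefix("/").split("/") // connection name plus template key
      provider.fetch(parts(0), parts(1))
    }
}

// At runtime the application sees only references; the real values never leave
// the trusted exchange between the app and the provider. Restricting the single
// account used for that exchange cuts off every secret behind it.
```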
We can think of these templates as Java classes, and the connection instances as objects of those classes. In the diagram, we have production and development instances of the Kafka connection template, but we also have production and staging instances of the Redis connection template. In that same diagram, we can see the governance layer we mentioned before: we are applying security and control over those entities. That's one of the ways the templates help us: we are able to restrict access to a whole service just by restricting access to the instances created from a specific template.

Let's see how connections and secrets providers are linked. Basically, here we have the connection template, saying that for this type of connection I need to provide a bootstrap server, I need to provide a protocol, keys and passwords: all the properties I need to create a connection to Kafka, or just a subset of them, as in this example. Basically, we have a list of properties that we need to know to open that connection, and then we create instances of that template. In this case, for connections, what we are providing is who the secrets provider is that is storing our secrets, our sensitive information. In this example we have two different secrets providers: we have lenses itself, and we also have Azure Key Vault. So, for this example, we have an instance of the Kafka connection template for development (the name is kafka-dev) and we are saying that all the properties are stored in lenses. What we are doing is creating a new entry in the secrets provider, storing the real value of our connection. So we have now linked connections with secrets providers: we know where we are storing our properties, in a secure way.

Let's jump into the next entity: applications. With applications, we are referring only to what the application itself needs to run. We split out the application concern from the deployment concern. When we are talking about the application, we are not talking about what the underlying platform needs to run your application; we only care about which application it is and what the input parameters for your application are. For this, in the example of the SQL processor, we have the SQL query we want to run, but we also have the input parameters to open the connection, because in the end the Kafka connection is also an input parameter that your application needs. For this, we have made a distinction in the application templates between direct values and reference values. References, as we were saying before, allow us to point to other templates. So in our application manifest, when we are describing an application, we are going to say that a specific property is not going to be provided directly by the user when creating the application, but is going to come from an instance of a different template, a connection. So let's see how we provide these references; I'm going to go through an example, and I think that will make it clear.
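Continuing the illustrative model from before, the class/instance analogy looks roughly like this in Scala: two instances of one Kafka connection template, with their property values held by different secrets providers. Every name here is made up.

```scala
object ConnectionInstances {
  // "Template as class, connection as object": where the real value of each
  // property lives is delegated to a secrets provider.
  final case class SecretRef(provider: String, key: String)

  final case class ConnectionInstance(
    name: String,
    template: String, // e.g. the kafka-connection template from the earlier sketch
    props: Map[String, SecretRef]
  )

  val kafkaDev = ConnectionInstance(
    "kafka-dev", "kafka-connection",
    Map(
      "bootstrap.servers" -> SecretRef("lenses", "kafka-dev/bootstrap.servers"),
      "security.protocol" -> SecretRef("lenses", "kafka-dev/security.protocol")))

  val kafkaProd = ConnectionInstance(
    "kafka-prod", "kafka-connection",
    Map(
      "bootstrap.servers" -> SecretRef("azure-key-vault", "kafka-prod-bootstrap"),
      "security.protocol" -> SecretRef("azure-key-vault", "kafka-prod-protocol")))

  // Governance hangs off the template: deny a user the kafka-connection template
  // and they lose access to every instance of it, i.e. to the whole service.
  def canUse(userTemplates: Set[String], conn: ConnectionInstance): Boolean =
    userTemplates.contains(conn.template)
}
```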
So, here in this example we have what we already saw before: the connection instances, where we have our properties stored, and the Kafka connection template these instances were created from. Then we have the new element, which is the application template. The application template, again, is exactly the same idea as the Kafka connection template: it's just a set of properties that we need to fulfil to be able to run our application. But in this case, instead of providing the values for each of those properties, we are mapping which property of the application is provided by which property coming from a different template. Once we have this and we want to create an instance of the application template, the only thing we need to provide is the SQL, which is a direct property, and the name of the Kafka connection instance that is going to resolve all the references we mapped here.

The important thing about this is that, just by changing this Kafka connection reference here, we are able to very easily change the source of the data we are reading from. So we are able to promote applications from one environment to another, bringing information from one place to another. And this is how it works in the end: when we create the application, what we are really registering is that we are fulfilling all the properties that our application template defines, but the values come as references to the underlying system where the value is stored. So when the application is running, at runtime, it will be able to extract the value and get it at that moment.

And finally, we jump into the deployment entity. As we said before, we extracted the concerns around an application into two different ones, the application and the deployment, basically because, by abstracting out what the underlying platform needs to know to be able to run your application, we are able to reuse all the previous steps to deploy the same application, with the same connections and with the same security model, into many different targets, just by modeling the target as a different entity with a different template. We can fulfil those details independently and make the application work and run out there. And yes, as Andrew mentioned, this is not only about Kubernetes; it's about any possible deployment target. At this moment, we are focusing on Kubernetes and Kafka Connect.

And bringing in the final overview, mixing everything together: there are a couple more diagrams, but let's say this is how we cook everything, the four different entities, at the same time. What we are doing when bringing in the deployment template is building an ephemeral virtual template, composing it from two different templates: a deployment and an application. We have the relationship we already mentioned, the mapping between the application and the connection, and we have the different instances of those connections. When we create the deployment unit, we mix in the input parameters coming from the user, we inject the connection that we are going to use for our application, and we validate that we have everything our template requires. Once we have validated that, we are able to deploy the application into Kubernetes.
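Putting those pieces together, here is a sketch of the validation step Fran describes: the user supplies the direct values (the SQL) plus the name of a connection instance, and a deployable unit is only produced if every slot in the template can be filled. It reuses the types from the two sketches above, and again the names are illustrative.

```scala
import TemplateModel._
import ConnectionInstances._

object DeployableUnits {
  final case class AppInstance(
    name: String,
    template: Template,                          // e.g. the sql-processor template
    direct: Map[String, String],                 // e.g. "sql" -> "INSERT INTO ..."
    connections: Map[String, ConnectionInstance] // e.g. "kafka" -> kafkaDev
  )

  // Fill every slot or fail: this validation is what turns the pieces into a
  // unit we know can be deployed, to Kubernetes or any other target, later on.
  def buildDeployableUnit(app: AppInstance): Either[String, Map[String, String]] = {
    val resolved = app.template.slots.map { slot =>
      slot.kind match {
        case Direct    => slot.key -> app.direct.get(slot.key)
        case Reference => slot.key -> app.connections.get(slot.key).map(_.name)
      }
    }
    val missing = resolved.collect { case (key, None) => key }
    if (missing.nonEmpty) Left(s"missing properties: ${missing.mkString(", ")}")
    else Right(resolved.collect { case (key, Some(value)) => key -> value }.toMap)
  }

  // Promotion between environments is just swapping the referenced connection:
  //   buildDeployableUnit(app.copy(connections = Map("kafka" -> kafkaProd)))
}
```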
So, as soon as this step is done, we have the certainty that this is a deployable unit that we will be able to deploy at any time to Kubernetes. And then, once the application is there, that is when it is going to fetch the real details, the actual details it is going to use to open the connection between the application and the third-party service.

Well, as we have been saying through almost the whole talk, this framework we are building is not only for SQL processors, our own application; it scales to any type of application. If we need to support a new sort of application, the only thing we need to do is create a new template, a custom template, for our application. And if we want to connect to a service that we don't already have a template for, we can create a connection template for it as well. So we are able to create applications, we are able to create connections, and we are able to model any possible application, not only the SQL processors.

The last piece I want to mention is that we are not only focusing on the write side of the spectrum. We are also taking care of the application once it is running in the underlying system. It's not only about deploying; it's also about knowing and monitoring what's happening there. So we have built a set of watchers for each of the underlying platforms that we support, and we translate the information we get from them into our own events via the API, and then we are able to handle those events and create different views, materializations, store metrics, and so on. These are two very brief snapshots from our internal documentation. This is the general, global overview: how we link the deployment side with the watchers, the read side, and how we then create an output channel, a stream, that contains events we can use to materialize different views or to store metrics, as we already said.

Finally, this is just one more thing, a very specific implementation detail on Kubernetes: how we are doing this injection and this resolution of the secret properties at runtime. Basically, when we deploy the processor, we create an init container that is the one connecting to the secrets provider. It resolves the secrets and creates a file in a location shared between the init container and the main container that will be running the application. But everything is inside the same pod, so it's ephemeral in the end and won't be persisted anywhere.

That's all I have. Andrew, anything you want to... sorry, I have one more slide; I forgot about this one. Well, basically, that's the summary. What we mean by all this stuff we have been talking about is that we are enabling DataOps: we are able to move applications across different environments, and especially we are bringing governance and security control over the locations where your data is placed and stored. Now, yes, this is all I have. Andrew, if you want to add anything.

Okay, so I think you covered everything there. I think the important point is that we separate out the connections, because that gives us, from a governance point of view, a lot more control. That's also why we didn't rely on a specific Kubernetes technology like Helm: it doesn't give us the governance over the connection framework that we need.
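The init-container detail can be sketched as the small program that would run inside that container, reusing the SecretsProvider and SecretResolver from the earlier sketch. The shared path, the environment-variable convention and the provider wiring are all hypothetical; the actual implementation is internal to lenses.

```scala
import java.nio.file.{Files, Paths}
import scala.jdk.CollectionConverters._

// Runs in the init container: resolve the secret:// references found in the
// environment and write the results to a volume shared with the main container.
// The file lives only inside the pod (for instance an emptyDir), so it is
// ephemeral and never persisted anywhere.
object InitResolver extends App {
  val providers: Map[String, SecretsProvider] = Map.empty // wired via trusted credentials

  val lines = sys.env.collect {
    case (key, value) if value.startsWith("secret://") =>
      s"$key=${SecretResolver.resolve(value, providers)}"
  }

  val target = Paths.get("/mnt/shared/app.properties") // hypothetical shared mount
  Files.createDirectories(target.getParent)
  Files.write(target, lines.toList.asJava)
  // The main container starts afterwards and reads the resolved properties file.
}
```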
And connections are also widely used outside of just deploying apps inside of lenses, for example to build the data catalog. And we are also not just deploying to Kubernetes. Kubernetes is great, but, you know, in five years' time I'm sure there'll be a Kubernetes mark two, a different version of it, and, as I said at the start, the application landscape is the important bit. So if you want to try lenses, you can go to lenses.io slash start, download the box, and we're always interested to hear your feedback. I think that's us done, so we can take any questions that we have.

Okay, well, thank you both for a wonderful presentation. We have about 15-plus minutes left for questions, so if you have anything you would like to inquire about, please drop it into the Q&A box and we will get to as many as we can. (The first question is read from the Q&A; it is partly inaudible.)

No, so lenses will run across any Kafka. It can be any distribution of Kafka, on any cloud, with any Kubernetes cluster as well: OpenShift, for example, or AKS or EKS. The SQL engine we have is our own SQL engine. There's some advantage to that, we have the control, but, like I say, maybe you decide that Apache Kafka isn't for you and you want to go to Pulsar, or you want to go to Redis Streams, for example; we have experimental support for that as well. Do we have anything else? Anyone else at all?

Would you be able to elaborate a bit more on your own SQL processor?

Yes. The SQL processor basically takes the SQL, and at the moment it boils down to a Kafka Streams app. Kafka Streams is an Apache Kafka API for doing the joins and aggregates; you can write the Java code yourself if you want to. Our own SQL engine takes the SQL and basically translates it down to a Kafka Streams application that we then deploy. We can run it in Kubernetes, we can deploy it inside of Kafka Connect, and we can actually run it inside of lenses as well. But that's only one part of the SQL engine, and, you know, we want to be able to swap Kafka out for other streaming technology. We also have what we call the snapshot engine, which lets you run a relational-database-style query on top of Kafka itself, and also on the other systems we connect to as part of the catalog, Elasticsearch for example. Or let's say you want to do Redis: last year I was at the Redis keynote showing the SQL engine running on top of Redis Streams.

Okay. Would anybody else like to ask any questions? How are you enforcing security and governance?

So lenses has a role-based security model. When you log into lenses, and you can log in via LDAP, Active Directory or, for example, single sign-on, you map into a group, and those groups have virtual namespaces inside them and a set of permissions on top of that. So we're able, for example, to provide virtual multi-tenancy on top of Kafka, and that's enforced on every API. You can have, for example, a create-topic permission or a view-data permission. And then, when you're deploying an application, we respect those role-based rules: I can't deploy an application if I don't have the deploy-SQL-processor role, and I can't create a processor on a topic that I'm not allowed to see. And everything is audited. We also have another thing in here, data policies, so we can actually mask the data with the SQL engine as well.
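To give a feel for what "the SQL boils down to a Kafka Streams app" means, here is roughly what a trivial filtering processor could compile to, written by hand with the Kafka Streams Scala DSL. This is plain Kafka Streams code with placeholder topic names, not output from the lenses engine.

```scala
import java.util.Properties
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.serialization.Serdes._

// Hand-written equivalent of something like:
//   INSERT INTO payments_gbp SELECT STREAM * FROM payments WHERE currency = 'GBP'
object FilterProcessor extends App {
  val builder = new StreamsBuilder()

  builder
    .stream[String, String]("payments")          // placeholder source topic
    .filter((_, value) => value.contains("GBP")) // stand-in for the WHERE clause
    .to("payments_gbp")                          // placeholder target topic

  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "filter-demo")
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")

  new KafkaStreams(builder.build(), props).start()
}
```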
So that brings advantages with banks, for example: they come in, and role-based security gives them the multi-tenancy, they have the auditing, and they also have obfuscation, all done by the SQL as well.

You mentioned a data catalog. Can you elaborate on it?

Yeah, so Kafka is one of the primary systems we connect to, so we see all the topics inside of Kafka and any schemas associated with them. But there's more to your data platform than just Kafka; maybe you want to bring data in and out of Elasticsearch. So we also have the ability to see the data in Elasticsearch, not only the schemas of the indexes: we can also fire SQL at it, so you can query it, bound by the same role-based security and governance we have, with the auditing and the data policies as well. And we're extending that out; for example, we'll have Postgres support soon. The idea is that I've got a data catalog, I discover data, and then I want to build data-driven apps from that data catalog, really building up that data intensity above the tech intensity that Kubernetes provides.

We are using Flux with GitOps for our application deployments. Can you explain the GitOps support for Kafka Connect apps?

Yes, so everything in lenses is an API. We have the lenses CLI, and that's for a more traditional CI/CD process. But we are working on a GitOps operator; in fact, a few weeks ago at the Kafka Summit we did a demonstration of this, where there's effectively a lenses operator monitoring Git that will speak to lenses and apply the desired state, because we have all the knowledge and the control inside of lenses to make sure that we're applying the governance as well. So that is coming; it is being worked on and will be available soon.

Excellent. Just a quick reminder to everybody: if you have questions, please leave them in the Q&A box at the bottom of your screen and not the chat section, just so that I'll make sure I see them when we're doing the questions. This next question here: does lenses monitor the real-time applications after deploying them?

Yes, this was part of what Fran was describing. We have a write side and a read side, so we're watching the deployments: we monitor them to make sure that they remain healthy, you can scale them up and scale them down if you want to, and you get the metrics on that as well. We also do that for Kafka connectors. And this framework allows us to build that app catalog to go with the data catalog, so we deploy and monitor the whole life cycle, with the governance as well.

What other open source databases are you supporting?

I'm not quite sure what that question means, but what we are doing is expanding the data catalog (maybe the question is around the data catalog) to have visibility into different systems such as Postgres. But maybe it's also about moving beyond Kafka, which is certainly something we're doing as well, for example Redis Streams, and there's Apache Pulsar too; we certainly want to bring the same data intensity that we built for the Kafka ecosystem onto those other systems.

Does it work across any flavor of Kafka, like MSK, and any flavor of Kubernetes, etc.?

Yeah, so it's any Kafka. Obviously we want you to be on one of the more recent versions, but it could be MSK, it could be Azure HDInsight, it could be Kafka from Aiven or Instaclustr, or your own on-premise Apache Kafka.
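The topic-level rule Andrew describes ("I can't create a processor on a topic that I'm not allowed to see") amounts to a check like the following simplified sketch. The real lenses permission model, with its groups and virtual namespaces, is richer than this, and the permission name here is invented.

```scala
object RbacSketch extends App {
  final case class Group(namespaces: Set[String], permissions: Set[String])
  final case class User(groups: Set[Group])

  // A user may deploy a processor only if they hold the permission AND every
  // topic the processor touches falls inside a namespace visible to them.
  def canDeployProcessor(user: User, topics: Set[String]): Boolean = {
    val allowed = user.groups.exists(_.permissions.contains("DeploySqlProcessor"))
    val visible = topics.forall(topic =>
      user.groups.exists(_.namespaces.exists(ns => topic.startsWith(ns))))
    allowed && visible
  }

  // A group scoped to the "payments-" namespace can deploy over payments-events
  // but not over hr-salaries, even though it holds the deploy permission.
  val analyst = User(Set(Group(Set("payments-"), Set("DeploySqlProcessor"))))
  assert(canDeployProcessor(analyst, Set("payments-events")))
  assert(!canDeployProcessor(analyst, Set("hr-salaries")))
}
```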
It doesn't matter to us, and it's the same with Kubernetes as well. And if you want to try us out, we're in the AWS Marketplace and the Azure HDInsight marketplace; there are CloudFormation templates as well. We also have a portal where we can actually deploy lenses and hook it in, for example, to MSK.

So, can I deploy my own Kafka Streams application through your framework?

In the current version you can't deploy it through us, but you can deploy it outside, use our SDKs or REST endpoints, and register it with lenses, and we will monitor it as well. But this framework that Fran has put in place will allow us to deploy anything, effectively, that's in a Docker image: certainly when it comes to Kubernetes, or, for example, maybe Nomad, or Azure Container Instances, or Fargate, right? Lenses could in theory deploy itself. So in this flow, you write your own app, register the template with lenses, build up the app catalog, and with a combination of the data catalog we can then deploy the relevant apps to do what you want to do. What's your business objective? Move data from A to B; that's normally it, it's not standing up applications. So this allows us to do that going forward, but you can register your own apps now. Vortexa, the customer example from earlier, do this a lot: they registered all their Kafka Streams applications with lenses, and they suddenly got loads of visibility into their application landscape and were able to lift the lid on the black box of what was happening.

We are using Flux with GitOps for our application deployments. Is there any plan for GitOps support for Kafka Connect applications? You mentioned Kubernetes mode and Connect mode for SQL processors; if I am correct, could you elaborate a bit more and explain which is more scalable?

So, we're working on the GitOps operator for lenses. That operator could run inside of Kubernetes or outside of Kubernetes. What it's doing is telling lenses to deploy an application, and lenses may be configured to deploy it into Kubernetes, or it may be configured to deploy it into Kafka Connect. It makes no difference, so it will work across them both. This is why we have the separate templates, one for deployment as well: Kubernetes is a deployment template and Kafka Connect is a deployment template, and we can choose. Now, which is more scalable, Kafka Connect or Kubernetes? Well, it's Kubernetes, right? Kafka Connect doesn't really handle multi-tenancy that well. By using Kafka Connect on Kubernetes, we can effectively create a Kafka Connect cluster per connector instance and get a lot more multi-tenancy in there, so that, for example, if you add another connector, you don't disrupt the flow of another instance that's deployed there. We see this quite a lot: with Connect, you end up having a lot of clusters to manage, and Kubernetes is a better way to scale it out. But we provide all the governance around that, so whether Kafka Connect or Kubernetes is your deployment target, you can't deploy unless you have the correct permission model, and we audit all of that.

You mentioned a, excuse me, you mentioned a role-based access control system. Is there support for single sign-on, for example with Okta or any others?

Yeah, so we support single sign-on, and we have Okta, OneLogin, Keycloak, and Azure single sign-on as well. That's in addition to the basic auth that we have, plus Kerberos, LDAP, and Active Directory.

Can I easily try out what you presented today on, say, my laptop?
Yeah, so if you go to lenses.io slash start, you can download the Docker. We have fast-data-dev and the lenses box; you can download the all-in-one Docker. It has data generators in there, everything to get going, and it's free for developers to use.

Are you offering a managed solution?

We will soon have a managed solution for lenses, but we don't do it for Kafka, because there are great offerings out there already for Kafka, and the same with Kubernetes, but we will hook into them as well.

Okay. Is anybody else...

I was just going to finish off, really, on that: you can deploy lenses anywhere. You can deploy it in the cloud, you can deploy it on an EC2 instance, you can deploy lenses inside of Kubernetes (we have a Helm chart for that as well), or you can deploy it on-prem; that's not a problem. And we also have a portal, which gives you an aggregated view of all your different lenses instances that are deployed. For example, we have some large retail clients in the US that have 300 to 350 deployments of lenses.

Okay. What about distributed tracing for Kafka streaming apps?

We don't have that at the moment; we're looking at how we can integrate it.

Okay. Just another heads-up, everyone: if you have any questions, please drop them into the Q&A box, just so we can keep all the questions in there, separate from the chat; that'd be great. Does the application deployment work only with Java or Scala applications? Would it work with .NET apps, for example?

So, if you're deploying to Kafka Connect, it would have to be a JVM app, because Kafka Connect is JVM. If you're deploying to Kubernetes, we've abstracted that away: you put an app inside a Docker image, it's Docker plus config, and then lenses will be able to deploy and manage that, bound by the security and governance, as Fran explained. So yes, you could write a Python app, you could write a .NET app, register the template with lenses, and it's part of the app catalog; we can then deploy and manage those, effectively, Docker instances, right? It could be bash inside, right?

Do we have anyone else who would like to ask a question? We've got one minute left. All right, well, if no one has anything else they would like to ask, I guess we will wrap it up. I want to thank our presenters, Andrew and Fran, for a wonderful presentation today and for a great Q&A, and I would like to thank everyone for joining us today. As I said before, the recording and slides will be posted later today to the CNCF webinar page at cncf.io slash webinars. And with that being said, we will wrap it up. Thank you, everyone, for joining us; take care, stay safe, have a wonderful weekend, and we will see you next time. Okay, thank you.