Okay, hello everyone. This is the first time I'm actually doing a webinar on Twitch, so I hope you can all hear me. I'd like to take a few minutes to talk about something we've been working on a lot over the last years at Lightbend, and something I've been thinking about a lot: a little bit about where we are today, but also some thoughts on where we would like to go in the future. It revolves a lot around cloud and edge and what we're doing with Kalix, but most of it applies to Akka as well. This session is titled "Tackling the Cloud-Edge Continuum", and hopefully you will get more clarity on what I mean by that throughout this talk. My name is Jonas Bonér. I'm the CEO and founder of Lightbend, and I'm also the creator of Akka, based in Sweden. So let's get started. Today's cloud infrastructure is, in general, really amazing. It's so good that we've been spoiled by it, I think, to the point where we almost take it for granted. It's almost hard to remember how it was 10, 12, 15 years back, when we were just starting to tinker with the cloud: how hard it actually was, how much work we had to do, reinventing the wheel over and over again. I'm talking about the world before containers, virtualization, Kubernetes, and the whole ecosystem around Kubernetes. I remember when I started Akka in 2008-2009, all the work we had to do in those first years to make Akka run efficiently on-prem and also on AWS for clients, and the work that every client had to redo over and over again. So the state of the art is definitely amazing when it comes to cloud infrastructure.
But I don't think it's all good. It has also become very, very complex; there's almost too much of the good stuff. There are too many good products, too many decisions, and we're all drowning in them, trying to ensure we do what's best for the specific app, the specific use case, or the specific customer. The options we're all facing can sometimes be completely overwhelming, almost a blocking factor when it comes to making decisions and moving forward. This image is taken directly from the CNCF website, and it shows the vast ecosystem we now have under the CNCF, which is completely staggering. But at the same time, how can someone navigate it efficiently? Do I need to learn about all these products in order to choose which ones to use? And once I've chosen the, I don't know, three to seven that are needed, how do I make sure they all work together? How can I compose them? As most people, including myself, have learned the hard way, when you compose distinct subsystems or products into one larger, cohesive system, it's usually at the edges that things break. How can we ensure that our SLAs are kept when we move from one product to another and compose them? How can we ensure that data integrity, consistency, and all these hard things are maintained for us? That's really, really hard, and the answer probably differs depending on which of these products you are putting together. So as a matter of fact, many of us are stuck first building, and then maintaining, a non-trivial application stack ourselves.
In the simplest case that often includes a load balancer, an ingress router, some sort of API gateway, caches; we usually build our system on some sort of app framework for microservices. Often we need eventing as well, which means an event broker or some sort of message broker; we need to put it all in a database, and different types of use cases often mean different types of databases, to make sure we fully optimize data storage and queries for each use case. All of this is, of course, very hard. Most applications then need fully staffed operations, 24/7, making sure all of this continues to function and moves forward. I was really excited when I first learned about serverless. AWS Lambda, I think, was really revolutionary when it came out. It really pointed towards the future, towards a new, better world in a way. But for me, serverless has always been a developer experience. It's too good, too revolutionary, too forward-thinking to only be bundled into one type of product, Function-as-a-Service, where it originated. Honestly, I think it's the future of how all cloud applications will be written, and, as we'll talk about, how people will develop for the edge as well. It's really the future of the way we will write and consume software. So it's way bigger than Function-as-a-Service, which I'm very grateful for as the starting point.
Serverless really means, at least for me, a lot of things, but one of its main traits is zero-ops, meaning you don't have to carry the burden of running and operating the code. All you need to do, if zero-ops lives up to its promise, is focus on building the code and committing it, throwing it up into the cloud, into whatever platform you're running on. As I said, FaaS really showed the way here, but in a way it got stuck halfway. It addresses communication, workflow, and data-pipelining types of use cases very well, but in a stateless manner. That's definitely a big piece of the puzzle in how we build applications today, but it's just a piece. It largely ignored the biggest, hardest problem in distributed systems, in my opinion, which is state. How do you manage state in a distributed system? And not just shoving it into a database somewhere, but how can you actually efficiently manage state inside the application, inside the services; how do you communicate, how do you coordinate, and so on. It also didn't do such a good job of fully abstracting over all infrastructure. It abstracted over the message broker to really make messaging part of the programming model, but not necessarily databases, caches, API gateways, and all these other things. It varies depending on the product, of course, but Function-as-a-Service lent itself mostly to stateless types of workloads.
Also, many applications and products today call themselves serverless, which is great, I think, and they do provide a serverless experience, but they all do so individually: the databases are serverless, or the message brokers are serverless, or the event-based systems are serverless, and so on. That's all great, but it actually leaves the developer with a complex integration project. Even if all these APIs are truly serverless, we still have to stitch them together into one single functional system, which can also be very hard. So the question I've been pondering the last couple of years is: can we do better? What lies beyond serverless, beyond the current incarnation, the current state of the art, of serverless? I really believe we can do better, that we can take yet another step up the ladder of abstractions and make life easier for developers. And what I think we really need is vertical integration. Stephen O'Grady of RedMonk wrote a great article where he talks about vertical integration, the collision between platforms and the database, and one of the quotes in that article is: "There are already too many primitives for engineers to deeply understand and manage them all, and more arrive by the day. And if that were not the case, there is too little upside for the overwhelming majority of organizations to select, implement, integrate, operate and secure every last component and service. Time spent managing enterprise infrastructure is time not spent building its own business."
Going back to Alfred North Whitehead's timeless wisdom: "Civilization advances by extending the number of important operations which we can perform without thinking about them." I think this quote very much applies to the software industry. We need to continue climbing the ladder of abstractions and automate, making as much as possible a commodity that we don't have to think about. So I think we need to move from serverless to, say, "databaseless" as one example; serverless should fully incorporate vertical integration. If we take a look at how the application stack looked ten years ago, or still looks if you're running it on-prem: the red boxes here are the things that you as a developer, or you as a company, have to manage yourself. When you run self-managed on-prem, that's everything: the business logic, the framework, the databases, the transport security, Kubernetes if you happen to run that, the operating system, all the way down to the hardware. The promise of the cloud is that it got us basically halfway. The cloud providers now provide great Kubernetes services that you can just throw Docker containers on, and they manage everything from the hardware all the way up to the operating system. But the rest is still on us as developers, to a large extent.
What we need, I think, is a new category of platforms to which you can outsource everything but the business logic. There's really no reason why serverless, and future serverless platforms, shouldn't be able to manage everything but the business logic for you, and still allow you to build not just a subset of applications but general-purpose, full-blown business or enterprise applications. At least for me, this is the vision, and what I think we need to move towards. It's a great example of vertical integration, where the platform takes care of everything but the one thing it can't: your business logic. By doing so, it removes all the complexity of setting it up, running it, managing it, upgrading it, and so on, for the whole lifetime of the application. So that is where we are with cloud today, and where I think we need to go. This is, of course, something we've been working on a lot at Lightbend, and I think we have a solution to this vertical integration problem in Kalix, but I'll get back to that later. First, I want to say some things about edge computing as well. Edge, I think, is really the natural extension of the cloud. It's not really a separate thing; I see it as a continuum, as I'll talk more about later. Gartner said in 2021 that edge computing is actually being implemented today in many of their clients' environments, enabling entirely new applications and data models. Simply put, edge has moved from concept and hype into successful vertical industry implementations, with general-purpose platform status approaching rapidly.
And I can really echo this when talking to our clients, many in the Global 2000 category. They're really starting to embrace edge, but many do so in an ad hoc way, and many of them are struggling with how to combine cloud and edge. I think that's unfortunate, as I'll get to in a second. There has of course been a lot of hype around edge the last five years or so, but it's definitely not just talk; the edge is already here, at least in my experience talking to a lot of these companies. We already have really well-built-out edge infrastructure. Most of the cloud providers offer small, scaled-down data centers out at the edge that still allow a programming and ops environment very similar to the one in the regular cloud; you can often run Kubernetes in these small edge clusters. CDN networks allow you to attach compute to the static data that they've always served; we've seen companies like Cloudflare and Fastly coming up with programming models that allow you to run computations out there. And furthest out — well, the furthest out is probably the devices themselves, but before we make the jump to the devices — we have the telcos, which allow running applications inside the actual cell towers for the lowest latency possible. And 5G and this new trend of local-first software are really changing the game.
It's too big a topic to go into how 5G and so on will change the game, but I really think it will enable a completely new category of use cases and allow companies to serve their customers in a much faster and more reliable way. And that's nothing to be surprised about, because customers have always become more and more picky and more and more demanding every year. Having the ability to move data and compute closer and closer to where the end user physically is can of course be a competitive advantage for many companies, because being able to co-locate data, compute, and end user means you can serve those users with ultra-low latency and really amazing availability. If you already have compute and data right there where the user is, there's no need to go fetch it somewhere else, so that latency is not necessary. And if the database you're using down in the backend cloud goes down, or the network goes down, you're still fine; you can just continue serving those users as if nothing has happened, because you have the data that's needed right where you are. I think that's very, very exciting. And we are moving beyond edge computing: in the last five years, most have talked about moving compute out to the edge, but I really think that's only half of the story. Being able to be stateful, having a holistic approach to compute and data in physical proximity to where the user is — that is really going to change the game here. And just to give you some numbers, there's a huge market on the rise here.
Gartner predicted that the edge market, which in 2020 was already around 4.6 billion, will go up to 61.1 billion — quite a rapid rise. And all kinds of edge use cases are out there; this is just a short list, and there are of course many more. Some of the ones I've personally been involved in are autonomous vehicles, retail, trading systems, healthcare, emergency services, factories, stadium events — serving sports and concerts very efficiently, locally, and gaming often falls into that category too — farming, financial services, smart homes, and so on. So there's a growing list of edge use cases that can really change the game. And here's the tsunami warning: Gartner predicts that by 2025, 75% of all enterprise-generated data will be created and processed at the edge itself. That's an increase from about 10% today — or actually last year, if I remember correctly. This is a game changer, and it of course means that we have to keep as much data at the edge as possible, and move processing to the edge, so we can actually tackle as much data as possible out there, instead of channeling it all the way back to the backend cloud, processing it there, and then going back out with the intelligence, the answers, the insights that come from processing that data. You need to be able to process it out on the edge itself. And I've already talked about the obvious benefits of real-time processing, meaning faster answers to the users.
Things can be way more resilient, as I talked about, because we don't rely on a stable connection to the backend cloud. It's also a lot more resource-efficient and environmentally friendly, helping tackle climate change by consuming less energy, which we all want to do these days. So all of this is quite exciting. That said, it's of course all trade-offs; I'll go into that in a second. First I just want to say that edge means different things to different people. The way I see it is as a hierarchy of layers. It's not black and white; it's not that either you're in the cloud or you're at the edge with nothing in between — there's a lot of gray area here. There are definitely more than two layers, I think, but if we have to be reasonably coarse-grained here, we have at least: cloud, near edge, far edge, and the devices. Some call the far edge "edge clusters", some call the near edge "regional"; we haven't really settled on a good common vocabulary here yet, I think. But it's really more and more points of presence, all the way out to the devices, where there can be literally hundreds of millions, depending on what kind of use case you have. And the interesting thing is that each of these layers has its own opportunities for us as developers, but also its limitations, and sometimes we need to lean into the constraints and limitations to really get those opportunities.
So let me just walk you through the way I see it, at least: further out towards the devices on one side, and further in towards the cloud on the other. Further out towards the devices, you have more and more things — tens or hundreds of thousands of PoPs to coordinate — while further in towards the cloud it's usually in the tens, or up to thousands, of nodes to coordinate. Further out, you usually have more unreliable networks and hardware, while further in, you can usually trust the networks; in the public clouds today, the hyperscalers, they're usually very reliable, comparatively at least, and the same holds for hardware. Further out, you have more limited resources and computational power, which of course influences the way you design and think about these things, while further in towards the cloud, you have vast resources and compute power. Further out lends itself more to real-time, low-latency processing, as we talked about — you can compute directly out there and return the answer immediately — while further in, you have more batch-oriented, high-latency processing. Further out, you usually have weaker consistency guarantees; you're essentially forced to have a model that can work with weaker consistency, while further in, we're spoiled by being able to put things in a strongly consistent database and lean on strong consistency. Further out, we can take local decisions, faster decisions, but also less accurate decisions, because there's less computational power and we usually want to return answers in real time.
Further in towards the cloud, we can look more globally; we can make decisions on a global data set. It's definitely slower, but better in the sense that we can take more intelligent decisions. So often I see a hybrid approach, where we get a reasonably good decision back to the user immediately, while we also channel all the data back to the cloud for something more in line with batch processing — it takes more time, does more thorough analysis — and once we've reached some sort of intelligence, we funnel it back out to the edge, where it can influence the edge services out there. Further out is definitely more resilient and available from the perspective of the system, or really from the perspective of the user. That said, the hardware is less reliable, so that definitely affects the design, but having data, compute, and the user in the same place lends itself to more resilient and more available systems — and conversely, things are less resilient and available from the perspective of the user out there if you put them in the cloud. Further out also requires more fine-grained data replication and mobility: compute and state need to move with the user as it moves — think, for example, of autonomous vehicles physically moving across the country; the data often needs to move with you. Further in, you can rely on more coarse-grained data replication and more traditional ways of thinking about and designing the application. So what do we need to tackle all of these very different requirements?
I think one of the cornerstones in the way I've always thought about software comes from the actor model, in which you have autonomous, self-organizing components — we call them actors — and I think that's a model that works very well out on the edge. We need physical co-location of data, compute, and user, usually abstracted by these autonomous, self-organizing, actor-like components. We also need intelligent and adaptive placement of data and compute; we need some intelligence in knowing where data should ideally be — if possible, predicting where the user will be and putting state right there in advance. That's not always possible, and it can sometimes be very hard to do, but at least we should do it with very low latency, on demand. There's a lot of very interesting research that Martin Kleppmann and many others are doing around local-first software, in which you invert the way you look at it: the system should work completely fine locally, and if we happen to have a cloud that we can use sometimes, or all the time, then we do that — but we don't try to be dependent on always using a database over there or some cloud service over here. We can function fine inside factories, for example, or inside stores, and so on. There's a lot of stuff going on there; I encourage you to look into it more. And of course, we need fine-grained, adaptive replication the further out we are on the edge.
We need to be more and more fine-grained: able to replicate selectively, not replicate everything everywhere — that simply doesn't work, simply doesn't scale — and ideally do it in an adaptive way, so the system can learn and optimize itself as it's being used. There's also no such thing as one-size-fits-all when it comes to consistency. We need tools for eventual consistency, causal consistency, and strong consistency, and we need them to work together in concert in good ways. The way I view it — and the way we've solved it in Kalix — is that we have options for what we call state models, or shapes of your data. Each state model can be tied to a specific consistency model, a specific replication model, and so on: a slightly more high-level way of reasoning about replication and consistency, by just looking at what my state model means and what kind of semantics it has. And of course, end-to-end guarantees: we need the system to take full responsibility, all the way. Naturally, that's very hard if you as a developer have to stitch together multiple different pieces that you don't own, that you perhaps just downloaded or bought. We need platforms that can really take responsibility for end-to-end guarantees, for the SLAs. So my vision is that what we need is what I call the cloud-to-edge data plane: an abstraction, or platform, that allows us to build applications for this cloud-to-edge continuum. Because it really is a continuum, the way I view it. Whether you run in the cloud or at the edge should not be a design decision — you definitely don't want to hardcode that — and it shouldn't be a development decision.
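As one concrete example of a state model with eventual-consistency semantics, here is a minimal grow-only counter (G-Counter), the classic CRDT: replicas update locally without coordination and converge when they merge. This is a generic, from-scratch illustration of the idea, not the Kalix API; the class and replica names are invented for the sketch.

```python
class GCounter:
    """Grow-only counter CRDT: each replica counts its own increments, and
    merge takes the per-replica maximum, so all replicas converge to the
    same value regardless of the order in which they exchange state."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}          # replica_id -> increments observed from it

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Commutative, associative, idempotent: safe to apply in any order.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)
```

Two edge replicas can both accept writes while partitioned and still agree on the total once they have gossiped their state to each other.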
It should actually not even be a deployment decision; ideally it should be a runtime decision, but it should at least be a deployment decision, where you can choose to say: I want to put this thing over here, and that thing over there. As always, it's really hard to predict how the application will be used, and if you get what we used to call slashdotted back in the day — or hit by Black Friday or whatever — the system needs to adaptively scale and work with how it's actually being used today: scale up, scale down, and move. So adaptiveness is extremely important here. It's of course also really important that these services are truly location-transparent, that they can run anywhere, from the public cloud out to thousands of PoPs at the edge. How can we then find the right abstractions? Timothy Keller said that "freedom is not so much the absence of restrictions as finding the right ones, the liberating restrictions." I've learned this often the hard way, but I've also learned to really appreciate constraints, and the fact that constraints can actually be liberating — not just for software, of course, but for everything. It's been something that's guided the way I've been thinking about software for many years, and the way I've been designing products and APIs: the right type of constraints, made first-class, can help you. That's always been a hallmark in building Akka — that the constraints of the network, the constraints we see in distributed systems, should be first-class. And that can be liberating in how we design software.
So if we then try to distill the ultimate programming model — of course there's no such thing, but at least one step towards a better programming model for this cloud-to-edge continuum — I really think there are three main things, and these are the three things that we as developers, in my opinion, can never delegate. First, the data model: how to model your business data, what kind of structure it has, what constraints it has, what guarantees you need, how you want to query it, and so on. Second, the API: how you choose to communicate with the outside world, and how you want to communicate between services, combine them into workflows, and so on. And third, the business logic: as these data models flow, or are being invoked, what is the business logic that makes it all tick, so to speak? How to mine intelligence, how to act and operate on data, how to transform, down-sample, and relay it, or trigger side effects — and ensure we know what is side-effecting and what is not. How to use workflows and communication patterns like point-to-point, pub-sub, streaming, and broadcast — and how to pull it all together, the data model, the API, and the business logic, into one single programming model: I think that's really the key here. That should be all we have to focus on, these three things; the rest can, and in my opinion should, be fully managed and fully automated by the underlying platform. And that leads me to Kalix. This is really the task that we took on about three years ago, and that we've been working on since; we launched it in May.
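To make those three responsibilities concrete, here is a toy sketch in plain Python — explicitly not Kalix code; all names are invented for illustration. The data model is a cart keyed by a user ID (the entity key), the "API" is the single command the entity accepts, and the business logic is what that handler does with the state. The router stands in for what a platform would do for you: route each command to the entity owning its key.

```python
class ShoppingCart:
    """Data model: cart state, keyed by the user ID (the entity key)."""

    def __init__(self, user_id):
        self.user_id = user_id
        self.items = {}           # product_id -> quantity

    # "API": the one command this entity accepts.
    def add_line_item(self, product_id, quantity=1):
        # Business logic: validate, then update state.
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self.items[product_id] = self.items.get(product_id, 0) + quantity


class EntityRouter:
    """The platform's job (sketched): route each command to the entity
    that owns its key, creating the entity on demand."""

    def __init__(self):
        self.entities = {}        # entity key -> entity instance

    def add_line_item(self, user_id, product_id, quantity=1):
        cart = self.entities.setdefault(user_id, ShoppingCart(user_id))
        cart.add_line_item(product_id, quantity)
        return dict(cart.items)
```

Everything below the command handler — persistence, replication, placement, scaling — is exactly the part the talk argues the platform should own.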
Kalix is really trying to be this fully managed developer PaaS for building real-time, event-driven, cloud-native and edge-native applications. We're trying to look beyond the current state of the art of serverless today, not just abstracting what Function-as-a-Service does, but actually abstracting away all infrastructure into a single declarative programming model. That means you don't have to think about the database anymore — you never see the database, you never see the event broker, no caches, no API gateways, no service meshes; you can declaratively configure security, and so on. And it's fully polyglot. Lightbend has a history of building tools for the JVM, with Java and Scala — Play Framework, Akka, and so on — but Kalix is fully polyglot; you can use it from many different languages. The ones we officially support right now, only about four months after the launch, are Java, JavaScript, TypeScript, and Scala, but there are SDKs in many other languages, and as there's need and demand we will definitely add more. It really tries to unify cloud-native and edge-native development into one single programming model, abstracting enough for the runtime to take the decisions on how to most efficiently move data around, and allowing you, through the state models I talked about, to define at a high level the constraints you want on that data in terms of consistency and data integrity guarantees. It's reactive at its core — low latency, high throughput — and it's all running on Akka, gRPC, and Kubernetes. It really embodies this vertical integration as a service: in Kalix, we abstract away everything but the business logic.
And, as I said, you also own the data modeling and the API definitions, but that's all you need to do. The rest is on us; it's fully managed, and we operate everything for you. So how do you build a service? It's three steps: describe your API, define your state model, and finally write your business logic. The sample I'm going to show you uses the declarative, API-first, contract-first SDK. We are also working on a code-first version; the first one to come out is for Spring developers, where we use Spring annotations to define things declaratively, but right in the code. Since that hasn't been released yet, I'm going to show you the version that is based on Protobuf for defining the schema, but the same ideas hold: you first define your API and your data. In this example, let's create a simple shopping cart. I'm only going to show you some of the more important pieces of the code, so you get the idea of how to work with it. To define the events and the data model, we start by defining an AddLineItem event. There's one key annotation, as it's called in Protobuf language, which is the entity key: we have a user ID field, and we tag it with the entity key annotation. This then becomes your primary key, which is used for routing, for querying, for sharding, and many other things. Apart from that, we just have the product ID. Then we define an API that makes use of this event: a service ShoppingCart with one single RPC method that accepts an AddLineItem event and returns empty; it doesn't really do anything with the return type. And as you can see here, we have the ability to add optional annotations; here we add a Google API annotation.
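To make this concrete, here is a sketch of what such a Protobuf definition might look like. The annotation import path and the option name `(kalix.field).entity_key` follow the style described in the talk, but the exact names, as well as the message and service names, are illustrative assumptions rather than the verbatim code from the demo:

```protobuf
// Hypothetical sketch of the event and service definition described above.
syntax = "proto3";

package shoppingcart;

import "google/protobuf/empty.proto";
import "kalix/annotations.proto"; // assumed location of the Kalix annotations

message AddLineItem {
  // Tagged as the entity key: this becomes the primary key used for
  // routing, sharding, and querying.
  string user_id = 1 [(kalix.field).entity_key = true];
  string product_id = 2;
}

service ShoppingCart {
  // A single RPC method that accepts the event; nothing is done
  // with the return type.
  rpc AddItem(AddLineItem) returns (google.protobuf.Empty);
}
```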
That annotation allows us to attach a POST URI to the method, which means it will automatically generate an HTTP endpoint. By default it will also generate a gRPC endpoint, since this is Protobuf and it goes hand in hand with gRPC, but it will additionally generate an HTTP endpoint. Then we also add options for eventing, in which we can define the in and the out. The in reads from the event log, which here we define as the shopping-cart event log, and for the out we define a topic, shopping-cart events. That's essentially everything we need to do when it comes to the API. Of course, there might be more HTTP and gRPC endpoints that you want to create, but the same logic holds. The second thing we do is define a state model: the shape of our domain data. It's extremely important that you take a lot of time to define your domain data carefully, but once you've thought about what it should look like, you can choose which state model it should behave according to, so to speak. Here we have said that it should be of the type value entity, which is our way of saying that it should be a key-value type. But we can change that to an event-sourced entity by changing one single line of code. And the third state model, which I haven't talked about yet, is a data-structure type backed by CRDTs, which are a very efficient and very reliable and available way of replicating data: it is eventually consistent, but it is ensured to always become consistent eventually, so to speak.
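As a sketch, the HTTP annotation, the eventing options, and the state-model choice might look roughly like this. The option names follow the Kalix and Google API annotation style referenced in the talk, but the concrete entity name, topic name, and URI path are my own illustrative assumptions:

```protobuf
// Hypothetical sketch of the annotations described above.
service ShoppingCart {
  // Choosing the state model: switching from a value entity (key-value)
  // to an event-sourced entity is meant to be a one-line change here.
  option (kalix.codegen) = {
    value_entity: { name: "ShoppingCart" }
  };

  rpc AddItem(AddLineItem) returns (google.protobuf.Empty) {
    // Generates an HTTP POST endpoint in addition to the gRPC one.
    option (google.api.http) = {
      post: "/cart/{user_id}/items/add"
      body: "*"
    };
    // Eventing: "in" reads from the shopping-cart event log,
    // "out" publishes to a shopping-cart-events topic.
    option (kalix.method).eventing.in = { value_entity: "shopping-cart" };
    option (kalix.method).eventing.out = { topic: "shopping-cart-events" };
  }
}
```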
Then we also have the option of defining how we should query our data. We have the concept of views here: we can add a gRPC method, say GetCarts, that returns a stream of carts, and say that when we query the data model, it should be served by a materialized, streamable, adaptive view. You do that by simply defining a select statement. It's not ANSI SQL, but it's similar to SQL, and it gives you the ability to query your domain model. Here we say SELECT * FROM carts WHERE user_id = :user_id, and that user ID is the entity key we saw earlier. We also have the option of adding an HTTP annotation for this; in this case we define it as a GET at /carts, which means you can then get this stream from that URL. We're also currently working on supporting joins, which is one of the most requested features, but we haven't released that yet. So now we've defined the API and the data model. The third thing is to write our business logic, and here you can choose whatever favorite language you might have; I chose to show some JavaScript code. The only thing we need to do is define a business-logic function, which is essentially one line or a couple of lines of code. What's really interesting, if you look at this addItem function, is that it has three different arguments, and those are things that are injected into the function as needed. When an event arrives, it's injected into this function and the function is invoked, in this case with an AddLineItem event. But we also inject the state. The state is outsourced, managed on behalf of the function, just like communication; I'll talk more about that in just a second.
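A sketch of what such a view definition might look like; the `(kalix.method).view.query` option name and the request message are assumptions in the spirit of the talk, not the exact demo code:

```protobuf
// Hypothetical sketch of the view described above: a materialized,
// streamable view queried with a SQL-like select statement.
service ShoppingCartView {
  rpc GetCarts(GetCartsRequest) returns (stream Cart) {
    option (kalix.method).view.query = {
      query: "SELECT * FROM carts WHERE user_id = :user_id"
    };
    // Optional HTTP annotation: expose the stream as a GET endpoint.
    option (google.api.http) = {
      get: "/carts/{user_id}"
    };
  }
}

message GetCartsRequest {
  string user_id = 1;
}
```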
Eventing is managed on behalf of the function, and we also manage state on behalf of the function and inject it accordingly, only when there is new state available. You always have the latest state without having to go and poll for it all the time, and by doing so we can ensure that it's always as efficient as absolutely possible. The function has no responsibility to think about communication or state management; both are just injected into the function on an as-needed basis. We also always inject the context, for you to do additional things within the Kalix environment. That's essentially it; it's extremely simple. One of the key innovations, and I want to talk a little bit more about this, is that, just as Functions-as-a-Service allows you to fully abstract over communication, you don't care how the event arrived at your function: it's just injected, and you're invoked whenever an incoming event triggers the function. Once you're done with your business logic and have performed some sort of action, you just emit your event and you're done. That's all there is to it; it's not your responsibility to care at all about how events are persisted, how they're relayed, and so on. That is all outsourced to the platform. We do exactly the same thing with state. State is injected into the function, always the latest one, always as lazily as absolutely possible, but always correct. And once the user function is done, it doesn't need to do anything; as you saw, it doesn't even need to return the state. The state is automatically taken care of for you: replicated if needed, stored on disk if needed, depending on whether it's been updated or not.
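To illustrate the injected-state pattern without depending on the actual Kalix SDK, here is a small standalone JavaScript sketch. The "runtime" below is a toy stand-in for the platform, and all names are illustrative; the point is only the shape of the contract: the business logic is a function of (command, state), and persistence is handled entirely outside of it.

```javascript
// Business logic: a pure function of (command, current state) -> new state.
// It never touches a database; state is injected and the result is
// persisted on its behalf.
function addItem(addLineItem, cart) {
  const state = cart ?? { userId: addLineItem.userId, items: [] };
  return {
    ...state,
    items: [...state.items, { productId: addLineItem.productId }],
  };
}

// A toy "runtime" that manages state on behalf of the function:
// it injects the latest state per entity key and stores the result.
function makeRuntime(handler) {
  const store = new Map(); // keyed by the entity key (user_id)
  return {
    handle(command) {
      const next = handler(command, store.get(command.userId));
      store.set(command.userId, next); // persistence/replication would happen here
      return next;
    },
  };
}

const cartService = makeRuntime(addItem);
cartService.handle({ userId: "alice", productId: "akka-tshirt" });
const cart = cartService.handle({ userId: "alice", productId: "kalix-mug" });
console.log(cart.items.length); // 2
```

The handler stays trivially testable: it is just a function, with no SDK, no I/O, and no knowledge of where its state lives.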
One of the reasons why I think this is so important is that the function is a black box to the runtime. The runtime has no idea what's going on inside the function, because the developer writes it and uploads it, or pushes it, into the platform. This means that if the function is responsible for managing database access, it's really, really hard to automate database operations, replication, caching, and all of these things, because it's really hard for the runtime to understand the intention of each data access. For example: is this operation a read or a write? Can it be cached safely? Can consistency sometimes be relaxed, or is strong consistency always needed? Can operations proceed during partitions and failures, or do we need to stop and reconcile? The runtime can't understand these things if it can't see into the function, and this is why we outsource them: the runtime can then manage these concerns and take better decisions automatically. This becomes even more important as we move to the edge, where you need radically different ways of optimizing. For example, if write operations are really fast but read operations are really slow, we can make sure to add more memory to the services, the database, and the data management. If we know that we're always reading immutable values, we can safely cache them. And if we know that writes must be serializable, we can use sharding so that we have a single writer per service, per entity.
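The point about declared intent can be sketched in a few lines of standalone JavaScript. This is a toy illustration, not Kalix code: when an operation's access pattern is declared (here, whether the value is immutable), the runtime can cache it safely; an opaque function that does its own database access gives it no such option.

```javascript
// Toy illustration: a runtime that can see declared data-access intent
// can cache safely; a black-box function gives it no such option.
function makeStore(loadFn) {
  const cache = new Map();
  let loads = 0;
  return {
    loads: () => loads,
    // The caller declares whether the value is immutable; only then
    // may the runtime serve it from cache.
    read(key, { immutable }) {
      if (immutable && cache.has(key)) return cache.get(key);
      loads += 1;
      const value = loadFn(key);
      if (immutable) cache.set(key, value);
      return value;
    },
  };
}

const store = makeStore((key) => ({ key })); // stand-in for a database read
store.read("product-42", { immutable: true });
store.read("product-42", { immutable: true }); // served from cache
store.read("cart-alice", { immutable: false });
store.read("cart-alice", { immutable: false }); // must hit the database again
console.log(store.loads()); // 3
```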
So, in a way, we want to constrain database access using well-known patterns that we can understand and control, and also look across more than one function. If the platform manages all these functions, and all state management is externalized, then we can look across all of them and optimize database access more holistically, more globally, or at least across subsystems, for more efficiency. What does it look like under the hood? It all runs on Kubernetes. We have what is called our execution cluster, which can run in many different cloud providers. The execution cluster is where your code is running, wrapped up as a project. In a project you have services, and each service is composed of two different parts. One is what we call the state proxy, or sometimes the Kalix proxy sidecar, and the important thing is that it always sits right next to the user code. Let me show you how it all works together. When a request comes in, from the internet, from another service, or from somewhere else, it first hits, in this case, Linkerd, and from there it's relayed to the right state proxy, depending on the entity key coming in. The request always hits the state proxy first; it never talks directly to the user code. So the state proxy is truly a proxy that always sits in front of the user code and manages communication and state management on behalf of the user. I don't want to go too much into detail, but all these state proxies together form an Akka cluster, which serves the same purpose as the node ring that Dynamo or Cassandra form.
We use epidemic gossiping to make sure that everything converges, and Akka Cluster Sharding to allocate all the state in the most efficient way. The important thing is that it's this network of state proxies that together can manage the user code, each individual service, as efficiently as absolutely possible. Depending on where it's running, in the cloud or closer to the edge, it can apply different types of replication strategies, within the constraints of how you defined your data model, which is of course key here. So, in summary: I don't want this to sound like a rant. Cloud computing and cloud infrastructure are amazing today. I'm blown away; if I try to put myself in my own shoes 10 years back, I wouldn't have believed it. But it's still too complex for the pace that most companies want to move at. Serverless is extremely promising, but it currently falls short, and that's why we created Kalix, to try to push it forward and make it really live up to what I think are its true promises, even though we're not there yet, of course; we're still working on it. We need to continue to climb this ladder of abstractions: automate as much as possible, think about less, and focus on delivering value to our customers. We don't want to be down in the weeds, staying up at night wearing beepers and all of that; we just want to focus on building value for our customers. And to do so, I think we really need to embrace vertical integration, and not end up with an integration project on our hands to maintain indefinitely.
Edge computing is, I think, the natural evolution of cloud computing, and I really see it as a continuum. It's already here, and it opens up a world of possibilities, but also challenges; of course, opportunities go hand in hand with challenges, since there are always trade-offs. I think we need to start seeing it as a cloud-to-edge continuum in which where we choose to deploy, and which tools we use to build the application, are completely orthogonal. The tools shouldn't dictate where we deploy, because we might not even know that yet: we might build a cloud application today, but as the edge infrastructure builds out, we might later want to run it closer to the customer, and we don't want to rewrite it, or patch it, or write another system that we then need to start hooking together. Ideally, the platform should support us in stretching the application out towards the edge as we feel comfortable doing so. To do that, we need to rethink what is absolutely necessary and what can be delegated, and I think we need a new programming model and developer experience to serve this cloud-edge continuum. We landed on your data, expressed through data shapes and state models, your API, and your business logic; the rest should really not be any concern for the developer. The rest should be on the platform, the cloud providers, and the products, pulling this together for the developer in a fully managed way.
And so Kalix is our modest contribution to this. Kalix is really here to help you build these types of applications. It's polyglot, real-time, event-driven, and reactive; it's based on 13 years of building these types of systems for clients, but wrapped up in a very simple Functions-as-a-Service model, both stateful and stateless, and in a way fully serverless. You can use it in the cloud today. We are working on extending it to the edge; we're not fully there yet, but I think we've landed on a model that more or less works as an extension further out to the edge, and I think we will be fully there shortly, so to speak. But you can definitely use it in the cloud today. So go to kalix.io, sign up for our pay-as-you-go model, and let us know what you think, what you need, and where you think we're lacking capabilities. We're very excited to hear from you, and we're very excited about where we're going; I think we're onto something. That said, I'd love to hear from you and learn from people out there, and whether you see the world the same way I do. So, that was what I had to share today. We're almost at the top of the hour; I hope I didn't make you all fall asleep. Just reach out to me or to us at Lightbend if you have any questions; you can find me at jonas@lightbend.com, or on Twitter, etc. Okay, thanks. Have a great day or evening, depending on where you are.