Hello everyone, this is Jan from Google. He's part of the Kubernetes track at this conference — apparently about half of this conference is about that — and he's going to talk about the future involving AI and Kubernetes, or something like that; it was kept quite vague. There you go. Everyone say hi. Enjoy.

Yeah, it's very vague, and that's actually on purpose, because I had the feeling there are already pretty cool Kubernetes talks at this conference, but I was missing a certain type of talk that is a little bit more architectural and talks a little bit more about the ideas. So this is not going to be as fancy as Philippe's stuff, and not as humorous, but maybe it gives you a few ideas around Kubernetes.

Okay, let's dive right in, because we're a little bit behind time. Has anyone been to my BigQuery workshop this morning? Awesome, cool, thank you very much. For those who haven't been, a very quick intro to what BigQuery is: BigQuery is Google's data warehouse — you've just seen Philippe working with it. What I like about BigQuery is that it brings you back to the original spirit of SQL: one unified language with which you can access any type of data and really describe the insight you want to have. That's really beautiful if you're coming from a world where you have to manage database connection strings and the like. And I was wondering: what would we have to do to get the same kind of expressiveness as SQL, and a magic engine like BigQuery, for code — for applications? How would we have to write applications so that it becomes as easy as writing SQL on BigQuery? That's the premise of this talk: how could we use Kubernetes to arrive at something like that? Spoiler alert: we're not there yet, but maybe I can give you a few ideas, and maybe ideas for contributing to Kubernetes in the future.

Okay, so what's the problem? Why can't we do that easily? Well, I'm a little bit old — I've seen data and logic being coupled. It wasn't a good idea; it didn't work. Lotus Notes and so on: great systems, I actually liked them, but they didn't scale very well. So that's the wrong route to go. Why is it wrong? For me, the biggest difference — and there is some nice research by Nicole Forsgren on this — is that development teams are most effective when they run in a DevOps mode, around one common problem, ideally domain-driven: one small core team really working, iterating, and prototyping on code. That's very different from the way we work with data. With data we write a query, run it against the data, look at the result — it's much more back and forth, like a conversation — whereas code is built much more collaboratively. So we need to solve that problem.

How do we solve it? Open source to the rescue. We've seen really great developments in the open source community that I think point us in the right direction. I guess most of you have used GitHub. When GitHub introduced the fork it seemed fairly unremarkable, but over the years we learned that social coding is actually a pretty great idea, and it even shaped the culture of whole communities when forks were spun off and merged back — the Node.js story, for instance; I really enjoyed following that. That's a nice way of building applications. If you go one step further, you arrive at systems like Glitch.
I'm not sure if anyone here knows Glitch. For them, the fork is the root of everything: everything builds on a fork again — they call them remixes. It's really great for beginner coders, because you start with code right in the browser, build on someone else's work, and combine it again in a really fluid way. That's really, really nice. Unfortunately, we can't quite code that way, because we also need to be production-ready — we can't constantly break production for our users. At Google I worked in a somewhat SRE-like role, and I can tell you on-call is not fun if you're being paged five times a night — which luckily never happened to me at Google, but I did get paged a few times. So we want to be production-ready. We need to take this idea of forking code and make it resilient, scalable, and really unified.

What does that actually mean? Very simply, it means infrastructure as code — Seth is going to talk about that tomorrow. Basically, if we have everything in code, then our systems are predictable. But "everything" is really hard, because everything means not only the definition of our infrastructure: it means our database should be versioned as well, and even our monitoring and our documentation — everything has to be versioned alongside, in the same tree. That's actually what we do at Google; there's a nice talk on it by Rachel Potvin. We have one code repository that stores all of what I just mentioned, alongside each other, under one head version number. I'm not saying this is the perfect idea for everyone, but it enables some of the methods I just mentioned: working with code in a much more fluid, organic way. And if you do that, you're also automatically kind of cloud native, because you're independent of your underlying infrastructure; you're working from those definitions and no longer assuming things about your production system that aren't defined anywhere except in the head of one person somewhere. That's what Chris was talking about this morning.

Great — so what can we do, and how do we get there? If you look at how systems were integrated a long time ago, it was usually systems talking directly to each other: remote procedure calls, lookup tables, and if you used messaging, then something like pipes. Then we got somewhat looser coupling: static routing, JNDI in Java 1.3, and queues have been around for quite some time. At some point we arrived at a dynamic routing model, and for a while that was the state of the art: you would have an ESB. We still have API management going in that direction, plus all kinds of routers, and that's typically what you still see in most enterprises, because it brings predictability. The problem with this model, though, is that the predictability — the end-to-end behavior of your overall system — is now defined outside of your system, in something you don't really control, and that always brings new problems. I could tell you countless stories where you deploy to production and, oh, suddenly your message bus is configured differently from the one on UAT. Who did that? Oh, I don't know, some other random person. Those kinds of problems occur all the time.
That makes systems less predictable, and with that it decreases your iteration speed, your product development speed, and your remixing flexibility, if you want to call it that. In parallel we saw the development of pub/sub patterns — Kafka and pub/sub in general became a lot more popular — the Hollywood principle, asynchronous APIs, and so on. And in parallel to that — I'm a big Erlang fan, so I have to mention it here — agent and stream systems. These also developed, and they are maybe the most informal of all these models of integration: in a stream-based system everything is an agent, everything is fluid, and that's great. The only problem with these more informal approaches is that the more informal they get, the more complex the middleware they need. Erlang is great, but it really only runs on the Erlang VM. Streams are great, but you need a really rigid platform for them. And while that's perfect if you can rely completely on that platform, very often you'll need to integrate systems outside of it, and then you're back in the loop of "well, it's not actually predictable." Say, for instance, you have a stream-based system — there's really cool work done by Lightbend, the whole Reactive Manifesto stuff — it's really cool, but then you need a database that doesn't run inside it. Suddenly you have something outside of your system that you need to handle, and with that you bring entropy back in.

So how can we solve this? How can we reconcile these approaches in a better way? Google has been doing a bit of that for some time, and I can explain a little about our approach and how it fed into Kubernetes. I'm not sure if you've ever seen this picture: this was one of Google's first rack servers. It's actually in the National Museum of American History. It's called the corkboard server, because the boards are stacked on corkboard. Why is that interesting and relevant? If any of you saw racks in the 90s, they looked very different. A rack in the 90s — back when I was still setting up this kind of stuff myself — would have something like a Sun box down at the bottom, then a few pizza boxes for compute, and then bigger ones that maybe held backups or disks. You'd have all of these what we sometimes call snowflake servers in there. Now consider that this rack is from 1997 or 1999, I think. The basic idea behind it is that from the very beginning, for Google, everything was a software problem. We didn't have different servers for different kinds of workloads; we really only had two types of servers — compute-heavy machines and storage-heavy machines — and to this day that's still the case. All the rest was organized by software. Basically it was just rows of motherboards, which is why everything in the picture looks the same — and it's all industry-standard components, nothing special in there. That inspired how we built our architecture, and there are two main concepts here.

One is containers. In this talk I assume you roughly know what a container is, but if you want the one-liner: it's a very light, VM-like, immutable form of process isolation. You can package something in a container and that container can run anywhere, but in a much lighter way than a VM.
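Just to make that one-liner concrete, here is roughly what "package it once, run it anywhere" looks like on the command line — a minimal sketch, assuming a hypothetical hello-fossasia web server listening on port 8080 (the image name and project are my assumptions, not the actual image from the talk):

```shell
# Package the little web server into an immutable image and push it to a
# registry, so any machine or cluster can pull and run it.
docker build -t gcr.io/<your-project>/hello-fossasia:1.0 .
docker push gcr.io/<your-project>/hello-fossasia:1.0

# The same image now runs unchanged on a laptop, a Raspberry Pi,
# a CI runner, or a cloud cluster:
docker run --rm -p 8080:8080 gcr.io/<your-project>/hello-fossasia:1.0
```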
I'm not saying VMs are bad — VMs are sometimes really great — but containers are a bit more fluid in this whole idea of agility. Now, unfortunately, if you have all these little lemmings running around — containers that die constantly and spread out everywhere — you need something to manage that. Google's approach to managing it was Borg. Borg is a cluster management system, and the biggest difference between Borg and other systems is that it's declarative. In a typical cluster management system you say very precisely: these are the servers, this is where it runs, and you script the deployment — I like Ansible, for instance, so you'd have a very explicit script of how a deployment works. Borg is the other way around. Borg is really more of a configuration store. Borg says: dear cluster, we have a new application, say "hi application"; this application would like to run on 70 servers; it needs a quarter CPU and one gigabyte of RAM on each; dear servers, please decide democratically among yourselves who takes up this load. That's a very different approach: it's not a service call — the servers constantly go back to the central configuration system and say, hey, I have spare resources now, what can I do next? And that allows very horizontal scale, because in the end it's exactly what we just talked about with agent and stream systems: Borg is itself an agent-based system. A minimal Kubernetes-flavored sketch of such a declarative spec follows after this part.

That inspired Kubernetes. Kubernetes is, if you want, the open-source third iteration of Borg, and it's also what runs GKE, the Google Kubernetes Engine. And that brings us back to the title of my talk. I skipped over it at the beginning, but maybe a few of you were wondering what "choreography" actually means — such a complicated word; English is not my native language, I can barely pronounce it. The contrast is this: the ESBs always talk about orchestration. Orchestration means a conductor standing in front of an orchestra and telling everyone what to do. That scales well up to a point, but at some point it doesn't scale anymore. And as I just said, Borg is the opposite: Borg defines rules, and then everyone picks up what they think, according to those rules, is the best task to do next. That's what you call choreography — a big dance where everyone knows what the dance is and hears the rhythm of the music, and that's how the system acts. What you get — and I really love this phrase from the paper by Burns and others, "Borg, Omega, and Kubernetes," which traces this history — is desired emergent behavior. Emergence is something like ants: all the ants together form an organism, and the whole system becomes a lot more than its parts. That's the whole idea behind Kubernetes, and it's actually the most important message I want to bring across today: think of Kubernetes in that way — not as a process-isolation tool, and not as a cluster management system that you just drop into whatever you currently have, but as a way to move your applications toward this fluid way of building systems.
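Coming back to that declarative "desired state" idea: below is a minimal sketch of how the same kind of request reads in Kubernetes terms. The names and image are assumptions for illustration; the point is that you describe what should be true, and the cluster continuously works out where to run it.

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hi-application
spec:
  replicas: 70                    # "dear cluster, I'd like 70 copies of this"
  selector:
    matchLabels:
      app: hi-application
  template:
    metadata:
      labels:
        app: hi-application
    spec:
      containers:
      - name: hi
        image: gcr.io/<your-project>/hi-application:1.0
        resources:
          requests:
            cpu: 250m             # a quarter CPU per copy
            memory: 1Gi           # and one gigabyte of RAM per copy
EOF
```

Nothing in there says which machines to use; the scheduler and the nodes sort that out among themselves, and keep sorting it out as nodes come and go.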
I personally come from a mobile application background, so I used to work mostly on cross-platform mobile applications. Doing cross-platform stuff very early is where you saw this whole stream idea really coming in, because you can't just use cookie-based sessions anymore: you have different applications that need to pass on session or credential information, and at that point you realize quite quickly that it can only work in a distributed, agent-based system where this kind of information is simply part of the system — not a sticky session where, if you're lucky, the load balancer always sends you to the same backend.

Okay, let's go a little bit deeper and see what we can do with that. I'll do a very quick mini demo — let's hope it really does go fast. I thought in the middle of the talk it would be nice to give your eyes a break from the white slides and just show you very quickly what Kubernetes looks like, in case you've never seen it before. Again, I don't want to do a Kubernetes course; other people are much better at that than I am. This — okay, yes, I'll make it a little bigger — is GKE, the Google Kubernetes Engine, and you see here a Kubernetes cluster. A Kubernetes cluster is just machines; in this case it's called fossasia-istio, and you can see it has a cluster size of 5. What's interesting is that Kubernetes, as you may know, runs on all infrastructures — this is not a Google feature. You can have the same cluster defined on your local machine, on a set of Raspberry Pis — if you've seen Kelsey Hightower's book, I can only recommend it; it has an appendix on how to build one out of Raspberry Pis — or inside your CI/CD system. It's all the same; the cluster is always the same, you just change the configuration a little. If I go into this cluster, I see the machines running in it — the nodes.

Okay, so this cluster has capacity, and I'd like to run an application. The application is very simple: just a small web server. I have it hosted in a container registry here — Container Registry comes with Google Cloud — and all it does is serve a page that says "Hello FOSSASIA." I list that — okay, great, the image exists. So let me run that server. I type this command here — and what this command does, I could just as well do by creating a deployment in the UI; Kubernetes is API-driven, so I'm just showing you the command-line way, but there are other ways. What I'm saying here is: dear Kubernetes, run this application from this image, expose port 8080, and give me two replicas of it. It's a very simple command — "run" means deploy and start. I could also first deploy and then start, to do something like a rolling update, and I can also do auto-scaling; all of those concepts are built into Kubernetes, and Kubernetes then figures out how they work on your platform. But let's take the simplest example for now. I type that in, and in parallel I go to Workloads in the UI to see what's happening. Okay, awesome — it has created this deployment. Let me quickly refresh the workloads here — perfect, there it is, it's been deployed. You can see there are other workloads on the cluster running happily on other nodes, and this one is actually already ready. Normally when I do this demo I get to show it pending, but this time it was a little too fast — the application is already running now. Cool.
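For reference, the run step I just typed looks roughly like this — a hedged reconstruction, with the image path and names assumed rather than being the exact ones from the stage:

```shell
# Older kubectl (the era of this talk): one command creates a Deployment
# with two replicas behind it.
kubectl run hello-fossasia \
  --image=gcr.io/<your-project>/hello-fossasia:1.0 \
  --port=8080 --replicas=2

# Newer kubectl splits the same intent into two commands:
#   kubectl create deployment hello-fossasia --image=gcr.io/<your-project>/hello-fossasia:1.0
#   kubectl scale deployment hello-fossasia --replicas=2

# Watch the two replicas get scheduled onto nodes.
kubectl get pods -o wide
```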
Unfortunately I can't access it yet, because it's not on the internet — it doesn't have an IP. So let's go back and expose this application, and since I'm exposing it to the internet, I want it behind a load balancer. What this does is basically tell the Google Cloud Platform: please create a load balancer, take a public IP from your pool of addresses, attach it to the load balancer, and call me back once you have it. Let's refresh our deployment — okay, cool, we have a service created; that's already good. A Service is the Kubernetes name for something that is exposed under a stable, fixed address. And now let's see if we already have an IP — the external IP is still pending, so I have a few more seconds to talk.

Again, what I like here is that you would type the exact same commands on your local machine. It doesn't matter where you get your container from or how big your cluster is — it would all be the same. And not only the same commands: these are the same API calls. kubectl is just a very thin wrapper around the REST API, and you can use that API to build your own UI — there are lots of UIs for it. What's really nice about the Kubernetes open source project is that by now Google is by no means the only contributor anymore: you see lots of other companies actively investing and building super interesting stuff. It's a really big ecosystem, and that's also why it's so nice to follow — in many areas Google doesn't even contribute, because others do it much better, and it's really nice to see how that develops.

Okay, let's try again — perfect, we do have an external IP. Hoping the network lets me through, I'll open it... and it seems the conference network is not letting me through, so let me very quickly check on my phone. Port 8080 is usually not open on networks like this; thinking about it, I could have just exposed it on a regular HTTPS port, but those are the things you usually discover later. Perfect — and here we are. As I said, it's a very simple HTTP server; it's not doing much except saying "Hello FOSSASIA," and, so that you trust me that this is actually deployed on Kubernetes, it outputs the hostname, which should be one of the pods I'm running. Yes — and that's the second pod I just hit. So you see it's very, very simple, and that's basically the core of Kubernetes; you have lots of options beyond that. So let's go back to the presentation and see how that can help us.

Okay, we've seen that this is all very easy, it goes very fast, it's really nice — and now we have a problem: the smaller and more heterogeneously interconnected the services you build, the harder they actually become to discover and observe. This is something super interesting that happened in the microservice community: in the beginning everyone was like, yay, let's build more services, and suddenly everyone was like, whoa, whoa — what are all these services doing here? So it came back to a classical architecture problem, and since I used to be in more tech-lead and architecture kinds of roles, I find that a very interesting problem.
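Before going there — for reference, the expose step from the demo looks roughly like this on the command line (again a sketch; the deployment and service names are assumptions):

```shell
# Ask the platform for a load balancer with a public IP in front of the pods.
kubectl expose deployment hello-fossasia \
  --type=LoadBalancer --port=8080

# The EXTERNAL-IP column shows "pending" until the cloud provider has
# provisioned the load balancer and attached a public address.
kubectl get service hello-fossasia --watch
```

Mapping the service to port 80 or 443 instead would have avoided the blocked-port issue I just ran into on the conference network.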
So let's first see what Kubernetes itself already does. Kubernetes already has some of the best practices baked in — many of them from Google's site reliability engineers — things like rolling rollouts and auto-scaling are already in there, and as I said before, there are already many, many plugins. What we can do now is build something on top of Kubernetes, but on the same layer: we're operating at the infrastructure layer here, so why not build something that pulls more of those architectural concerns into it? We can do that with a so-called service mesh. On the slide I put the "micro" in "(micro)service mesh" in brackets, because it's actually not so much about micro — it's about connecting all the services. Service meshes aren't new, and there are a few around; I've already mentioned Pivotal and the Netflix stack, for instance, which is a very mature one. The difference with Istio, or with service meshes on Kubernetes in general, is that they operate one layer lower. So you can have polyglot applications — everything from your database to your Go application to your Java application can run on it — and none of them really has to know anything about the service mesh. That's what makes them so interesting.

So what does a service mesh do? It basically wraps each of your services with a sidecar — a proxy — and that proxy can do all kinds of things: instrumentation, so you suddenly get very detailed data about your service interactions; security; rollouts; showing features only to certain users — all the things you would otherwise have to code into your application. That now becomes part of your infrastructure — your coded, versioned infrastructure. You can define, in code, that the next release will be rolled out in this rolling-update fashion to these kinds of users, check it into your version control, and just start it. That's really, really nice.

Service meshes also allow much deeper testability. Cindy Sridharan is the guru here, and she calls it real integration testing — real, because you can actually do integration testing in production. You can do fancy stuff like taking live traffic: you don't want to interrupt your users, but you can branch the traffic, direct a copy of it through integration tests, and even do that in a secure way — you don't need to store or access the data, it just flows through. You can replay events that happened in the past. You can also do what she calls step-up testing: instead of just running everything in CI/CD, you make it part of your CI/CD chain — you don't just do a continuous deploy, you do a continuous deploy to 1%, to a canary, and then to 10%. And during those rollouts you actually measure the impact, not only on your infrastructure — whether your CPU or RAM usage goes up — but also on the users: do they experience higher latency? Those metrics become part of your CD process. Again, it's all in code, all in your infrastructure; there's no magic there. A sketch of such a staged traffic split follows below. What this allows you to do — and Julia Evans writes a lot about this — is that it actually forces you to think about SLOs, and SLOs here mean real SLOs.
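Coming back to that step-up rollout idea: here is a hedged sketch of what such a staged traffic split can look like as an Istio VirtualService. Host and subset names are assumptions, and it presumes a DestinationRule that defines a stable and a canary subset:

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-fossasia
spec:
  hosts:
  - hello-fossasia
  http:
  - route:
    - destination:
        host: hello-fossasia
        subset: stable
      weight: 99
    - destination:
        host: hello-fossasia
        subset: canary
      weight: 1          # start by sending 1% of live traffic to the new version
EOF
```

If the latency and error numbers hold, the pipeline bumps the canary weight to 10, then 50, then 100 — and rolls back if they don't.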
To stay with SLOs for a moment: in my past, an SLO was typically something buried in an architecture document that said, by the way, we need 99.9% uptime — and that was more or less the SLO. If you were lucky, the customer would define something like: the 95th percentile of requests needs to reach the user within one second. Now, with this deep level of observability, you can have much deeper metrics. You can say: I want my power users to see the request on the transaction screen come back faster than the request on the report screen. And you can actually measure that and, again, put it into your pipeline.

What that does — and this is a nice effect; Bridget Kromhout wrote a pretty cool piece about it, on how complex sociotechnical systems are hard, or "containers will not fix your broken culture," I love that title — is remind you that just using containers or Kubernetes doesn't actually solve your application problem. What you want is to go in this direction and build an application that has defined metrics and a defined infrastructure and that actually solves the user's problem. And suddenly you can start arguing about it, because you have the metrics, you have the numbers to take to your stakeholders, to your business, and say: well, I actually know what that costs. You want that feature? Cool — but do you know it costs us 100 CPUs every month? Those kinds of arguments become a lot more interesting, and that's something we also do internally at Google. That's really, really nice.

So what are the benefits? I found this nice quote in a book by Molly Wright Steenson, from Kent Beck, who once said that patterns are rearrangements of power in the design process. What that means is: if you own the patterns — if you define how we code, if you define the building blocks of our application — then you define the product. And in the end that's all we want to do. I don't know about you, but I'm a software engineer, and I really like to be on the same level as my stakeholders and my product manager; I want to have a real argument about business value. That's what these patterns baked into something like Istio, and the architectural abilities that come with it, allow you to do.

You can start growing evolutionarily. That's the reason I removed the brackets from the word "micro": it's something I never liked about the word microservice, that it somehow prescribes the size. I've seen very big services that were just one very cohesive business problem, but super complex. So let them be complex, and start refactoring and splitting them over time. You can do that because you now define the API and you define which services sit behind that API; your consumers might not even realize that they're suddenly served by three or four services, because it still goes through the same API. You can start playing with semantics — events versus calls. Right now you always have to choose whether you use an RPC model, where you call something, or whether you send an event to a bus. Well, maybe you can try both and see what works better. Or you can integrate service catalogs, service brokers, external integrations. And if you do that, you can also bridge the boundary to functions. You might have noticed that I haven't mentioned functions yet in this talk — why is that?
In case you haven't heard of them: functions, sometimes called serverless functions, are almost like a PaaS — almost a level higher than a PaaS. They're something you call, and you don't care about the deployment at all. That's great — for IoT use cases, for instance, functions are really nice — but in larger product development you typically find that you need some kind of control over your architecture, while still wanting the advantages of functions. Well, now you can combine the two. There are function frameworks built on top of Kubernetes and on top of Istio that give you that flexibility, but inside one architecture, where you actually know which part is doing what. You don't need two separate worlds where you change teams and suddenly there's a red flag or a blue flag on your desk for or against functions. Right now not every service mesh supports this in a really nice way, but there is super interesting work being done — projects like Kubeless and riff, among others, are playing with breaking up these semantics — and if you want to have a look at that, do. You might have noticed I'm using a lot of references in this talk; that's because so much is happening that it's often better to just follow the people involved on Twitter or on their blogs.

As I said on the slide before, you can also start really observing behavior — Cindy Sridharan, for instance, writes a lot about that in her blog posts on observability — because you can start looking at services on a trace level. I remember when, maybe five years ago, we started including trace IDs in our requests: it was such a simple hack, but suddenly you had real insight into how your users were actually using the system. That was something we never had before, and it drives product decisions and reasoning about products a lot better than reasoning about some abstract SLAs. And if you have all of that, last but not least — and this is where I come back to my own troubleshooting experience — you can now reconcile your application metrics with your infrastructure metrics, like load, CPU, and memory, because in the end someone is responsible for those. There will always be some form of more ops-focused people, but now they can go to the application team and say: hey, excuse me, you're using a lot of RAM and a lot of CPU, I know it's exactly you, I can even show you the piece of your code where it happens — and by the way, I know how to make it better. That's ideally what you want, and I can only say again: this actually happens a lot internally. If you want to know more about that, there are some really cool talks by Liz Fong-Jones — I can only recommend listening to pretty much anything she says — with really nice insights into the internal tools Google uses to do those kinds of drill-downs and find misbehaving code that maybe only misbehaves under a certain condition, in a certain data center, under a certain type of load. That's really nice.
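As an aside on the trace-ID point from a moment ago: the hack itself is tiny. A hedged sketch — the exact header depends on your stack, but x-request-id and the B3 tracing headers are what Envoy-based meshes commonly generate and propagate:

```shell
# Tag a request with an ID at the edge...
curl -H "x-request-id: $(uuidgen)" http://<external-ip>:8080/

# ...and have every service log and forward that header, so one user
# interaction can be followed across all the services it touched.
```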
Cool — that's actually my main slide, so let's wrap up a little. What can we do beyond that? If we have all this information, we can start reasoning on an architectural level. This, for instance, is a service graph produced by Istio, a service mesh built on Kubernetes. It generates the relationships between your services based on the traffic between them — not based on what you defined somewhere or what someone claims. This process is sometimes called process mining, and it shows you how your system is actually used, which can be pretty surprising. All of us know the feeling: wow, so many people are using the app — what are they doing with it? And then you look at the logs and they're using it for something completely different from what you thought. These systems now give you that essentially for free, and you even see it live; you can break it down by users and see which types of users perform which kinds of interactions. With that, your ops role — we call them SREs — becomes much more architect than ops. I used to be an architect, and I always had the problem that I felt I had no grip on the code, because in old-fashioned companies the architect is the guy — or girl — with the PowerPoint, and then you have to go ask the coders, and they just say: oh, I don't want to use that library, and I don't want those connections, sorry. Now you actually have that handle: you're coming from the ops perspective, from the architecture perspective, and suddenly you can actually argue with the developers and really change things — really say, this is how I want to change it. And that goes back a little to what Philippe showed before with GitHub: if you have access to all of that code, you can actually change it.

I'm going a little faster now. Here you see some more references — Charity Majors is the one to look up for this kind of observing of events and mapping them to domain logic; she has some awesome talks on that as well. And just one last statement, because I mentioned AI at the beginning: AI now becomes a natural collaborator, of course. You have all this data, so obviously you can route it into a statistical model. There's the well-known example that Google saved about 40 percent of its data-center cooling energy — and that's a lot of energy — by feeding data-center usage data into a machine learning model that came back with suggestions for the cooling and power configuration. So why not do that with your software? Why not look at which services interact with each other, whether there are users who always end up at this one node — and expose that to the user directly instead of sending them through five steps? That's just churn. It's really, really interesting.

So, to summarize: what you want is domain-driven, polyglot (multiple languages), evolutionary, mixed-semantics development, with all the standards. That's proven to be the most productive way developers work — and by the way, also the happiest; I find it great when I build something that actually has a product impact and I see how users like it. You can suddenly have real quality telemetry on your system. Michael Feathers does a lot of work there, for instance, and there are some awesome analyses of tech debt — they can basically tell you which code is so old that it makes your whole application slow.
There are some awesome, really funny talks where they drill down into who caused the tech debt and how it gets fixed. With service meshes you can focus on risk and user experience rather than on infrastructure or process boundaries. You don't have to just say "I need 99.5% availability"; like in an eventually consistent database, you can say: on this screen, this is how I want the application to behave, on that one it should rather be consistent and fast, and on another one it doesn't matter so much. And as developers you can do cool stuff like automated refactorings, because if your whole application is there in code, everyone can fork it. It can happen that another team comes along and says: hey, you didn't do a good job — I'll take your whole application, including the infrastructure, including the database, everything, deploy it on my own, fix the code for you, and then show you that it's better. That's actually really cool and fun, and in the end it's the true spirit of open source. And with that, thank you very much. Sorry, I don't think we have time for questions — I was looking at the little counter, but it's actually five minutes behind. Yeah, sorry. That's why I reminded you. Yep.