Who here considers themselves a developer or a technical person? All right. So, it's March 6th, that's great. How many of you have heard about Kong? Quite a few, more than I expected. I'll present two case studies, both from here in Singapore, one very small and one massive. So I'll talk about what we're doing with Kong here and some very interesting stuff, particularly for data, machine learning, and the financial markets. Twenty minutes, right? As long as you like. Okay, great.

Hi guys, I'm the founder and CEO of Kong. Originally the company was called Mashape; maybe some of you are familiar with the original name. For seven years we operated the largest API marketplace in the world. About a month ago we rebranded the company to Kong, and we sold the old API marketplace asset to another company. But we've been operating in the API space since 2009. I'm traveling in Asia right now, Tokyo, Hong Kong, Korea, and also Singapore, doing our Asian tour to meet potential customers and current customers before I go back to San Francisco. I was born in Rome and grew up in Italy. At 21 I moved to Silicon Valley to start the company, Mashape, in 2009. We raised funding, we built the company, and in the end I never went back to Italy. Sometimes what you miss is the good food, but...

So let's start. The title is past, present, and future, but really we're going to talk about the API market in general and where we see it evolving. We'll talk about Kong, what it is, the company, Kong open source, and also Kong Enterprise, which is the version for large organizations. Feel free to ask me questions, feel free to stop me, and if you get bored, yell at me.

First of all, how many of you used to work with these technologies? And how many of you are still working with them? In the early days it was the message broker. Then you had the EAI systems, the TIBCOs and WebLogics of the world. Then you had the ESB players. Then you had API management, roughly 2009 to 2016. Each of those represents a different way of brokering information in and out of the organization, and this will keep evolving, but the concept of brokering information between services will always exist. So as a company we want to focus on that flow of information, because that's where the value is for these corporations.

So remember, it was pretty much heavyweight message brokers and ESBs. Then REST got its breakthrough when Twitter published their API in 2008 as a public API platform, and that finally broke the line: now we're going REST, developers are going REST. After that it really got a big spark. That was before Twitter screwed up their own developer platform, but at that point it was really, really good. And as we move forward into microservices thinking, you also have RPC again, like Google's gRPC. How many of you use gRPC or have heard of it? Only one. So, two. gRPC is built on top of HTTP/2, a binary protocol that allows very fast communication, and in microservices architectures I think you will hear more and more about gRPC, not just REST.

So this is the evolution of architecture. If you go all the way to the left, you have a website like eBay was ten years ago. That was the only way to consume software, right?
They had built 1.5 million lines of Java code running on static servers, and the consumer side was only a website, the primary interface. There was no demand yet for anything more dynamic. Then Steve Jobs gets on stage in 2007 and launches the iPhone, and after that every company says, oh, we need a mobile strategy. If you remember 2010, it was mobile, mobile, mobile. Now nobody talks about mobile; everybody takes it for granted. But back then it was mobile, mobile, mobile. So that's the inflection point where there start to be different channels through which employees, partners, and consumers consume software. It's not just a website anymore. You start to have mobile, iPads, tablets. As you keep moving, you have partner platforms, you have IoT, the Samsung fridge, Tesla cars. It becomes way more dynamic, way more fragmented. Then it's all about bots and messaging, and you hear a lot about virtual reality and AR, so new interfaces, new data sets. The way we consume software went from one channel to hundreds, and as you go toward AI there will be a lot of new devices consuming software.

So serving software became very complicated. It started as a static Java WAR file running on a server for ten years. And then something happened: there was a big inflection point. This is Google Trends for "microservices", and it's a good indicator; there's a big spike in demand, people searching for it. You know why there's a spike in 2014? Someone said Docker. Yes, exactly: a t-shirt for the right answer, a t-shirt for courage. It's exactly around one year after Docker. Docker allowed the SOA dream of the 2000s to finally become reality by containerizing applications, and so the microservices trend finally spiked after that.

And here's the interesting thing we discovered: API management as a technology wasn't really spiking. There were five, six, seven vendors, but nobody cared. People started to care as microservices took off, and even then they didn't care about the whole API management suite. What they cared about was the API gateway, which is only the proxy piece. Why? I'll tell you why.

In the eBay era, you had monolithic applications that had to go mobile, and the fastest way to go mobile in 2010 was to wrap an API on top of the monolith and secure it. That's where API management came from, and it was mostly public, external traffic. As software gets broken up into a microservices architecture, the traffic moves from external to internal. Not only that: latency starts to matter. If you had some latency on public traffic, it was fine. Once you go internal, every request counts, because each hop adds latency to your application's overall response. So being lightweight is very important; being cloud-agnostic and platform-agnostic is very important. Before, you were running a server, probably a VM, for ten years. Now you have a multi-cloud strategy, you run containers, you use a container orchestrator. There is a whole different set of needs from there to here.
So that's why the API gateway became mission critical. Before, it wasn't that much; it was something done at the edge. Now it's still at the edge, of course, but it's also internal. So it moved into the mission-critical path. And in a way it looks a lot like going back to the ESB days, but done right: the main focus is internal traffic as well as external, and the solution is distributed and platform-agnostic.

So this is what enterprises are doing. You're going from the chicken to chicken nuggets, while keeping the chicken alive. This is where the Global 5000 are now: "we need to do microservices". We talk to everybody, and nobody knows how to start. There are a lot of different approaches to the process, but the reality is there are so many technologies out there, and they change so quickly, that it's very hard to pick the right one. Think about it: the mainstream technologies that came after containers did not exist five years ago, and five years from now, who knows. It will always be a fast-moving scenario. So most enterprises are in this situation, and they need to keep the chicken alive.

One of the solutions is the ice-cream-scoop strategy. This is your application, and you scoop out different functions or different parts of the software, move them into containers, and stand them up as services, with maybe a gateway in front to keep serving everything together. That's one solution. There's also the atomic-bomb strategy, which some banks are doing, where they start from scratch: a new stack built by R&D with the right technology. They don't try to scoop pieces out of the monolith; they just rebuild everything. We're going to focus on the ice-cream-scoop strategy, which is where API gateways are very valuable.

So let's get into the action. This is how your application looks. Say it's your e-commerce app; it's a very simplistic way of describing an application. You have a bunch of services, customers, orders; you have a load balancer; and you have a client. You start to scoop out: you go into this chicken-to-chicken-nuggets strategy, you decouple all the different services, and you keep going. But you also need logic to wrap those services: you need security, you need authentication, you need rate limiting, you need logging, you need transformations. So you end up with a lot of teams basically building APIs but also rebuilding all this common logic, and you get a lot of code duplication. At a massive organization you might have 10,000 engineers: 7,000 building the core APIs and another 2,000 just re-implementing the common logic around them. It's a very big waste.

What Kong does, as an API gateway, is let you take out all this code duplication and move it into an abstraction layer, and the abstraction layer routes each request to the right service. So in the big picture you save a lot of teams from rebuilding common logic. This is a real mental change in how to design software: when you were running a monolith, this logic was written once, for the one big monolith. Now that you're running microservices, you've got a thousand of them, and that's why this logic gets written a thousand times.
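To make the scooping step concrete: once a piece is carved out of the monolith, you register it behind the gateway so clients keep hitting a single entry point. A minimal sketch, assuming the Kong 0.x Admin API that was current around the time of this talk; the "orders" service name and upstream address are hypothetical:

```bash
# Register a scooped-out "orders" service behind Kong
# (Kong 0.x Admin API on port 8001; names are illustrative).
curl -i -X POST http://localhost:8001/apis/ \
  --data 'name=orders' \
  --data 'uris=/orders' \
  --data 'upstream_url=http://orders-service:3000'

# Clients now go through the gateway's proxy port
# instead of calling the service directly:
curl -i http://localhost:8000/orders
```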
And coming back to that common logic: it's written in many different languages. This one can be Go, this can be Java, this can be Ruby, this can be another team in another part of the world. So it's even more complex. This one runs on Kubernetes, this one on VMware, this one on OpenStack. All this duplication creates the need for an abstraction layer, and that journey is why API gateways started to have that spike in interest; the need simply wasn't there before. Any questions here?

[Audience] I've got a question. In the old architecture you've got the load balancer up front. What's your best practice for scaling the Kong nodes themselves? Are you doing some sort of DNS round robin?

Yes. So you can have a lot of Kong nodes, and you can put NGINX or an AWS load balancer in front of them. There are a few ways to do it, and that's a good practice.

[Audience] Round robin DNS, yeah. I've actually done round robin DNS, and I'm not proud of the setup; I've only done it on the front end. But whatever works best, I guess.

Yeah. And consider that Kong also has its own load balancer.

[Audience] Right, but for the downstream services?

Yes. And service discovery too. Okay.

So this is another view. I don't know if you can read it from far away, but it's a simplistic way of re-describing the process we've gone through in the last five, six slides: here you have the logic duplicated everywhere, and here you abstract the logic into a centralized gateway. How many of you use an API gateway in some shape or form? One, two, three. So you're familiar with the concept. Good.

We always say that running microservices is like running a city. You have firefighters, police, transportation, the subway: a lot of separate entities that need to run together. And it becomes very massive. Think about the city of Singapore and how many people live in it, or Seoul and Tokyo, where I've just been. Running enterprise microservices really is like running a city.

So, a few words about Kong. Kong started in 2015 as an open source project, spun out of the API marketplace. It was built on NGINX, and it's still built on NGINX. It has more than four million downloads; I think by now we're at about five or six million. So it has seen global adoption in just over two years. It's extensible through plugins: over 25 plugins from us and another 100-plus from the community. It has sub-millisecond proxy latency, super fast. We script NGINX with LuaJIT, which is one of the fastest scripting runtimes in the world. That combination of NGINX and LuaJIT is quite rare, but it's the best for performance, and when you're doing microservices, speed is the number one feature; you can't be slow. It's also platform-agnostic: it runs everywhere NGINX runs. And it's very fast to get started. You can download it and launch Kong in under five minutes; the sketch that follows shows roughly what that looks like. There's also a big community: we have 107 meetups around the world with 10,000 members, there are now over 80 contributors to the Kong code base, about 25 of them from our company, and it's the most popular open source API gateway on GitHub.
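For reference, the under-five-minutes launch at the time of this talk looked roughly like the following, using the official Docker images. A sketch, assuming a Kong 0.x image and its documented environment variables; container names are illustrative:

```bash
# Start a Postgres datastore for Kong (version is illustrative).
docker run -d --name kong-database \
  -e "POSTGRES_USER=kong" -e "POSTGRES_DB=kong" \
  postgres:9.5

# Run Kong's schema migrations against that database
# (pre-1.0 versions used "kong migrations up").
docker run --rm --link kong-database:kong-database \
  -e "KONG_DATABASE=postgres" -e "KONG_PG_HOST=kong-database" \
  kong kong migrations up

# Start Kong: 8000/8443 proxy traffic, 8001 is the Admin API.
docker run -d --name kong --link kong-database:kong-database \
  -e "KONG_DATABASE=postgres" -e "KONG_PG_HOST=kong-database" \
  -p 8000:8000 -p 8443:8443 -p 8001:8001 \
  kong
```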
So, coming back to the numbers: this is actually a snapshot of running instances. Kong phones home, and you can disable that, but for the instances that don't disable it or block it at the firewall, you can see the growth. For us this is one of our real KPIs of how Kong is doing, because downloads are a bit of a vanity metric: you have bots, you have CI, you have pulls from Docker Hub, so it's very easy to get big numbers. Running instances is the key metric.

Now, deployment patterns. You remember this one from my previous life: the centralized deployment, the way gateways have been deployed for probably twenty years, the big edge gateway. We now also support decentralized deployments, which are very good for Kubernetes, Google Container Engine, the AWS container services, with Kong distributed in the pods themselves. The real difference is this: in the centralized model, every request always has to traverse the network to reach the gateway, and we always say you can't trust the network. In the decentralized model, Kong runs as a sidecar alongside each microservice, just a process next to it, so you remove that network hop and the gateway latency basically goes away. So this is the sidecar deployment versus the classic API gateway deployment, and the sidecar pattern has now taken on a name of its own: the service mesh. We're evolving the solution to support it. If you use Kubernetes and you have a lot of pods, the decentralized model is probably the way to go for internal traffic between your teams, and for edge traffic you can still keep a Kong at the edge for clients and partners. How many of you use Kubernetes, or Mesosphere, or Docker Swarm? Mesosphere is not very popular here, not much.

Authentication is a good example of what Kong does. You can apply it to your APIs with a curl request: key authentication, or the most popular ones, OAuth 2.0 and JWT. For enterprises there's OpenID Connect, which is very important and also very, very complex. Those are the plugins that come out of the box, and on top of them you can always extend the platform with custom plugins.

Serverless. How many of you use Lambda? So serverless is an interesting one. In a way it's a threat to containers, because you don't need to care about container orchestration, container monitoring, container anything. You just deploy a function into Lambda and forget about it. It's not for every use case; it's more for event-driven software, but there's a lot of software for which serverless is very good. So for us it's very important to support all the different serverless platforms: AWS Lambda, OpenWhisk, the open source one from IBM, which we support as well, Google Cloud Functions, Azure Functions. We support most of the major serverless platforms, so you can invoke your functions directly from Kong on your serverless infrastructure; there's a small sketch of that next. For example, AWS has its own AWS API Gateway that you can chain in front of Lambda, but that gateway adds something like 250 milliseconds of latency, it has a different feature set, it's consumption-based pricing, you cannot extend it, and it's not multi-cloud. The beauty of Kong is that you can run it everywhere, it's not consumption-based, it's open source, and it doesn't tie you to one specific cloud or one specific solution.
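A minimal sketch of that serverless invocation, using the open source aws-lambda plugin against a Kong 0.x Admin API. The API name, region, function name, and credentials are hypothetical placeholders:

```bash
# Attach the aws-lambda plugin to an existing API (here, "checkout"),
# so matching requests invoke a Lambda function instead of an upstream.
curl -i -X POST http://localhost:8001/apis/checkout/plugins \
  --data 'name=aws-lambda' \
  --data 'config.aws_key=AKIA-EXAMPLE' \
  --data 'config.aws_secret=EXAMPLE-SECRET' \
  --data 'config.aws_region=us-east-1' \
  --data 'config.function_name=checkout-handler'
```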
Here's how you apply a plugin to Kong. Kong has an Admin API on top, and you just POST the plugin configuration, in this case rate limiting with a number of requests per hour, and you apply it to the API you want. You can apply it to all your APIs, to one specific API, to a group of APIs, or to consumers: a single consumer or a group of consumers. It's very easy and it's fine-grained. On top of that you also have custom plugins that you can always write to extend Kong and put more logic into the life cycle of the request. I'll put a concrete sketch of these calls at the end of this part.

By the way, we are fanatic about usability, fanatic about being simple to use, so we have over 50 different ways to deploy. I think it's very important for developers to just click what they want and get up and running as soon as possible without thinking about it. On AWS you tick a few boxes, enter your credentials and your region, and it automatically spins up Kong in your AWS region with a load balancer for high availability and a database. It also runs on Google Cloud Platform: with Google Cloud Launcher you tick a button and it stands up Kong for you. But I think most of our downloads are people who want to run it on a bare-metal operating system. In between, there are the packages for the container orchestrators. And then there's source, if you want to build from source and make your own version of Kong.

So that was the Community Edition, at large. There's also the Enterprise edition, which we launched a month ago as part of our enterprise offering as the Kong company. We have customers across the board, in different verticals. It's not just finance or e-commerce; it's pretty much global, spread across geographic regions but also across verticals: car manufacturers, media, security, finance, government agencies. It's big and broad, which makes us valuable for a lot of different folks, not specific to a narrow few.

Honestly, we raised over $20 million from well-known investors in Silicon Valley: Andreessen Horowitz, Jeff Bezos from Amazon, Eric Schmidt from Google. Most of the money we actually didn't need; it came in because Kong was growing so fast that we decided to accelerate adoption and enterprise development with more engineers, and also global coverage at the company level. So we now have people all over the world for global coverage in sales and support, and, I'll talk more about this later, we have a great customer success engineering team.

Kong Enterprise is divided into six major groups. The first is the graphical interface: Kong Community comes with an API and a CLI, and Kong Enterprise adds a GUI. Then there's the security pack, the developer portal, analytics, scalability, and support and customer success. Those are the six areas that make Kong Enterprise better suited for large organizations.

The first one is the Admin GUI. It's built on top of Kong Enterprise and allows you to manage it from a graphical interface. You can still use the API and the CLI for programmatic access, but for ease of use you get a graphical interface as well. It's built in JavaScript with Vue.js. I don't know how many of you use Vue.js; it's a very good, recent JavaScript framework.
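Here's the promised sketch of those Admin API calls, again assuming the Kong 0.x Admin API and the hypothetical "orders" API from earlier; key values and names are illustrative:

```bash
# Rate-limit one API: 100 requests per hour per client.
curl -i -X POST http://localhost:8001/apis/orders/plugins \
  --data 'name=rate-limiting' \
  --data 'config.hour=100'

# Or apply the same plugin globally, to all APIs at once.
curl -i -X POST http://localhost:8001/plugins \
  --data 'name=rate-limiting' \
  --data 'config.hour=1000'

# To scope policies to consumers, identify them first,
# for example with key authentication:
curl -i -X POST http://localhost:8001/apis/orders/plugins \
  --data 'name=key-auth'
curl -i -X POST http://localhost:8001/consumers \
  --data 'username=alice'
curl -i -X POST http://localhost:8001/consumers/alice/key-auth \
  --data 'key=alice-secret-key'

# Callers now authenticate through the proxy port,
# and per-consumer plugins and quotas become possible:
curl -i http://localhost:8000/orders -H 'apikey: alice-secret-key'
```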
And the beauty of it is that it's not a separate application or separate software you need to run and manage. It ships directly on port 8002, served by Kong itself, so Kong effectively acts as the backend, and the GUI is just front-end JavaScript code that reads from the Admin API. There's no separate backend code, no spaghetti code all over the place. It's very simple and clean. And port 8001 is the Admin API itself.

Security, I think, is one of the major reasons people move to Enterprise. OpenID Connect is very important for enterprise folks, LDAP as well. Those are the commercial plugins available in the enterprise package. OpenID Connect took us around four months to build properly: there are so many edge cases, so many variables, it's very complicated. We support most of the flows and most of the popular OpenID Connect providers, so it's pretty comprehensive, and it's probably one of the most valuable features in the enterprise edition. OAuth 2.0 introspection is another one: if you have an existing OAuth 2.0 server that's been running since 2011, you can connect Kong to it and validate tokens against it. Then role-based access control: as a company you can decide that this team can only view APIs, this team can edit APIs, this team can delete APIs. Without role-based access control, everybody with access to the Admin API can delete APIs, tear down the cluster, do pretty much everything. So RBAC lets you manage your team, and you also get audit logs so you know what's happening.

Right, the developer portal. API management has classically meant three things: gateway, portal, analytics. This is one of the three foundational pieces. It allows you to publish APIs, onboard developers, and manage developers; you can customize your own portal; and it supports the OpenAPI spec. What about Swagger, RAML, API Blueprint? Well, Swagger was renamed OpenAPI, and I think OpenAPI will be the standard; RAML and API Blueprint will fade out as OpenAPI becomes the de facto format for API specs. Now, it's a little unfortunate that, as we head toward 2020, developers and machines still have to hand-write a spec, when the gateway already has all the information and could actually help generate the spec. So I think eventually specs will become an underlying technology we don't interface with directly: the gateway will auto-generate them for you. But the developer portal is the first step for developers to publish documentation. It's fully customizable, so you can have a custom CMS at developers.yourcompany.com. It's very good for edge and partner consumption, it's very good for public consumption, and it's also getting used internally: when you have a lot of internal APIs, it's very useful for internal developers too. I should know, because this goes back to the Mashape marketplace, which we built for five years for exactly this kind of use.

Vitals and Analytics. Vitals is a term we coined inside the company. It's focused on the health of Kong itself: the cluster, the caching, how Kong is doing. So it's pretty much specific to Kong.
Analytics, on the other hand, is something we've been doing for seven years: business metrics on top of the API traffic. Now, a lot of folks already use a logging system, right? Datadog, New Relic, Splunk, maybe Elasticsearch. Which one do you like the most? Elasticsearch, yeah, the good old ELK stack, Kibana and everything. So Kong has log plugins that can ship that data into your favorite logging system, and you can visualize Kong's traffic there; there's a small sketch of that after this part. Our built-in analytics are nice to have, but it's always a mix. For simple metrics, the built-in analytics are fine, but when you want tracing and logging, we always suggest you just have Kong send HTTP, TCP, or UDP logs into your favorite logging system, because that's the tool your company already understands internally. That's the best way to push the data.

[Audience question, partly inaudible.] Yes. So, this is another important thing: you can see what's happening at the quota layer, what's going on per consumer, and you can publish that information to the consumers, so they know how much of their quota is left. Think about it as a business intelligence platform that runs on top of Kong. This UI, too, is built in JavaScript with Vue.js, with WebSocket connections feeding the charts, so they're pretty fast, almost real time, about a five-second delay. You can see API requests by consumer and by API; you can see round-trip latency versus proxy latency; also caching. It's around 10 or 15 different metrics you can track.

Scalability, the last part of the feature set. Caching is obviously very important, to cache responses at the edge; if you have a multi-data-center deployment, you want to cache responses. Backups: if your cluster goes down, you can back up, import, and export the configuration. This is also very valuable if you use Kubernetes, because you can keep the declarative configuration alongside your cluster config, push it into Kong, and Kong will spin it up for you. And enterprise rate limiting for multi-data-center deployments, where you get consistent rate limiting across different regions. That's very hard in a distributed system, precisely because of network latency: if you want a rate limit enforced in a London cluster that's kept in sync with a Kong cluster in Virginia, there's network latency over the ocean of around 80 milliseconds. You always have to account for that overhead and keep the two clusters in sync. The enterprise rate limiting gives you that consistency when you go multi-region; if you're single-region, the open source, community rate limiting is honestly strong enough.

And then the last part of the package, and this one is really human power. We don't do "support". We have customer success engineers, which is way more expensive for the company, we invest way more, but the experience is much better, because it's proactive: weekly or bi-weekly meetings, not passively waiting for a ticket to be opened and answering "how can I help you". It's really strategic. Some of them report to the office of the CTO, like the solution architects. So it's a full package, very proactive, and customers love it.
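As for the log-shipping integration mentioned a moment ago, here's a minimal sketch using the open source http-log plugin; the Logstash-style endpoint URL is hypothetical, and the tcp-log and udp-log plugins work the same way:

```bash
# Ship request/response metadata for all APIs to an external
# logging system over HTTP (endpoint URL is illustrative).
curl -i -X POST http://localhost:8001/plugins \
  --data 'name=http-log' \
  --data 'config.http_endpoint=http://logstash.internal:8080/kong-logs'
```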
Customer success is more expensive, but it's a strategy we decided on day one: invest in success engineers, not in support, especially in the early days. Maybe in the future we'll segment into different support tiers, but in the early days we want very high quality around infrastructure this critical, because an API company's success depends on it.

We also have professional services. It's very small, I think less than 5% of the business, and we actually prefer to hand professional services to partners. Professional services mainly means plugin development: if you want custom plugins and don't have time, you can call a partner, or call us, and we'll write custom plugins that connect Kong to your systems. Right now plugins are written in Lua, but next year you'll be able to write them in different programming languages, like Java. So really the only use of professional services is writing custom plugins; everything else is automatically included in the enterprise subscription. We have coverage all over the world: Singapore, London, Tokyo, San Francisco, and Boston.

And then, the last piece: how the pricing works. It's pretty simple, probably the simplest business model in the API market. It's not consumption-based, because consumption is very hard to predict, and it's not fair to charge for consumption when you're running the software on your own infrastructure. This is not cloud; this is on-prem. You run it yourself, so you can run as many nodes as you want, as many CPUs as you want, unlimited requests, unlimited APIs. It doesn't matter. The only unit of value that matters is seats: how many people manage Kong Enterprise. Usually it's the API team or teams, about five, ten, maybe twenty-five people a year. That's the subscription. Those admin seats are the ones that can use enterprise features like the portal; consumers are always unlimited. And they're the ones with access to our success engineers and to our Slack channel for real-time support.

So it's very simple and very customer friendly, because you don't have to think: oh, now I'm running on another machine, I need to pay ten grand more; I'm deploying to more regions, I need to pay more; I bought a million requests per day and now I'm doing five million requests per day because it's Black Friday and I didn't predict the spike. With a microservices architecture it's very, very hard to predict spikes, and because of that we just focus on the people. At the end of the day it's all about the people; the people who use the software are the most important thing.

It's all flat, and there are two support options: standard, Monday to Friday, and premier, 24/7 with down-to-15-minute SLAs and global coverage. Both come with a solutions engineer included, for strategic thinking, not just technical break-fix support. The key thing is really predictability; that's why we decided to go with this model. If you can predict, you can budget: the leadership at a Global 5000 can always budget easily, without constantly asking how much it's going to cost in six or nine months. That's for Kong on-prem. We will also have Kong Cloud, which is managed: we run the ops for you. That's a different model; there's infrastructure involved, and there maybe consumption-based pricing makes sense.
But for on-prem, the per-seat model is what makes sense. Any questions so far?

[Audience] About consistency: is there a data storage side?

Yeah, that's a good question. Let's take this deployment. Those are Kong nodes scaling horizontally, and this is the database backing Kong, where you store all the configuration, like the rate-limiting setup. The database is either Postgres or Cassandra. If you deploy Kong in a single data center, you just use Kong with Postgres, easy. If you go to multi-data-center deployments, then you use Cassandra, and all the Kong clusters can point to the same Cassandra cluster, replicated across data centers. So both Postgres and Cassandra are supported, and for rate limiting we also support Redis, so you can choose among those policies. But the data store holds just configuration; I'll put a small configuration sketch at the end of this answer.

[Audience] Why those two? You mentioned consistency?

Yeah, for multi-region setups, that's why we support Cassandra. Cassandra is built for multi-data-center replication; you can't really scale Postgres across different data centers. In a single data center there's no need for a multi-cluster database, so Postgres is fine; across regions, Cassandra gives you the availability model, and running Redis for the rate-limiting counters gives you consistency. So single data center, multi data center, both are covered. That said, this is a sophisticated feature: you need two or three data centers that all talk to each other before you want the more heavyweight setup.

But think about it: the need for a database will eventually disappear as we move to declarative configuration. When you push to Kubernetes, there's a config in your repo; you just export Kong's config, drop it in, and Kong gets set up from that declarative configuration. So I think the need for a database will go away over time, especially as we move toward service mesh, where a database is too heavy. For now, the best way to run a highly available system is to use proven technology like Redis and Cassandra for storing the config. That's it? Thank you.

[Audience] So now you have two different tracks, the open source one and enterprise. Which one is the future, the enterprise one or the open source one?

Yes. So the company was around 15 people last year and we're now approaching 70, and we've grown enterprise customers about five-x. But of course open source is the core, the foundation. In the engineering team, half of the engineers work on enterprise and half work on community, so both roadmaps keep growing. For example, in the last community version we added load balancing and service discovery, where you can connect Consul through DNS SRV records. In the next community version, coming in December, we're adding things like health checks, active and passive. Those are very important features that elsewhere you have to pay for: if you run NGINX, you need to buy NGINX Plus to get health checks and advanced load balancing; with Kong you get them for free in the open source version. So every two months we have a major community release with a lot of interesting things. Enterprise, and the enterprise engineering team will actually grow bigger, has a lot of features around security, machine learning. But the open source version is of course the core, right? Yep.
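Going back to the datastore question, here's a sketch of how that choice looks in practice, using Kong's documented environment variables and the rate-limiting plugin's Redis policy; host names are illustrative:

```bash
# Single data center: back Kong with Postgres.
docker run -d --name kong \
  -e "KONG_DATABASE=postgres" \
  -e "KONG_PG_HOST=postgres.internal" \
  -p 8000:8000 -p 8001:8001 kong

# Multi data center: point the Kong nodes at a replicated
# Cassandra cluster instead.
docker run -d --name kong-dc1 \
  -e "KONG_DATABASE=cassandra" \
  -e "KONG_CASSANDRA_CONTACT_POINTS=cassandra-dc1.internal,cassandra-dc2.internal" \
  -p 8000:8000 -p 8001:8001 kong

# Rate-limiting counters can use Redis for cross-node consistency:
curl -i -X POST http://localhost:8001/plugins \
  --data 'name=rate-limiting' \
  --data 'config.hour=100' \
  --data 'config.policy=redis' \
  --data 'config.redis_host=redis.internal'
```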
And so for us the community is very important. We are open core, we were born open source, and we know you can't just grow enterprise and leave the community flat, because then you lose the support it's all built on. That's great, thank you.