Good afternoon, everyone. My name is Roman Swoszowski and I work as VP for Cloud Foundry Services at GrapeUp. Today, I would like to tell you a bit more about our work at GrapeUp and what we managed to do with the Cloud Foundry platform in terms of its routing features and capabilities. It all started several months ago, when we started to take a look at the Cloud Foundry platform and its possibilities in terms of routing, and we were wondering how to extend its capabilities in terms of adding new types of traffic and new types of applications it could support. So today I would like to share with you the whole path we went through, starting from the original idea and how it evolved into what we have built right now, which is an extensible architecture for custom routers, capable of plugging in virtually any custom TCP or UDP router. I will also show you what the next steps are and where we want to take it from here.

Okay, but before we get to the TCP and UDP stuff, let me give you a quick overview of what GrapeUp actually is. We are a software and consultancy company. We specialize in designing and building cloud native applications in the first place, but aside from application development services, we also provide generic software-related services around Cloud Foundry: installation, consultancy, configuration, setup, and also some customizations. We have been operating on the global market for over 10 years now, so we have quite a vast experience with software development. So far we have been focusing on the North American market, but since last year we have also been trying to build our awareness and presence here in Europe. As I mentioned, I am currently responsible for the Cloud Foundry services team at GrapeUp, but also for technology overall.
So, what technology the company uses, where we want to go with it, the overall technology vision. Currently our major focus is the Cloud Foundry platform, and not only Cloud Foundry itself, but also the whole ecosystem of tools and technologies built around it.

Okay, so enough about GrapeUp; let's get back to the point. This is the question that we ask ourselves, and that our customers ask us very often: what can Cloud Foundry be used for, and what types of systems, applications, and solutions can you run on it? Well, the answer could be that you can run anything, because you have all those possibilities. But of course there are certain types of applications that run out of the box on top of the platform without much effort, and in the first place these are web applications. These are pretty much technology agnostic, so the technology doesn't really matter here: it could be applications based on the Java stack like Spring apps, it could be JavaScript-based applications, Ruby, Python. We have a whole bunch of different technologies covered by the different buildpacks available in the platform. And of course we have Docker support in Diego, so we could really run anything: we could just pack anything we want into a Docker image, cf push it, and it just works.

But the limitation of web applications is that they support only HTTP traffic, which is quite natural, because they are usually interactive: users interact with them via the web browser, and the web browser communicates over HTTP. So there is of course a bunch of non-HTTP applications which we would like to support. You might have seen yesterday's talk from Shannon Coen from the routing team about recent updates to TCP routing in Cloud Foundry, where he was showing those non-HTTP use cases.
First of all, various IoT protocols like MQTT or AMQP and others. He talked about legacy workloads and also some non-persistent TCP applications and TCP services. These of course extend the capabilities of the platform compared to just web applications, but they are definitely not all the non-HTTP applications we would like to support. There are more IoT protocols we would like to cover, like MQTT-SN for sensor networks or CoAP communication. We could think about some media-related solutions, media streaming or data streaming applications based on RTP transport. We could think about gaming servers, where UDP is the primary type of transport used, or perhaps solutions based on the SIP protocol, which again often uses UDP transport rather than TCP. So to support all of them, we need this.

The conclusion from our side was simple: let's try to build it and add it to Cloud Foundry. At GrapeUp, we have recently been shifting from generic software development to more cloud native application development, but at the same time we have a vast experience and a big track record with building different types of solutions as well, for instance VoIP and unified communication solutions. So we were wondering: can we somehow combine those two areas of expertise and use them to actually introduce UDP support into Cloud Foundry?

Okay, so let's stop for a while and do a quick review of how routing actually works in Cloud Foundry. As you might know, we currently have two types of routing available: HTTP routing and TCP routing. The first one is quite simple. We have the GoRouter component, which is a custom HTTP router implementation built specifically for Cloud Foundry. It's written in Go, that's why it's called GoRouter. It supports routing just HTTP traffic, and only on two ports: port 80 for HTTP and port 443 for HTTPS.
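To make that concrete: GoRouter learns its routing table from route registration messages published by the emitter side (historically over NATS, on the router.register subject). Here is a small Python sketch of what such a message looks like; the helper name is invented, and the exact field set varies between Cloud Foundry versions, so treat the payload shape as illustrative, not authoritative:

```python
import json

def make_register_message(host, port, uris):
    """Hypothetical helper: build a route registration payload like the
    ones GoRouter consumes — which backend (host:port) serves which URIs."""
    return json.dumps({"host": host, "port": port, "uris": uris})

# Register one backend instance for a single hostname.
msg = make_register_message("10.0.16.5", 61001, ["myapp.example.com"])
print(msg)
```

When GoRouter receives such a message, it adds the backend to its in-memory routing table for each listed URI; a matching unregister message removes it again.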
Internally, this diagram shows a very simple architecture of the Cloud Foundry internals. We have the whole Diego runtime, and in addition to the GoRouter we have the Route Emitter component, which is responsible for monitoring the desired and actual LRP states in Diego. To simplify: the Route Emitter checks what routes we want in the platform, and if it detects some changes, it notifies GoRouter with the updated routing table, so that GoRouter can reconfigure itself and support the new routes.

For TCP routing, the current implementation is quite similar; again, Shannon was showing a very similar slide yesterday. We have the TCP Emitter, which serves the same role, and we have the TCP Router, which is the component responsible for reconfiguring the underlying HAProxy, which actually does the TCP routing. But we also have an additional component called the Routing API, which is an externalized subsystem to store and handle all the routing tables, routing definitions, and routing rules. So the TCP Emitter still monitors what happens in the BBS, and once it detects a required change in the routing rules, it updates the entries in the Routing API. On the other side, the TCP Router subscribes to certain events from the Routing API and then, of course, reconfigures the underlying HAProxy. TCP routing is purely port-based; it's a layer 4 router, so we can tell it: okay, route all TCP traffic coming in on a port, let's say 5000, to my application.

Okay, so this was a very quick overview of what we have right now. The missing piece of the puzzle is, of course, UDP routing. It's not there yet, and back when we started thinking about it, there were no plans to add it. So we thought: okay, let's try to build it ourselves. First of all, we wanted to define the goal, so what we actually want to build, which seemed a reasonable approach.
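The emitter/router split described above is essentially a desired-state reconciliation loop: the emitter publishes the desired routing table, and the router diffs it against what the proxy currently serves and applies only the changes. The real components are written in Go; this is just a simplified Python sketch of the diffing step, with invented names:

```python
def diff_routes(desired, actual):
    """Compare the desired routing table (from the emitter, via the
    Routing API) with the routes the proxy currently serves, and return
    the minimal set of changes to apply. Routes are modeled as
    (external_port, backend) pairs."""
    to_add = desired - actual
    to_remove = actual - desired
    return to_add, to_remove

desired = {(5000, "10.0.0.10:61001"), (5001, "10.0.0.11:61002")}
actual = {(5000, "10.0.0.10:61001"), (6000, "10.0.0.9:61005")}
to_add, to_remove = diff_routes(desired, actual)
```

Applying only the diff, rather than rewriting everything on every event, is what keeps proxy reloads cheap as the routing table grows.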
We wanted it to be quite homogeneous, so that the user has a unified experience across all the different types of routing. For instance, we thought that we should have separate components for each type of traffic, and that the actual route definitions should be quite similar. The same should hold on the CLI side: you can do a cf push of your app, which creates a standard HTTP route, but you should also be able to very easily say: for this application, I want a TCP type of route with this port, or a UDP type of route with this port. With that in mind, it looked like a very high-level plan, but we thought, okay, maybe it's enough. We decided to give it a try and build a prototype, and we did it the agile way, very agile, maybe at lightning speed.

So what was the outcome? We built the first prototype, basing it directly on the then-existing routing-release components, so what we had in TCP routing, but we modified them slightly so that they can handle additional configuration: a route not only exposes a certain port, but also comes with a certain protocol. We also extended it so that it uses two types of proxies for routing the actual traffic: HAProxy for HTTP, and a new type of proxy, pen, for TCP and UDP. The internal architecture looks more or less like this. On the right-hand side there is still a similar architecture: we have Diego, and we have the emitter component, which is now called CF Emitter instead of TCP Emitter, because it no longer supports only TCP traffic. We still have the Routing API, which serves the same role. But on the left-hand side, instead of the TCP Router, we have a CF Router component, which was again extended to support UDP, with an additional load balancer API layer, which is a kind of adapter layer to support different types of proxies.
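The key change in the prototype, then, is that a route definition carries a protocol and not just a port, and the load balancer API layer picks a proxy accordingly: HAProxy for HTTP, pen for TCP and UDP. A minimal Python sketch of that dispatch; the class and function names are made up for illustration, the real adapter layer is more involved:

```python
from dataclasses import dataclass

@dataclass
class Route:
    port: int
    protocol: str  # "http", "tcp" or "udp"

def select_proxy(route):
    """Mirror the prototype's split: HAProxy handles HTTP routes,
    pen handles raw TCP and UDP routes."""
    if route.protocol == "http":
        return "haproxy"
    if route.protocol in ("tcp", "udp"):
        return "pen"
    raise ValueError("unsupported protocol: " + route.protocol)
```

So a route like "UDP on port 5000" would be handed to the pen adapter, while an HTTP route keeps going through HAProxy.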
And here we have the pen proxy and HAProxy, with additional interface layers on top of them to be able to communicate with the load balancer. The whole flow was exactly the same as in the current TCP routing: the CF Emitter monitors Diego, detects route changes, and updates the entries in the Routing API server; the CF Router subscribes to the Routing API, and once it gets new updates, it reconfigures the appropriate proxy via the CF pen API or CF HAProxy API, those additional adapter layers. It worked, it worked.

But unfortunately, just like with many R&D projects where you try to build something very rapidly and learn from your mistakes, back then the TCP routing was not really documented, so we had to reverse engineer it to see how it works. And then there comes a moment where you start to see that everything you've just built is not as great as you thought it was, and it's really not as good an idea as it initially seemed. As you take a closer look, you notice that, well, all you've built is just this. So what's left in such a situation? You need to clean it up, right? So we cleaned it up, we threw it away, we did a retro, and we started over. Having in mind all the things we did wrong, we thought: okay, maybe let's not do this as agile as before, and think a bit more about what we actually want to build.

So we tried to prepare more exact, detailed requirements for how this should work, so that it really makes more sense and is more reasonable. First of all, we want it to be flexible: we want to support any type of TCP or UDP protocol, either generic traffic routing or, for instance, higher-level protocols like MQTT or CoAP or anything else. Additionally, we would like it to be extensible, or pluggable, so that you can take any proxy, like HAProxy or pen or NGINX, or even your own custom implementation, and just plug it into the architecture.
Of course, we want it to be scalable so that it can support high-volume traffic; extensible, again, to support custom implementations of proxies or routers; and easy to deploy, because the prototype was quite difficult to set up. It required lots of manual steps and some tinkering to actually make it work, so eventually we would like it to be deployable with BOSH and automated.

With all that in mind, we came up with a new architecture, which again is somewhat similar, but different in that we externalized the whole router component to be more self-contained. The CF Emitter and Routing API still serve the same role, but on the left-hand side we have the CF Router, which is more generic on the one hand, and then we have this xxx-router, which is the actual adaptation layer allowing you to plug in any router executable, like HAProxy or pen or NGINX or whatever else. Of course, this is just a template, so eventually we'll have the CF Router, then, for instance, an HAProxy router adapter, and at the very bottom HAProxy itself. And it can route more than just one specific type of traffic.

So with that we thought: okay, now we have a complete set, because we can support TCP and we can support UDP with pen at the bottom. Additionally, we could even build our own HTTP routing and replace the GoRouter, which is built into Cloud Foundry. That's not necessarily required, because GoRouter works well, but the limitation it has is the fixed ports, port 80 and port 443, so with a custom HTTP router we could possibly define a route with any input port. So, in general, it really seems like a complete set to support all those different non-HTTP use cases, for instance CoAP and the other protocols and applications I talked about in the beginning. But, to be honest, we need to answer the question: is it really complete right now?
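The xxx-router template described above boils down to a small adapter contract: any router executable can be plugged in, as long as its adapter knows how to turn a routing table update into a reconfiguration of that executable. A Python sketch of that registry idea; the interface and names here are invented for illustration, and the actual contract in the code base may differ:

```python
class RouterAdapter:
    """Adapter contract: translate a routing table update into a
    reconfiguration of some underlying router executable."""
    def apply(self, routes):
        raise NotImplementedError

class PenAdapter(RouterAdapter):
    def apply(self, routes):
        # In the real component this would rewrite pen's configuration
        # and reload the process; here we only record the routes.
        self.routes = list(routes)
        return "pen reconfigured with %d routes" % len(self.routes)

# Registry of pluggable adapters, keyed by router name.
ADAPTERS = {}

def register(name, adapter):
    ADAPTERS[name] = adapter

register("pen", PenAdapter())
result = ADAPTERS["pen"].apply([("udp", 5000, "10.0.0.1:61001")])
```

Adding support for a new proxy then means writing one adapter and registering it, without touching the CF Router or the emitter side at all.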
I'm sure that you're also wondering: okay, so does it mean that we now have production-grade UDP support in Cloud Foundry? Well, that's not entirely true, because this is still a work in progress. What do we want to do next with it? First of all, it's available in our Bitbucket account, so you can just go there and see the code. Again, we are still working heavily on it; what we're trying to do right now is polish it a little so that it's more of a plug-and-play type of thing, because right now it's still not fully automated. Preparing BOSH releases is a pain; yeah, the guys from the team can prove that.

What's also important: we originally forked the code base from open-source Cloud Foundry version 238, but to actually implement the UDP support, we modified it quite heavily. Right now, the version that's available is 243, as far as I remember; I think it was released last week. So the version we have with the UDP support is not entirely compatible with the newest version. This is, again, something we need to figure out: how to easily merge the new upcoming changes from upstream into this solution. But hopefully, we'll be able to integrate our changes into the open-source version in the near future. Or at least, if not integrate them entirely, we would definitely like to share it with the community, get feedback, and see some opinions on whether it's really something that you think is required in the platform. My feeling is that it is needed, because it will definitely extend the capabilities of the platform itself. We can now see that we are building more and more types of applications and solutions and putting them into Cloud Foundry, and with this additional type of traffic supported, our possibilities are even bigger.

Okay, so I think that's it. I would like to thank you for your attention, and if you have any questions, I would be glad to answer them.
Now I think we have a few more minutes left. Okay. Well, we thought that it would be easier to use the pen proxy for both TCP and UDP routing, because they're quite similar, and the actual interfaces on the implementation side were just easier to implement. But there are no reasons whatsoever why you cannot use HAProxy for TCP in terms of, I don't know, performance or things like that. Because yeah, yeah, exactly. Any other? Yes.

Actually, we've modified quite a bit of Cloud Foundry, including the Cloud Controller, but also a whole bunch of components from the Diego runtime itself. So yeah, that's what I mentioned: those are quite heavy modifications. That's why it's still not a plug-and-play solution, but we'll try to make it less coupled with the actual Cloud Foundry architecture. Right now, I think we've modified more than half of the Diego components, and the Cloud Controller as well. So these are really heavy changes in the whole platform. Yes.

Not yet. I've talked to Shannon, and right now, as we've shared the code base, I would be more than glad to somehow contribute it upstream. So yeah, I will definitely try to connect with the team and share this work with them to be able to integrate it. Okay. So I think that's it for now. Thank you very much. And if you would like to