Good afternoon. My name is Atul Kshirsagar. I'm an engineer at GE Software, and I've been working with Cloud Foundry's Diego team as a full-time member for the past five to six months. I'll be talking about how GE Software, in collaboration with Pivotal, is adding non-HTTP routing capability to Cloud Foundry. This is going to enable routing support for IoT protocols in Cloud Foundry, as well as support for any layer 7 protocol that runs on TCP. So why is GE interested in IoT at all? By show of hands, how many of you have heard about the industrial internet? Wow, everyone has heard about it. That's nice. So you all know that the industrial internet is the internet of really big things. It's also the internet of really important things. Now, I'm going to walk you through a very typical use case that GE has, show you why IoT protocols are important, and try to show you how Cloud Foundry fits into this entire scheme of things. Most of you would know that GE is a big industrial company with a presence in many industries: aviation, transportation, health care, oil and gas, power generation, and many more. The example and use case that I'm going to show you today is from our power generation business. What you are seeing here is a typical IT/OT deployment and topology. On the bottom left are our remote stations, where you have those big turbines that are generating electricity. On the top left, you see the control centers, which manage these remote stations. And on the bottom right is our data center, which has our industrial apps that run on the cloud platform for industrial internet. If you notice, in the remote stations you have those PLCs marked with a little P. That is our Predix Machine, the embedded part of our software. 
This Predix Machine software talks to the turbines and controls the operations of the turbine using a variety of OT protocols, like OPC UA, Modbus, MTConnect, Profinet, and many more. If you look at the diagram, you'll also see a bunch of sensors fitted on those turbines. They measure various operational parameters, like level, pressure, and flow, and that data is sent to Predix Machine over an IoT protocol like DDS. This data in the remote station is stored in a time series database, a proprietary time series database of GE called Historian. That data is then sent over a satellite uplink to our control centers. The control center is your typical corporate enterprise network, and you have a bunch of apps running there. These apps run edge analytics so that the data can be shown to the field engineers and experts in the control center, who can actually manage and take real-time actions. Now, you see there is a little P over there in the control center, which is our Predix Gateway. This Predix Gateway gets the data, compresses it, filters it, and sends it over WAN to the industrial apps running in our data center. These industrial apps run predictive analytics, do advanced analytics, give you predictive insights, and all the big data stuff happens from there on. So this Predix Gateway, as well as a bunch of apps running in the control center, send data to the apps in the data center over multiple protocols: HTTP, obviously; DDS; S2S, which is our proprietary binary protocol; MQTT; AMQP; and a bunch of other things. Now, let's zoom in on our data center. This is where Cloud Foundry comes into the picture. Our cloud platform for industrial internet, called Predix, is based on Cloud Foundry. We have a bunch of apps and industrial microservices that run on Cloud Foundry. 
The key part here is we have to get the data from our Predix Gateways, from apps, sensors, or devices, to the apps running on Cloud Foundry. Now you get a picture of our protocol landscape from GE's point of view. Let's take a quick look at how the protocol landscape plays out in a regular IoT use case. The Eclipse Foundation recently conducted a survey of IoT developers, asking what messaging protocols they use in their IoT solutions. There are two things you should notice about this survey. First, obviously, it's not only HTTP; there are many more protocols. MQTT is there, CoAP, XMPP, DDS, and many more. And if you look real close, it adds up to more than 100%. Why is that? Any guesses? People use more than one protocol in their IoT solutions. It's not just one; there are more than one. And that's the real point I'm trying to make here. The world is multi-dimensional. We need multiple protocols, not a single protocol. And that is where we need support for multiple protocols in Cloud Foundry. Now, in an IoT use case, HTTP has certain drawbacks. HTTP has large overheads, doesn't offer QoS, is not very well suited for large payloads, and doesn't offer sticky sessions. Those are the disadvantages of HTTP from an IoT point of view. So now that we know IoT has multiple other protocols in play, what happens today in Cloud Foundry? You have this bunch of apps deployed in Cloud Foundry. Let's say these are 12-factor apps, web apps, if you will. Your web clients or browsers will happily talk to your apps, because GoRouter understands HTTP very well. If you want to talk to app one, GoRouter can do it. It knows how to forward the request to your app, and everyone's happy. Now, let's see what happens in the IoT use case. In comes our sensor, device, or gateway. It doesn't want to talk HTTP. 
It wants to talk maybe MQTT. GoRouter has no clue what to do with this. Well, it can't do anything. The connection drops. Everyone's unhappy. So now, if you have to use Cloud Foundry in an IoT use case, what do you do? Is there an answer existing today? What is the workaround? CF services. That's what people do. You host some part of the system, or some part of your app, outside of Cloud Foundry. That is the part that receives the data over non-HTTP. Then you use the CF service broker APIs to expose a service endpoint into Cloud Foundry. The apps bind to that service endpoint and get the data. All works well. But it turns out there are certain drawbacks. First up, you are adding an additional hop to get the data, so there is additional latency. Second, you have to manage that part of your service or app outside of CF. So you don't get all those things that CF offers you: it's not going to manage the health of your apps, it's not going to move your apps if they crash, it's not going to scale them. You have to do that by yourself. But most importantly, it's not going to work for all the protocols. MQTT, as I have shown here, will work very well because it's a broker-based protocol. You can have your broker outside of CF and the apps as clients to that broker, and everything works fine. But what about peer-to-peer protocols like DDS? Want to put your apps out there? Then what's Cloud Foundry for? It's not going to work. So it would be great if Cloud Foundry had the capability to route non-HTTP traffic directly to the apps. If you have that, you take Cloud Foundry one step closer to being the platform of choice for the IoT use case. We want that. So now we know that for GE, IoT is definitely something that is required, and this is a gap that needs to be filled. Let's see how we can add this capability. Like any good engineering team, we came up with a couple of options. Let's lay out those options. 
Let's see what is involved in each, and let's pick the one that's right at this point in time. So I'm going to walk you through those two options, give you a brief overview, and then we'll take a deep dive into the approach we have decided to go with. The first option that we considered is a layer 7 router. The existing HTTP router, GoRouter, is a layer 7 router. So we thought, that's nice. We can just extend that. We might as well put a routing framework around it so that you could have pluggable, layer 7, protocol-specific proxies. You could have a proxy for MQTT, a proxy for DDS, a proxy for XMPP, and so on and so forth. This routing framework will route the traffic that it receives on its external port to the particular proxy. So let's say it receives traffic on port 1883; it routes it to the MQTT proxy. That routing could happen based on the well-known port for that particular protocol. Now, it's the responsibility of that proxy to actually route the traffic to the appropriate instance of the app. How it is going to do that, I'll talk about a little bit later; that's where the bigger challenges are, and we'll see what those challenges are. So if you look at this approach, the advantages are pretty clear. You are going to use standard ports, which is very natural. Just like you use HTTP on 80 and 443, you'll be able to use MQTT on 1883 and XMPP on 5222. It's very scalable: there is no limit on the number of apps that can receive this non-HTTP traffic. It's very extensible: you can add support for whatever protocol you want by developing a proxy and plugging it in. You can develop routing services, like the route services that are being developed for GoRouter. And you can implement any cross-cutting concerns at that layer 7 protocol proxy. 
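The dispatch step of that framework can be sketched in a few lines. This is a minimal illustration, not actual Cloud Foundry code; the port-to-proxy table and all names here are my own assumptions:

```python
# Hypothetical sketch of the layer 7 routing framework's dispatch step:
# pick a protocol-specific proxy based on the well-known external port
# the traffic arrived on. The table and proxy names are illustrative.

WELL_KNOWN_PORTS = {
    1883: "mqtt-proxy",   # MQTT's registered port
    5222: "xmpp-proxy",   # XMPP client connections
    5672: "amqp-proxy",   # AMQP
}

def pick_proxy(external_port):
    """Return the proxy registered for this external port,
    or None if no proxy handles it."""
    return WELL_KNOWN_PORTS.get(external_port)
```

So traffic arriving on 1883 goes to the MQTT proxy, and an unregistered port has no handler; the hard part, as described next, is what the proxy does after that.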
On the flip side, you'll see that you need to develop protocol proxies for each individual protocol that you need support for. That is obviously additional overhead and more complexity. But more importantly, and this is where we'll dig a little deeper into how these layer 7 proxies can route traffic to particular app instances, it's very difficult for these proxies to know where they should route a particular connection. They end up using some non-standard ways, or some hacks, in order to achieve that. So let's spend some time on this to understand what's involved and why it is so complex. In the case of HTTP, it's relatively easy because the HTTP spec mandates a Host header in your HTTP request. What is this Host header? This Host header is your route. It's your app1.cfapp.com. That comes in your HTTP headers. So GoRouter can look at this Host header, look at its routing table, and route the traffic to a particular app instance. Let's take a look at the headers from one of the IoT protocols. I'm picking on MQTT here because it's the most popular; that's what we saw in the Eclipse Foundation survey. MQTT has a two-byte fixed header. The first byte is the message type and a bunch of flags. The second byte is the remaining length. It's followed by a bunch of variable headers, which differ based on the message type. The most relevant message type for this discussion is the CONNECT message type, which is the first message sent by the client to the broker in order to connect. Here, the variable headers are the protocol name, version, connect flags, and keep-alive timer. After this comes the payload. If you look at these headers, there is nothing similar to the Host header in HTTP. So there is no way for an MQTT proxy to determine the logical target just from the standard protocol headers. So now it has two options. It can add its own custom headers. 
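To make the problem concrete, here is a minimal Python sketch of parsing the fixed header just described. It is in no way a full MQTT implementation (for one, it assumes a single-byte remaining-length field, while the real encoding is variable-length), but it shows that nothing in those bytes carries a route the way HTTP's Host header does:

```python
# Sketch of parsing the two-byte MQTT fixed header described above.
# Byte 1: message type in the high nibble, flags in the low nibble.
# Byte 2: remaining length (the real encoding is variable-length,
# 1-4 bytes; a single byte is assumed here for simplicity).

MQTT_CONNECT = 1  # the CONNECT message type

def parse_fixed_header(packet):
    message_type = packet[0] >> 4
    flags = packet[0] & 0x0F
    remaining_length = packet[1]
    return message_type, flags, remaining_length

# For a CONNECT packet, the variable header that follows holds only
# the protocol name, version, connect flags, and keep-alive timer.
# There is no field a proxy could use as a logical routing target.
```

Parsing the first two bytes of a CONNECT packet, say `0x10 0x0C`, yields message type 1 with twelve bytes remaining; nothing in there identifies the destination app.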
If it adds its own custom headers, it expects clients to include those custom headers. If you do that, you need client-side updates, because existing clients are not going to send that header. Can you do that in IoT? Maybe, maybe not. It's probably not practical to go and update hundreds of thousands of your devices with client-side updates. So what do you do? Then you'll have to come up with some non-standard hacks, or tricks, depending on who you ask. The case in point here is the SSH proxy: the SSH support that we are adding to Diego, and how it achieves routing of SSH connections to the appropriate app instance. What it does today is expect the username to be the process GUID and index. Then it knows: this is the process GUID, this is the index, I can look them up in the routing table, I know what host IP and port it is, and then I can route it. So the clients need to provide the process GUID and index as the username. Now, it will be a little bit better on the SSH side, because we are going to provide an SSH plug-in for the CLI, so the user doesn't have to worry about finding the process GUID and index and all that. For IoT, that is going to be difficult. So we have to see if there is an easier way to achieve this kind of routing. And if there is an easier way, we should probably go with that and see how it plays out. As it turns out, there is an easier way: we can do layer 4 routing. While we were looking at these various options, Pivotal was also looking at routing SMTP traffic to its apps, and it was considering the same option. So we collaborated and detailed this approach out. And this approach is pretty simple and straightforward. Here, we are going to map the external port of a router to a particular app. What does that mean? Let's take an example. Let's say there is an app 2, and it wants to listen on port 5222; it wants to listen for XMPP traffic. 
So it maps itself to port 5222 of TCP router 1, and it gives out the front-end IP of TCP router 1 and port 5222 to its clients. That's what they will use to connect and talk to app 2 over XMPP. Now let's say there is another app, app 4, that also wants to listen on port 5222. It has two options. It can map itself to some other port on TCP router 1 and give out the front-end IP of TCP router 1 and that non-standard port to its clients, so they can talk to app 4. Or, if it or its clients are not very tolerant of non-standard ports, it can map itself to another TCP router instance, which has a different front-end IP, and map to the same port 5222. So the idea here is you use port address mapping, or port address translation. By virtue of receiving the traffic on that port, you know which app the traffic is meant for. And I promise we'll go into more detail and see what it actually takes to do all that. But at this point, it's good to take a look at the pros and cons. You are going to get support for almost all layer 7 protocols from the word go. You don't have to write any proxies. You don't have to worry about coming up with non-standard ways of developing routing mechanisms for those proxies. Complexity is highly reduced, of course. On the flip side, you might have to use non-standard ports for your IoT protocols. The major concern here, though, is scalability. There are two types of scalability to be considered. One: if you are tolerant of non-standard ports, you are limited to a theoretical 64K apps that can receive non-HTTP traffic per router instance. But the workaround for that is to have more router instances in your deployment, and you can overcome that scalability issue. 
The other is more serious: if you are not tolerant of non-standard ports, then you are limited by the number of front-end IPs your IaaS can give you. And there is really no good answer for that unless your clients become tolerant of non-standard ports, in which case, as things evolve, we might go with a hybrid approach. But as of now, this is the one we are going with. As I promised, I'm going to dig a little deeper and take a closer look at how we are going to develop support for this TCP routing in Cloud Foundry. If you attended Onsi's talk in the morning, he indicated that TCP routing is going to be part of Diego. What that means is that it's not going to be available for apps that are running on DEAs. This will be developed based on Diego and will be part of Diego. So, one more reason for you to adopt Diego. This is the architectural block diagram of Diego. I'm not going to go into the details of this, but I am going to talk about the new components that we are going to add. There are two new components. First is the TCP route emitter. Now, this is a logical component; whether it becomes part of the existing route emitter or is a separate component is to be determined, and we will decide that as we execute. But functionally, what this TCP route emitter does is subscribe to the server-sent event stream from the Receptor, which lets it know about changes in actual and desired LRPs. That means when an app is placed, gets moved, or gets scaled, we'll be notified about it. The route emitter then takes that information and gives it to the TCP router, which maintains the routing table: the mapping of front-end IP and port to your back-end IPs and ports. So as soon as data is received, it can be routed to the appropriate app instance. 
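Functionally, the routing table the TCP router maintains can be sketched like this. It's a simplified Python illustration, not the real component; in reality the updates would be driven by the Receptor's event stream rather than direct method calls, and the IPs and ports below are made up:

```python
# Illustrative sketch of the TCP router's routing table: a mapping
# from a front-end (router IP, external port) pair to the set of
# back-end (host IP, port) pairs for the app's instances.

class TcpRoutingTable:
    def __init__(self):
        self.routes = {}

    def register(self, frontend, backend):
        # Invoked when an actual LRP appears, e.g. an app instance
        # is placed or moved onto a cell.
        self.routes.setdefault(frontend, set()).add(backend)

    def unregister(self, frontend, backend):
        # Invoked when an instance goes away (scaled down or crashed).
        self.routes.get(frontend, set()).discard(backend)

    def backends(self, frontend):
        # All live back ends for this front-end IP and port.
        return sorted(self.routes.get(frontend, ()))

table = TcpRoutingTable()
table.register(("10.0.0.1", 5222), ("10.10.1.5", 61001))
table.register(("10.0.0.1", 5222), ("10.10.1.6", 61002))
```

When an instance is scaled away, `unregister` drops its back end, and new connections on that front-end port only go to the instances that remain.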
Now, if you notice here, I have added the TCP router and TCP route emitter as part of Lattice, because this will be packaged as part of Lattice. That is going to be our first deliverable: we'll develop these components and ship them as part of Lattice, so your Lattice apps can receive non-HTTP traffic. There are, obviously, a lot more things involved and a lot of things we need to consider, and I'm going to talk about them, but by no means is this an exhaustive list. I'm sure if Shannon is here, he is going to add a lot more; he's our PM for this project. Obviously, we are going to route TCP traffic, and I'm calling out TCP here because we are not going to support UDP at this time. If there is a need and a requirement, we will add UDP support in the future, but that's not on the cards as of today. One of the important things we are going to consider is zero downtime. Whenever there are config changes happening, when a new app gets placed, or an app gets scaled up or down, we have to make sure that the existing connections do not go down. Depending on whether we use HAProxy for the TCP router, or Switchboard, which is our existing TCP router used in services, or we write our own, that's one of the main things we will be doing: making sure there is zero downtime for existing connections. If you have more than one app instance, we will provide a way to do load balancing, either round robin or weighted round robin. We will provide a way to do health checks, because we don't want to route traffic to an app instance that's down or crashing; we want to make sure we route it to an app instance that's up and running. We will provide some kind of traffic shaping: limiting the number of connections that can be made simultaneously, the connection rate, so those things can be controlled. 
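The round-robin-with-health-checks behavior just described can be roughly illustrated as follows. This is purely a sketch under my own assumptions; the actual mechanism, whether HAProxy, Switchboard, or something custom, is still to be decided:

```python
# Hypothetical sketch of round-robin back-end selection that skips
# instances which have failed a health check.

import itertools

class RoundRobinBalancer:
    def __init__(self, backends):
        self.backends = list(backends)
        self._ring = itertools.cycle(self.backends)
        self.healthy = set(self.backends)  # assume all healthy at start

    def mark_down(self, backend):
        # A failed health check takes the instance out of rotation.
        self.healthy.discard(backend)

    def next_backend(self):
        # Walk the ring at most one full lap, skipping unhealthy instances.
        for _ in range(len(self.backends)):
            candidate = next(self._ring)
            if candidate in self.healthy:
                return candidate
        return None  # no healthy instance left: refuse the connection
```

A weighted variant would simply repeat a back end in the ring in proportion to its weight; the health filter works the same way either way.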
We have to manage back-end ports so that we don't run out of them. That could happen because of buggy TCP clients that don't close the connection: the ports go into the FIN_WAIT_2 state, and then we run out of ports. We have to manage rolling deploys so that we give enough grace time to the existing apps and clients to shut down their connections gracefully before they are upgraded. Obviously, we have to provide a way to reserve the front-end IP and port combination. That's going to be achieved through changes to the Cloud Controller, and the Cloud Controller will make sure there are no conflicts: if two apps try to get the same front-end IP and port combination, it will detect that and prevent one of them. We'll provide a way for you to add, remove, and show these mappings, which means CLI changes will be done, and you will also be able to do this mapping for front-end IP and port using the application manifest. I would like to conclude here by calling for your feedback and comments from the community. We would especially like to hear about your IoT use cases, or any use cases that need non-HTTP traffic. I've put up my contact information and the contact information of Shannon, who is the PM for this project. We would be glad to hear from you, and we're looking forward to adding this functionality to Cloud Foundry to make it a platform of choice for IoT. Thank you. We have four more minutes for questions, or I have a demo, whatever you choose. Yes, so the way we authenticate devices is using client certificates. And since this is going to operate at layer 4, we will be able to pass the client certificates all the way to the apps, and the apps will be able to authenticate using the trust store that they have. As for when: we should start off next week. I don't know if Shannon is here. Yeah, Shannon is here, if you could stand. We will get started next week. 
As soon as we have some stability, it will be part of the Cloud Foundry incubator and available for anyone and everyone to see. First up, it will be available as part of Lattice for you to play with and see how it goes. We have time. And this is against all the advice I got about not doing live demos, but I'm going to do it anyway. What I'm doing is a bosh ssh into one of the cells where I have already pushed an app, which is actually nothing but a netcat server. It just listens on port 3456, for anything and everything that you send to it. Oh, sure. Now I'm going to go into the container and show you the process that's running. This is the app: a netcat server listening on some port, and whatever it gets, it puts into a temp output file. I'm going to hop over here and open a connection. As I said, what you see first is the front-end IP and the port. And I'm going to say: Cloud Foundry Summit rocks. We received that here. So we'll get this, and obviously this is a POC that I put together for this talk. This will all come together in a nice way that you can use. I hope you give it a try and give your feedback; we'd be glad to hear from you. I'm hoping we'll be able to get this functionality out as soon as possible. Thank you.