So I'm Chris, I also work on the Istio project. Success here will really be a matter of: do I open up more GitHub issues? That would be success — understanding the workloads people are trying to use Istio with, and whether they are working well or not. So it's not so much Shannon and I talking; it's really about encouraging everyone else to talk to some degree. I also have an Etherpad set up at that link right there, IBM.biz/IstioBOF. If you have questions that you don't want to say out loud, you can type them in there, and I will notice and answer them that way. So at the end of this, I hope to have more GitHub issues to open, or just use cases that aren't working for you. And I plan to publish a blog post to summarize whatever we discuss here. And if we don't get any questions, we can either have a nap or try to make up some questions.

So if anyone wants to start with something, maybe a litmus test: who here has never heard of Istio at all? Raise your hand if so... no hands. Okay. So, a quick overview of Istio. There are a few talks; Shannon and Aaron gave one earlier. Essentially, what Istio is trying to do is push a lot of the application behavior you want to tweak out of libraries like Finagle or the Netflix OSS stack and into a centralized proxy that lives alongside your application. So if you want to retry a request from your application to another application, you configure some YAML describing the retries, that gets sent to the proxy, and the proxy acts on your behalf to retry those requests. That's just one feature. Another is having a central point for when you're trying to test a certain feature out and only want to expose it to certain users. So you write a rule that matches someone named Bob,
and whenever Bob's requests come in, he gets routed to the new version of your application you've just launched, and he gets to play with that version while everyone else goes through the original. No problems. Yeah.

So one thing that's come up in several of the discussions is: how do I get started contributing to this? Sometimes people come in with a given feature that they like and start out either with a pull request up front, or they'll open an issue with just a general sense of how they want to go about it, but it gets too detailed, and either it doesn't get looked at or someone will literally say, "this is too detailed, I need a design proposal." So the common path, which I know Shannon's been working with, is to start with a GitHub issue: put a few details in there about something you think might be broken, a change you would like, or just general questions about usage. Then expand it by creating a Google Doc or some other shared collaboration tool to draft your design proposal, and cross-link that into the GitHub issue so you provide a lot more context. GitHub issues can get a little difficult to follow thread-wise, so something like a Google Doc lets you comment inline and resolve things. Then once the general design is approved, work continues in the GitHub issue and you can start with your PRs. The PRs should ideally be small — the usual small open source changes. There's a limited set of reviewers on Istio, which we're always trying to grow, but that just ends up being the case: some people have an overview of one component, and we're slowly working to get more people reviewing. But going straight in with a pull request is probably not going to get the attention you really want until you explain the reasoning behind it.
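To make the overview concrete, the two features Chris described — per-route retries and exposing a new version only to a particular user like Bob — look roughly like this in Istio's routing API. This is a sketch, not from the session; the service name `reviews`, the subset labels, and the `end-user` header are placeholder assumptions.

```yaml
# Hypothetical Istio VirtualService: route the user "bob" to a new
# version, and have the sidecar proxy retry failed requests for
# everyone else. Names and values are illustrative only.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: bob          # only Bob's requests match this rule
    route:
    - destination:
        host: reviews
        subset: v2            # the newly launched version
  - route:                    # default rule for everyone else
    - destination:
        host: reviews
        subset: v1
    retries:
      attempts: 3             # proxy retries on the caller's behalf
      perTryTimeout: 2s
```

The first rule matches requests carrying an `end-user: bob` header and sends them to the new subset; everything else falls through to the default route, where the proxy transparently retries failures up to three times.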
For those here other than IBM: are you willing, able, or interested in contributing to Istio? Troy, I'm going to give the mic to you.

I don't actually know the complete problem space that Istio is supposed to solve. I just know from talking to Nino that it's on its way to replacing Go Router in Cloud Foundry — or that Envoy and Istio are replacing the routing layer in Cloud Foundry. And I'm interested in knowing whether that same service mesh and routing layer can be used to serve applications running in Cloud Foundry and applications running in Kubernetes, or whether you'd want to deploy them separately. So that was one of the things I had, and I wouldn't mind an overview, if you have one you can share, of the actual problem set Istio is supposed to solve.

Do you want to take any of those, or shall I? You can start with the Cloud Foundry hybrid thing. Sure. So we're definitely thinking about use cases for interoperability between workloads on Cloud Foundry and Kubernetes clusters, and we imagine Istio playing a role in facilitating those use cases, both application connectivity and security policies. The three primary value-adds of Istio, as I understand it, are security, traffic management, and observability. So: the ability to apply security policies across all the services in the mesh, whether those are apps on Cloud Foundry, services in Kubernetes, or some data store that runs somewhere else — if it's got a proxy in front of it, it can be made part of the mesh. The same applies to traffic management and observability, because Envoy has been built from the ground up to be an observable data plane. It emits a tremendous amount of metrics, and as a result can give various personas a view of the traffic in the mesh, the number of errors and successes, as well as the security policy in effect.

What have you found so far in starting to integrate this with CFAR? Has it been smooth, or...?
Are you anticipating big performance improvements? I keep hearing murmurs that that might be a thing too. Besides expanded functionality, do you expect other benefits?

Yeah, I'm going to hand it to Aaron, because he gave a talk earlier about our experience working with the Istio community, but in terms of performance we expect to see data plane improvements because Envoy, written in C++, is more performant than Go Router. Do you want to tell Troy a bit about our experience collaborating with the Istio community?

Sure. Quick disclaimer: we haven't been measuring performance much in terms of latency or throughput, but we do expect performance to become a focus if we ever notice issues or it doesn't live up to Go Router's numbers. I know the community itself has posted some metrics. I don't know what they are off the top of my head, but if you dive through the mailing list or the performance working group, you might find those numbers. We expect it to be comparable for the most part.

Yeah, I think in your session before, you mentioned that the community is testing with about 10K containers at the moment. I was just looking around for Surya from the performance working group; he was just here. Yeah, he was in the back, and then he must have left early to get some beer. But there is a dedicated working group within Istio focused on performance and scaling. They have a weekly call you can join, they publish results, and it's a collaboration between multiple companies who are building out the testing framework, adding things to measure, identifying bottlenecks, and prioritizing improvements. Yeah, and then on the performance side as well:
One of the reasons why Envoy itself, a component of Istio, doesn't publish performance benchmarks is, one, people shopping on raw benchmark numbers aren't its first target audience. And two, when you start to use a lot of the advanced traffic shaping and networking features of Istio, in some cases it's not meaningful to compare what you had before, performance-wise, versus what you have now, because it might be faster in other ways. For example, maybe before, you were able to get a steady throughput at some rate, but if you had a failure, that would skew the entire statistic. Part of what Envoy offers is being able to steer traffic away from bad backends to keep a constant SLA. So you can measure the happy case both ways and get some performance numbers out of that, but some of the advanced features are really the things that could improve performance in other ways, if that makes sense.

I thought you also had a question about the experience of integrating Istio with Cloud Foundry, beyond just performance. Did you want to know about the current state of the integration? Yeah, sure. So you may be aware that six or eight months ago we put an Envoy sidecar in every container. That's serving a limited but powerful use case at the moment: it's statically configured, and it leverages the instance credentials generated by Diego to terminate TLS for ingress requests from Go Router. So the Go Router-to-container traffic is encrypted in flight — and actually that was a secondary benefit. The primary reason we did that work was so that Go Router could use the identity in the certificate to guarantee it's making a request to the right backend, which gives us consistency in the face of control plane failure, where the routing table may be out of date. Aaron, do you want to talk about our collaboration? Sorry, did we address that already? All right. Who else is curious about Istio?
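As a side note on the identity story just described: in upstream Istio, the analogous behavior — encrypting service-to-service traffic in flight and verifying workload identity via certificates — is mesh mutual TLS. A minimal sketch; note that this `PeerAuthentication` API postdates the period discussed here, and the resource name and namespace are assumptions.

```yaml
# Require mutual TLS for all sidecar-to-sidecar traffic in the mesh.
# Applying this in the mesh root namespace (istio-system by default)
# makes the policy mesh-wide.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # plaintext connections to sidecars are rejected
```

As with the Diego instance certificates, the client side verifies the server's certificate identity, so a stale routing table can't silently deliver traffic to the wrong backend.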
Anyone currently working on applications that are not web-based? Non-HTTP protocols? Please.

So, TCP routing in Cloud Foundry — no prejudicial remark here — seems like kind of an afterthought, and maybe it's our implementation, but it's an awkward thing to use. What kinds of things can we do with the new framework? Again, I'm coming from the CFAR perspective, but also for Kubernetes: what will it improve for basic TCP connections, or odd or different protocols?

Could you tell us more about how TCP routing in Cloud Foundry is currently awkward? It has to do with our implementation. We're running this in a containerized environment, and because of some limitations in Helm and the way we deploy it, we actually have to pre-provision the number of ports we want open for TCP routing. We would like that to be more flexible. I'm already over my head with that question, so...

Do you mean like a range of ports — this can listen on a range of ports? Yeah, we have to specify a range of ports that will be open for TCP routing. It would be nice if that were dynamically configured, or if there were just more flexibility to magically expose applications on whatever protocol they happen to need.

So the challenge with non-HTTP protocols is that in many cases, especially when the client doesn't support SNI, you can't make a host-based routing decision. The routing decision needs to be based on a port, and the platform routers, whatever they are — Go Router or TCP Router — are, for horizontal scalability, very likely not internet-facing. So you want a load balancer in front of them. So without provisioning load balancers for each route — which I would love to do, but isn't possible on some infrastructures, where for example your F5 is your infrastructure load balancer — I don't know of a way around opening some range of ports on the load balancer.
I'm familiar with that, because this has come up in Kubernetes often as well: you can't specify a range of ports. I've seen this most often in the telco space, where you're operating a sort of gateway whereby everything can potentially route to one instance, just because you have multiple things mapped onto it. It's not that each individual service listens on only a few ports; it could listen on a huge range, just because that's what you were given as a provider. So instead of writing a thousand lines of YAML to list each individual port that needs to be opened, you want to express a range. That's something that is not currently in the Kubernetes API. And while Istio doesn't hard-depend on Kubernetes — obviously it's working with Cloud Foundry, so we are operating on multiple platforms — the API is still based on the Kubernetes API. So if that doesn't exist there, we'd have to extend it ourselves. Part of the issue is that it needs to get fixed in the Kubernetes API first. I've also heard of cases where containers operating on a huge range of ports like that have had problems in the Docker ecosystem as well.

Chris, even if there were support for a range of ports, as you described, in Kubernetes and as a result in Istio, wouldn't taking advantage of it require that the Kubernetes nodes be directly exposed to clients? Assuming you have a load balancing tier in front, those ports would still need to be opened on the load balancer, and that's the primary challenge we deal with: how do you open a range of ports on the load balancer on GCP or AWS or Azure or some other public cloud? You could theoretically provision a load balancer for each service, so each has the full range of ports.
That would be ideal, but on on-prem infrastructures — where, I dare say, the majority of Cloud Foundry operators are running their platforms — that's not an option. Unless they're using an SDN; you could dynamically provision load balancers on NSX, for example. But if you want to cover all use cases with a single solution, then you have to open that range of ports on a load balancer. Right. In the Kubernetes space, there are a few on-prem bare-metal load balancer implementations that feasibly could offer this if there were a way to map it back into the Kubernetes infrastructure. So it feels like there are multiple things at play for this one seemingly simple ask: just give me an array, not a string. What's so hard? If you're really interested in the status of that, I can point you to some GitHub issues to follow; unfortunately, I'm not sure of them at the moment. And Shannon's right on the cloud provider front: if you're using a hosted offering, every Kubernetes or Cloud Foundry host that offers some cloud-specific load balancer is also going to need to support it. So you'll need general agreement from all the cloud provider perspectives as well.

What else have you got, Troy? Anybody working with any IoT sorts of applications? Would any of you like to be able to run workloads on Cloud Foundry that require UDP? Yes, please. What are you using UDP for? For anything — customers. I'm not using it for anything myself; I just have to provide a platform for customers that have, you know, whatever. A Minecraft server, I don't know. I often don't get that level of detail — "this is a particular kind of application" — it's just a checklist item: we need UDP routing. You might know we had a UDP router in an older version of Cloud Foundry that I worked on in a previous life.
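To make the TCP discussion above concrete: today, exposing a raw TCP service through an Istio ingress gateway means declaring each port individually, which is exactly why a port-range field would help. A sketch, with hypothetical names and ports:

```yaml
# Expose one raw TCP port through the Istio ingress gateway and route
# it to a backend service. Every additional port needs its own
# "servers" entry and its own tcp match — there is no range syntax.
# Service names and port numbers are illustrative.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tcp-echo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400           # routing decision is port-based, not host-based
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
```

Each extra TCP port means another `servers` entry here, plus another opening on whatever load balancer sits in front — hence the appeal of being able to express a range.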
And yeah, that was a popular tick box with customers who were running a variety of applications. Excellent. Anyone else? So other than yourself as a provider, is everyone else mostly working with web-based services? Any gaming platforms or high-frequency trading platforms? Anyone operating those types of environments? Everyone's using the web? Anyone using HTTP/2 at the moment, or are developers clamoring for it? Yeah, IPv6 is always a good one too.

This question is not really related to what you asked. I'm from the Renault-Nissan-Mitsubishi Alliance, and we're using this for IoT: we have connected applications deployed on Cloud Foundry. Right now we are investigating Kong, which you may know as an API gateway. The problems we face are that we need to rate-limit the APIs, and there are some limitations of the Azure load balancer that can be solved by an API gateway. So those are the two limitations I see. The question is: is Istio overkill, or is Kong better?

Sure. I have not personally used Kong, so I can't speak to it, but I can tell you that exact use case is the reason the Weather Company — which is under IBM now, though they were using AWS prior to being acquired — noticed the same issue with the AWS load balancer. They couldn't figure out how to retry or throttle requests; it was all or nothing. So what they're currently using in production for a few of their services is an Istio gateway — "gateway" just being the overloaded term we seem to use for some sort of edge load balancer. And what they're able to see is a nice graph that draws where all their traffic is going. I think they're using a Netflix open source project called Vizceral to view it.
But during times like a hurricane, or any number of reasons why everyone would be looking at weather.com, things would just fall over, because requests were round-robining across all the nodes, or one node was busy but traffic kept being sent to it. They have since moved to Envoy, and they're seeing much better performance because they can choose the type of load balancing specific to each service. For example, if your service has an issue, the people calling it or depending on it think their requests aren't going through, so they try again immediately — which, if your service is under stress, definitely doesn't help. It's the thundering herd problem. What you want to do is back off intelligently. So they're using that sort of functionality through Istio by stating: I know the service can handle this much; if I cross this threshold, open the circuit breaker and start routing to the healthy instances, or start serving error codes back until the service recovers. And you can customize this for each of the applications you have nested under there: different retries for each of them, different load balancing schemes. Maybe you have an application where round robin doesn't make sense, because some queries can be very expensive — loading all of the entries in a database versus just one is a very different cost, depending on how long it takes to come back. So if you're round-robining and it's simply that backend's turn again while it's still chewing on the expensive query, you're immediately going to get backed up.
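The circuit-breaking and load-balancing behavior just described maps onto Istio's `DestinationRule`. A sketch — the service name and every threshold here are made up for illustration:

```yaml
# Hypothetical DestinationRule: pick a load balancing scheme per
# service, bound the connection pool, and eject unhealthy instances.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: weather-api
spec:
  host: weather-api
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN     # avoid round robin pinning a backend stuck on an expensive query
    connectionPool:
      http:
        http1MaxPendingRequests: 50   # beyond this, requests fail fast instead of queueing
    outlierDetection:                 # the circuit-breaking behavior described above
      consecutive5xxErrors: 5         # errors before an instance is ejected
      interval: 30s
      baseEjectionTime: 60s
      maxEjectionPercent: 50
```

When an instance trips the outlier thresholds, it is temporarily ejected from the pool, so traffic flows to the healthy instances, and requests over the pending limit are rejected immediately rather than piling onto a stressed service.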
So they did exactly that with their weather service, and they've been having pretty good success with it. That's the only part they've migrated to Istio so far — they're working on everything under it — but the edge load balancer they were using AWS for is working well, and they're capturing a lot more metrics out of it than they were able to gather from whatever AWS provides.

This reminds me of an opinion I've been developing, and this goes back to Troy's question about what problems Istio solves. Given the problems it does solve and its capabilities, Istio looks a lot like an API gateway to me — a distributed API gateway. It's pluggable, and even API gateway providers are developing strategies to become policy engines enforced by Istio and applied by the Envoys. So when I talk to customers who say, "well, I've got this strategy with API gateway provider X, Y, or Z — how does Istio fit in?", there's a story there, but over time that story might become: Istio is your API gateway, and you can bring your policy engine, or many of them.

On the time check, we probably have a couple of minutes, and I don't think there are any sessions after us. Does anyone have any further things they'd like to talk about? In my mind, this has been successful, because we learned a bit; I wrote a few things down in that Etherpad. If you have more questions throughout the week, that Etherpad will still be up — just make sure you're following the conference code of conduct and not messing with it in any negative way. But it's still clear that with Istio, many people are still wondering what it's going to do for them. Contributing was one topic I heard earlier, though I didn't hear it here. As well as the API gateway question — maybe that is something we need to be talking more about.
Is it a competitor, or is it an API gateway, if you want to think about it that way? Shannon, any other points? I think it's beer time. Okay. Well, thank you everyone for coming. Like I said, that IBM.biz link will stay up.