Hello everyone. I'd like to welcome you to our next session, Hitchhiker's Guide to Application Connectivity. My name is Amber and it is my pleasure to introduce our speaker for today's session, Mark Cheshire, a Senior Director of Product Management here at Red Hat. A few logistics before we get started. If you have questions during the session, please submit them in the chat window and we will try to cover them at the end of the session. Or we will make it a point to follow up with you after the event. A recording of this and all the sessions today will be available after the event, posted on the Red Hat Developer YouTube channel. We also encourage you to join us for the live chat during the break on the main stage for live dialogue with Red Hatters. And with that, let me turn things over to Mark.

Hi, thank you so much Amber. It's really great to be here, and I hope many of you were in the keynote just now with that absolutely fantastic and inspirational presentation. What I'm here to talk about today is application connectivity. And when I look up at the stars, and clearly looking up at the stars is synonymous with Hitchhiker's Guide to the Galaxy, it inspires me to think of where things have come from. It takes me back over 10 years, to 2008, when I joined a tiny startup in Barcelona, Spain, where I'm based. That company was 3scale. And 3scale had a dream to make it easier to connect web services across the internet. We didn't have a term for what we were creating at that time, but it was essentially the first seeds of the market that we know today as API management. There was one other company at that time creating this part of the market, called Mashery, and many other companies have since joined. And as you know, 3scale is now part of Red Hat. The industry has evolved tremendously in these just over 10 years, and now it's an amazing time to be an API developer.
There are so many rich tools and capabilities you have at your fingertips as an API developer, whether you're creating APIs or using APIs. And it's just a tremendous time to be able to take advantage of everything that's available out there. Now, although the world of APIs and API management is so fantastic, you'll notice that I don't talk about APIs in the title here. I talk about application connectivity, and you may think, well, how are the two things connected? So I just want to let you know that it doesn't mean that APIs are going away, but we are on the cusp of a new era. Things are changing in the way in which we need to connect applications. Although REST APIs are ubiquitous, we've gone beyond the world where you can rely on a REST API to solve every integration problem, and we need to think a lot more holistically. So that's the goal of my talk today: to share a more holistic approach to thinking about how you connect applications. Of course, the inspiration for this, "Don't Panic," comes from Douglas Adams. And it's in Hitchhiker's Guide to the Galaxy that we saw a fantastic approach to pulling in the best resources out there, pulling innovation from across the community. The way that was done was that travelers all across the universe would pool their tips when traveling and put them together in the form of the Hitchhiker's Guide to the Galaxy. If you think about it, that's actually the very first example of open source development. Now, I work at Red Hat, and Red Hat is synonymous with open source development. Everything we do is 100% open source. And that means all of these amazing community projects around the world of application connectivity are projects where Red Hat commits engineering resources. We drive the innovation forward working together with the community, and every line of code that Red Hat creates, we contribute back to those communities.
And in the spirit of Hitchhiker's Guide to the Galaxy, this is the best way to move innovation forward. With that, let's have a look at why application connectivity is important. There are several elements which drive the importance of application connectivity. First of all, businesses. When businesses run applications in different silos, they're not capturing the full value of those applications. The real value of applications is realized for the business when they connect those capabilities together and when they connect that data together. Applications also increasingly need to be portable. It could be for resilience needs, where you're trying to run applications across multiple clouds or multiple clusters. Or it could be driven by data: you've got gigantic data sets that are driving your AI models and machine learning capabilities, and in those cases sometimes you want to move the logic close to where the data is, because it's too expensive to move that data. So there are a variety of reasons for application portability. Then, when it comes to application and network concerns, traditionally companies have tended to take a bit of a siloed approach in this area as well: the network team would look purely at how to solve network connectivity concerns, and the application teams would look at, well, how do I connect one application to another? Things are getting so complicated that it's important to take a more holistic approach and for these teams to work more closely together. The last area is that as we integrate applications, it becomes a lot more than REST APIs. You're increasingly seeing asynchronous connections as a way to connect applications, and these are becoming more and more important. There are also different types of protocols, whether it's gRPC or GraphQL. So there's a whole variety of different ways to connect applications, and application connectivity has to embrace all of these.
Let's have a look now at how to organize. When you're driving any type of change, it's critical to look at the organization first. Now, probably the best example of a team in all of history was the crew of the Heart of Gold in Hitchhiker's Guide to the Galaxy. You had Zaphod Beeblebrox, the pilot; his trusty second-in-command, Trillian; and who can forget that very depressed robot, Marvin. What's important is that these people each had a very different role. And when you look at the different layers of the stack for application connectivity, you have the networking layer, the application layer, and the business layer, and you have different stakeholders responsible for each of these areas. These stakeholders have to be able to collaborate closely together to make sure they're taking a holistic approach to how to connect applications. So that's a really critical first step. It's probably one of the biggest failure points I see with organizations as they're trying to do a better job on application connectivity. So what types of tools could these different teams use to collaborate more effectively? That's what we'll go into next. Now, in order to be able to make a good evaluation of what options fit any organization, it's good to first come up with your criteria: how are you going to judge the different options and their appropriateness for the requirements you have? I've grouped the assessment criteria into four categories: accessibility, security, discoverability, and governability. Let's look very quickly at these four areas. First of all, accessibility. This covers what type of connectivity you're looking for. Is it external access, making connections across the organizational boundary, or is it internal access, which is typically known as east-west communication? Even inside the organization, you want to look at inter- versus intra-domain traffic.
The types of requests could be synchronous, request-response, or they could be event-driven. And you also have to consider the performance and resilience requirements. On security, there have been lots of important improvements over the last decade. One of the key things is that sometimes companies think they can deploy a simple integration endpoint and layer in security afterwards. I have to caution, I almost never see that work. You need to think about putting security in place right at the very beginning rather than leaving it as TBD for later in the lifecycle. Then on discoverability, here the world of application connectivity has so much to learn from the world of APIs and API management. It's critical that you have a good way to engage developers, and developer portals have been a very big part of the tremendous take-up and adoption of APIs. A key element of developer portals was that you had OpenAPI specifications as a standardized way to document APIs and to make it easy to use interactive documentation to try out API endpoints and explore and experiment. Schema registries are also starting points, either for human discovery or for allowing machines to discover services that they can use as part of their tasks. Then going on to governability. What's important here is that the real world is very much different from the world of demos. When you're doing just a simple hello-world demo, it's trivial; you can ignore all of these governability requirements. But the real world is not like that. You have to think ahead about what the lifecycle is going to look like, how you're going to handle the different access requirements, and make sure that you're tracking usage and feeding insights on usage back into how your services and application connectivity points evolve over time. So we've seen the four types of criteria. Now let's have a look at the solution options.
When it comes to solution options, if you think about cocktails, there is absolutely no better drink in the world, or the universe, than the Pan Galactic Gargle Blaster. That's absolutely the best possible drink. When you look at the world of application connectivity, it's not so clear cut; the lines between what's the right solution are a lot more blurred. We'll start with a quick whirlwind tour of the different types of options that you want to take into account, beginning with network concerns: ingress and egress. First, a very simple use case is ingress, the situation where you've got an application running in a cluster and you want to give some outside consumer access to that application, and you do that by setting up an ingress router. On the other hand, you may have that application trying to make a request to an external service, and that all goes through an egress router. Very simple use cases to start off. Then, layering in a little bit of complexity: for cost management reasons or for resilience, you may want to have the same application running on two different clusters. In this case you can put a load balancer in front of those clusters and route traffic between them. It could be set up in an active-passive configuration, or in an active-active configuration to balance traffic evenly, depending on your requirements. Then you can notch up the complexity one step further and look at, well, you've got multi-cluster, but how about ensuring that you have resilience against any single cloud provider failing? Here you can set up multiple cloud providers, or it could be your own private data center in addition to a cloud provider. Then you have an external load balancer which first takes care of routing traffic across the different platforms you have, and then within each platform you have a load balancer to route between clusters. So these are all approaches to handling, effectively, north-south traffic.
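The active-passive routing just described can be sketched as a simple health-check-driven failover. This is a minimal illustration only; the cluster names are made up, and a real load balancer would get health state from its own probes rather than a dictionary:

```python
def pick_cluster(clusters, healthy):
    """Active-passive routing: send all traffic to the first healthy
    cluster in priority order, failing over only when it is down.

    `clusters` is a priority-ordered list of cluster names and
    `healthy` maps each name to its last health-check result.
    """
    for cluster in clusters:
        if healthy.get(cluster, False):
            return cluster
    raise RuntimeError("no healthy cluster available")

# Normally all traffic goes to the primary cluster...
print(pick_cluster(["eu-west", "us-east"], {"eu-west": True, "us-east": True}))
# ...and fails over to the passive cluster when the primary goes down.
print(pick_cluster(["eu-west", "us-east"], {"eu-west": False, "us-east": True}))
```

An active-active setup would instead spread requests across all healthy clusters, for example round-robin, which is the "balance traffic evenly" variant mentioned above.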
Then let's have a look at service-to-service, or east-west, traffic, and what options you have here. First of all, there's connecting to a service within a cluster, and Kubernetes makes this super easy. You just use the service name, and that routes directly to the service running in a different pod, or it could be multiple pods. You don't have to worry about host names or IP addresses or any of that stuff. So these are really great capabilities of Kubernetes. You may find that you want a service to connect with another service running on a different cluster. In this case, you have to go through the egress and ingress routers to be able to connect the two together. Very typically, this is where a REST API would come into use to expose that service, because you're exposing that traffic over the public internet for it to reach across those two clusters. Let's say you don't want to expose that traffic over the public internet; then you've got the option to directly tunnel traffic across those clusters. You may find, though, that traditional approaches of using VPNs are just too complex and too lengthy to get set up. So you may reach for some of the more modern approaches. One would be setting up direct cluster integration between two clusters with a multi-cluster service mesh. In the case that you don't want to join up the entire networks across clusters, another approach is to directly tunnel from one service endpoint to the other service endpoint. And there are brand new capabilities here: there are a couple of sessions at this event today talking about Skupper, the community project, and Red Hat Service Interconnect, the brand new product based on Skupper. That allows you to directly tunnel between any two service endpoints. Service one sees service two as if it were a local service within the same cluster, and you don't have to worry about setting up VPNs or any of those things. So these are some of the options around east-west traffic.
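To make that in-cluster naming concrete: Kubernetes cluster DNS resolves names of the form `<service>.<namespace>.svc.<cluster-domain>` to the Service's cluster IP. A small helper can build such URLs; the service and namespace names below are hypothetical, and `cluster.local` is only the default cluster domain:

```python
def service_url(service, namespace="default", port=80,
                scheme="http", cluster_domain="cluster.local"):
    """Build the in-cluster URL for a Kubernetes Service.

    Cluster DNS resolves <service>.<namespace>.svc.<cluster-domain>
    to the Service's cluster IP, so callers never deal with pod IPs.
    """
    return f"{scheme}://{service}.{namespace}.svc.{cluster_domain}:{port}"

# A pod in any namespace can reach the "inventory" Service in "shop":
print(service_url("inventory", namespace="shop", port=8080))
# http://inventory.shop.svc.cluster.local:8080
```

Within the same namespace, the short name alone (`http://inventory:8080`) also resolves, which is what makes cross-pod calls feel like local calls.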
Let's have a look now at application and data concerns. At the application level, the first big question is: do I go with request-response, that is, REST APIs, or event-driven? As I mentioned at the beginning, APIs are clearly ubiquitous. But when I look at Red Hat customers, and when I see what's going on in the world at large in cloud-native development, clearly the growth is around event-driven. The world is increasingly real-time, and businesses want to respond ever faster; event-driven supports them on that journey. So that's the first big question to tackle when looking at connecting applications. The next thing is to take advantage of enablers to do this integration. You can use integration patterns, which give you reusable approaches for the common scenarios that you see. You want to take advantage of out-of-the-box connectors: for example, to connect to a SaaS endpoint such as Salesforce CRM, make your life easier with connectors rather than hand-coding those fresh each time you come across a new endpoint. And then, thirdly, the service that you're connecting to may not expose data in the same format your application is expecting. In that case, you'll want to transform the data through some kind of mediation broker and translate it into a form that you can consume more easily. So these are all integration enablers to help accelerate connectivity. Then let's look at a few more examples from the world of APIs. Starting with API gateways: these are synonymous with API management, and they're a key approach to managing access to an API endpoint. This is very well understood, probably some of the oldest technology in this area in terms of maturity. Somewhat newer is the notion of service mesh, and we're seeing increasingly rapid adoption of service mesh on Kubernetes; probably the most popular approach is based on Istio.
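The mediation step described above is, at its core, a small translation layer between two data shapes. Here is a minimal sketch; the external CRM field names and the internal contact format are both hypothetical, invented purely for illustration:

```python
def to_internal_contact(crm_record):
    """Translate a (hypothetical) external CRM payload into the shape
    our application expects: flat keys, a single full-name field, and
    a normalized lower-case email address."""
    return {
        "name": f"{crm_record['FirstName']} {crm_record['LastName']}",
        "email": crm_record["Email"].lower(),
        "account_id": crm_record["Account"]["Id"],
    }

# Example payload as the external service might deliver it.
crm_record = {
    "FirstName": "Arthur",
    "LastName": "Dent",
    "Email": "Arthur.Dent@example.com",
    "Account": {"Id": "acct-42"},
}
print(to_internal_contact(crm_record))
```

In practice this mapping would live in the mediation layer (a broker or integration route) rather than in each consuming application, so that every consumer sees the same internal format.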
And the key element of service mesh is having a sidecar as essentially the gateway into each application service. These sidecars, rather than being centralized as in the API gateway example, are distributed to the extreme: you have one sidecar for each application service that you're running. Now, the important thing to take into account is that while service mesh is typically used for east-west traffic and API gateways are typically used for north-south traffic, there's definitely an area of overlap. Larger enterprises that are running hundreds or maybe thousands of microservices benefit from treating traffic between their internal organizational boundaries, for example between sales and finance, as if it were crossing a north-south boundary. So they put API gateways there, and that way they can establish much stricter management of service access across those organizational boundaries. When we look at all these technologies around the application layer, and how to connect and route traffic at the application layer, what we see is an increasing number of different routing capabilities. Red Hat is working together with partners and the community to bring these gateway technologies closer together and to make it easier for the same common infrastructure building blocks to achieve different purposes depending on the user, whether it's a network operator trying to handle network traffic routing or an application owner trying to manage the routing of API traffic. So we're helping to bring these technologies together. Okay, so we've seen all the options. Now you're asking: what is the answer? How do I choose amongst these options? Well, there isn't one technology that is the answer. What you need to do is look at these as a matrix and understand how to evaluate the different choices in terms of their pros and cons.
Now, I'm not going to give you the full matrix here, but what I will do is share how you can go about creating this matrix to meet your very own needs. The first thing is to look at the criteria. I gave you some guidance on how to structure the four categories of criteria for connectivity requirements, but it's up to you to decide what the detailed requirements are within each of these categories. The next step is to define the connectivity options. You've seen an overview of the connectivity options here, and you may have others that you want to add to the list. The good news is that Red Hat provides full support for everything I discussed earlier, but it may be that in certain areas you want to use a Red Hat partner to fulfill some of those capabilities. Then, based on those two dimensions, you can evaluate what's the best fit to meet your requirements. Okay, we've seen how to go about finding the best answer, but you may want to know what the question is if the answer is 42. So the takeaway, the call to action, is to use this as a starting point to build a more structured approach to evaluating your connectivity options. And as you do so, I urge you to take two learnings into account. The first one is from the world of API management: whatever you do as you're looking at application connectivity requirements, enforce very strict standards around the use of schemas. It's going to help you so much, first of all in establishing a common lingua franca, avoiding the need for solutions like the Babel fish that Douglas Adams used in Hitchhiker's Guide to the Galaxy. The second thing is to evolve developer portals. Today, developer portals are great for REST APIs, but they need to evolve to become a unified integration hub for the different types of connectivity endpoints.
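One way to make the two-dimensional evaluation concrete is a simple weighted scoring sheet: criteria down one axis, connectivity options down the other. The weights and scores below are entirely illustrative placeholders, not an assessment of any real product:

```python
# How much each criterion category matters to *your* organization
# (illustrative weights only).
weights = {"accessibility": 3, "security": 4,
           "discoverability": 2, "governability": 3}

# Each connectivity option scored 1-5 against each criterion
# (again, made-up numbers purely to show the mechanics).
options = {
    "api_gateway":   {"accessibility": 4, "security": 5,
                      "discoverability": 4, "governability": 5},
    "service_mesh":  {"accessibility": 5, "security": 4,
                      "discoverability": 3, "governability": 4},
    "direct_tunnel": {"accessibility": 3, "security": 4,
                      "discoverability": 2, "governability": 2},
}

def best_fit(options, weights):
    """Rank options by weighted score, best first."""
    score = lambda scores: sum(weights[c] * scores[c] for c in weights)
    return sorted(options, key=lambda name: score(options[name]),
                  reverse=True)

print(best_fit(options, weights))
```

The point is not the arithmetic but the discipline: writing the matrix down forces each team to state its requirements explicitly and makes the trade-offs visible to everyone.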
The second area is to look at how to expand the details in this matrix: think about your own organization, and how you can apply this to make it relevant there. And particularly as you do that, use this matrix as a way to improve communication across different teams and avoid those teams working in independent silos. So that's my takeaway call to action for you from this talk. I hope it's been useful, and thank you so much for all the fish. Do we have any questions?

Awesome. Thanks so much, Mark. Let me go into the comments. Just as a reminder, everyone, please put any questions in the comments section of the Q&A, and we will give you all a moment. Awesome. Well, if there's nothing else, thank you all for joining us today, and we hope that you've enjoyed this session and are taking away some valuable insights. As a reminder, this session and the others will be made available soon on our Red Hat Developer YouTube channel, and be sure to hang out here for our next presentation on transparent web platform decoupling with Multiplying Architecture. Thank you all. Thanks very much. Bye-bye.