All right, well, good afternoon and welcome, everybody. Thanks for hanging out late here; this is the last session before the aquarium, right? That should be pretty fun, I'm looking forward to it. My name is Randy Abernathy, this is Choosing the Right Technology for Your API, and we've got a lot of fun stuff to take a look at. As I was mentioning a second ago, this is the tutorials track, there's a lot of hands-on here, and this thing is packed. So I'm gonna go through stuff, and you're probably gonna have to pick and choose which labs you wanna try and what you wanna experiment with. We're gonna cover a lot of material and try to get to everything that's in the roster as we move forward.

I work with a company called RXM; we're a cloud-native training and consulting shop. I've been involved with API kind of stuff for quite a while. I really got enamored with Apache Thrift probably a good ten years ago, maybe longer, and have been involved with that project, but I'm a super big fan of technology in this space in general.

So here's how we're gonna work our way through this. We're gonna talk about the big-picture API background: what do you need out of an API? What are some of the traits of a good API? Then we'll take a look at a bunch of representative technologies. There's other stuff you might run across when it comes to building APIs, either for the internet or for microservices internally in the back end, but this is a pretty good lineup of the things you might see. I'd say it covers somewhere between 50 and 90% of the technologies you'd actively run into in commercial use, at least presently. And there's some legacy stuff out there as well. We'll start off with number one and work our way down as far as we get, and when the bell rings, we'll call it a day.

All of the slides are up on the schedule, as I mentioned. The links are on this page as well, and the labs are there too, so you can pull them down either way. The one thing I will reiterate: if you use that link instead of just pulling it off the schedule, there's a markdown version also in that bucket. All you have to do is change the .pdf extension of the URL that comes up for the lab to .md, and that'll give you a markdown version of the lab. You can open that up in your favorite editor and copy and paste, whereas if you copy multiple lines out of the PDF lab, you're gonna get weird hidden characters in your editor and it's not gonna work. Something to think about.

All right, so let's jump in; we are on the clock. A short history of API tech. There have been a lot of different API initiatives over the years, but it's really kind of interesting: we still use RPC today, and it was probably one of the first API technologies that was really conceived. Around 1980, Bruce Jay Nelson coined that term in the early ARPANET days, and Xerox came out with a commercial product probably shortly thereafter. Then in 1984, SunRPC was created, and we are still using that dang stuff today. If you have any kind of NFS going on, or other types of things, SunRPC is under the covers there.
Then we got into this whole enamored-with-objects scenario where everybody wanted to do everything with objects, and we thought, hey, since objects are so cool inside these big monolithic pieces of software we're running, maybe it would be neat if we accessed them remotely. And then we realized that really wasn't very neat at all. It didn't scale very well, and it went the way of the dodo.

Then the whole service-oriented architecture thing came along, and we started thinking, hey, breaking up bigger applications into services (granted, maybe bigger services than we use today) is probably a good idea architecturally, as computing becomes commodity clusters of systems connected over the internet and all that. So SOAP came along to help us there, and then we realized that SOAP, trying to take the RPC idea and plant it on top of an HTTP-based world, didn't work super great, and we moved on to REST.

REST is probably the oldest of the technologies still in very, very heavy use today, and it stands the test of time because it was designed for the world we're in. It's one of the best API technologies you could select for operating over the internet because it leverages the infrastructure of the worldwide web. Every company that has a proxy or a reverse proxy caching your stuff: you're getting that for free. They're paying for all that infrastructure and memory and caching that make your application faster, not you; you're just hosting the back-end piece. So there are a lot of really wonderful things about building RESTful APIs.

The next thing you start seeing is people wanting to do this same kind of API stuff in their back end, where they're starting to have lots more services communicating with each other. That stuff needs to happen a lot faster, so they need something that's not as text-oriented: something binary, something that can serialize and deserialize really quickly, something with a little more aggressive API-contract approach. And so we get protocol buffers, internal at Google. Shortly thereafter, in that era, Facebook would look for smart people at Google and hire them to get the technologies they wanted. That's a pretty good idea: hire a couple of guys and you get a Thrift. So Thrift was developed at Facebook, but Facebook open sourced it right away, and it became Apache Thrift. Shortly thereafter, protocol buffers got open sourced, because they're like, hey, this looks a lot like ours, and next thing you know, protocol buffers is out. Then Apache Avro shows up shortly after that; we'll talk about that a little more when we introduce Kafka at the end.

WebSocket shows up to give us a better streaming experience on the worldwide web, and then shortly thereafter we end up with some HTML5 stuff and server-sent events, and then we get into this cloud-native era. The whole idea of gRPC was really predicated on HTTP/2, so those came out hand in hand. HTTP/2 shows up in 2015, and gRPC comes out more or less depending on HTTP/2. Now, could you implement it over something else? In principle, maybe, but nobody really does. It's a modern RPC system that leverages the benefits of HTTP/2 and sort of needs those benefits too. It doesn't work on HTTP/1.1.
And then Google donates gRPC to the CNCF. GraphQL shows up around 2012: Facebook created it, open sourced it in 2015, and then the GraphQL Foundation was established as a new home for it in 2019. That's another popular initiative. So you can see the progression: remote procedure call, object invocation, resource-oriented approaches, streaming, and then down here the graph stuff and artificial intelligence, where we're getting more into a query kind of scenario, asking questions that are a little more open-ended and flexible. The world's changing and there's a lot happening, but the idea here is to take a look at the technologies that are out there and compare and contrast them a little bit.

There are a lot of pros and cons you could list for APIs, but these are some of the big bullet points we think about as engineers when we're evaluating a piece of technology for an API.

The first thing is headers. We need to be able to communicate with the platform out of band from the application communications, so we need a key-value-pair mechanism for doing things like telling the caches what to do with things, providing credentials, authentication, and so on. The application developer really doesn't wanna mess with all that; they just want it to work so their application can do its thing. The payload is the application piece, and the headers take care of the rest. If your API technology doesn't support headers, a lot of people are just gonna turn their nose up and look for something else, because it's pretty fundamental these days for things to work right.

Next, polyglot. The days of everything just works in C are over, of course, and the days of everything just works in Java are over. We're in a world with all sorts of languages, where many, many organizations use multiple languages all day long, and we need to make sure those services can communicate. So APIs have gotta be polyglot.

Then, a robust interface definition language. Contracts are at the heart of all of this, and when you're building big systems, these contracts that describe what a service is gonna do and the kinds of things you can pass in and out of it are really, really important. They're foundational to the way you architect your system. On the last large project I worked on, when we would hire a new engineer, we would give them our interface definition language and let them look at it without looking at any of the implementation, and they would come up to speed on what was going on in our platform so much faster and so much better, just from that IDL, without the implementation details.

Now, the IDL we were using back then was MSRPC-style IDL, and it had a lot of issues. One of the biggest was that it didn't evolve. If you needed to change something, you broke the world. You had to recompile everything because there were all these UUIDs flying around: oh, hey, there's one extra function in there? Sorry, that's not the right interface. So with evolution, we need the ability to add parameters to functions that already exist, and we need the ability to add functions to services.
We need the ability to pass new things back that we weren't passing back before, and clients that get stuff they don't recognize should just ignore it. That's how we've gotten used to using JSON, and it has propagated into all of the more popular modern interface technologies.

Streaming is another pretty killer feature. When you're a client communicating with a server, you can send it stuff whenever you want; you can make a call whenever you like. But in a lot of environments, the server's not allowed to respond to you until you actually make a call to it, so we have to do all sorts of weird monkey business like polling and whatnot, and that's not very efficient and can cause problems. So streaming is a big piece of the puzzle, and I always thought this was the killer feature of gRPC: the fact that you could make regular requests from the client, but you could also set up streams that come back to you on an event-driven basis. That's magic right there, and it's one of the reasons gRPC is driving so much in this space. We're in the cloud-open track here, with Kubernetes using gRPC everywhere, Docker using gRPC to talk to containerd, and the list goes on. It's a really, really popular technology for this space.

Then broad adoption: obviously you need to have support, good documentation, things like that. And then speed is really important too, for a lot of things. So what we'll do is flip through a bunch of technologies and look at how they rate on these points. These are my ratings, and I'm just gonna give each a "doesn't really address it," "not bad," or "good" kind of rating, and we'll see how they compare.

Now, this is a pattern I'll just throw up super fast, because it's the curiously recurring communications pattern, as we call it at RXM, and we see it a lot. It's curiously recurring, a lot like the CRTP pattern in C++, if you're an older geek you might recall that. The idea is that we see a lot of systems built like this, where REST is used in the outside world because it's super well understood: anybody can digest it really fast, they can understand your APIs quickly because it's universal, everybody's got good tooling for it, and it leverages the infrastructure of the internet that's already out there, which you get for free.

But when you go into the back end, you don't have web servers and proxies and reverse proxies and all that other stuff. You've got a network and a bunch of computers, so if you want the back-end pieces to communicate really quickly, there may be other technologies to look at, and this is where RPC shows up. All the big guys are here: Google had Protobuf and Stubby, which gRPC ultimately came out of; Apache Thrift came from the Facebook folks; Twitter created Finagle based on Thrift; and there are lots of other RPC systems out there in use, making back-end systems faster and also giving them very robust contractual APIs, maybe a little more so than you might have in the wild-west-y kind of REST world.

And then there are parts of systems that just need to be really decoupled, where we need asynchronous interactions and queuing, so that life spans can be different and performance requirements can be different. We need that impedance adjuster, and that's where you see things like Kafka or NATS: messaging and loosely coupled systems and all that. But those have APIs too, make no mistake.
You can't put a message in a thing, have somebody else pull it out without knowing what it is, and have that be super valuable. There's gotta be some sort of schema associated with those things, and all of these technologies are wired into this whole idea of a schema. Whether the schema has functions that you call or just messages doesn't really make a big difference.

The last thing I'd mention is that the cloud-native approach, "microservice-oriented, container-packaged, dynamically orchestrated" as it used to be known, has driven a lot of interest in APIs. People were building so many services that this becomes a real piece of the puzzle as well.

All right, with that said, I think we're ready to start jumping into some of the example technologies. I'm already atrociously behind my little schedule here, so I'm gonna try to speed up a little bit but still hit the important points.

So: HTTP, JSON, REST. Representational state transfer has some specific constraints associated with it; these are the six, right? Client-server: that's pretty simple. The client calls the server, not the other way around. So if the client calls the server and wants to set up a stream of data flowing back, that's not necessarily REST, but it is really useful, and that's exactly what gRPC brings to the table, along with some of these other technologies like server-sent events and WebSocket. When people need to stream data, let's say you're building a trading application: I wanna buy IBM, great. That's a client-server request, but now how do I give you status and market data and stream stuff back to you? REST doesn't work very well that way. There's long polling, chunking, and other weird stuff you can do, but it's not really built for that.

But REST is stateless, and I think this is one of the most monumental things delivered to the distributed computing world: hammering home the scalability that comes from not storing client state on the server. The client is quite capable of storing its own state. If you have a hundred clients, you've got a hundred little memories and CPUs for keeping track of their own auth tokens and their own "what page am I on" and all that. That's their business; it's not your server's business. If it starts becoming your server's business, your server has identity, and if your server has identity, it's not a microservice. It's not easy to scale, all sorts of weird problems happen, and you start creating cache arrays and weird stuff that slows everything down and makes it really complicated. So stateless services: that's something you can adopt anywhere and really get benefits from.

Cacheable, right? This is the worldwide web; there are caches all over the place. Say you're getting a product from a store: how often do you update your products? Let that thing live in the cache for two weeks or something, and then at checkout, if the price has changed, you can tell them. There's lots of stuff around the CAP theorem and thinking through how we can make things faster by relaxing some of our engineering tendencies to white-knuckle everything into perfection. Speed is a trait; it is a valuable thing for customers, and caches are really, really important. If you turned off all the caches on your laptop, you would not be able to stand using it. Caches are indispensable in everything that we do.
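To make that caching trait concrete, here's a minimal sketch of what a cacheable REST exchange looks like on the wire. The endpoint and values are hypothetical, but the headers are the standard HTTP ones:

```
GET /products/42 HTTP/1.1
Host: shop.example.com
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: public, max-age=1209600
ETag: "v7"

{"id": 42, "name": "widget", "price": 9.99}
```

That `Cache-Control: public, max-age=1209600` is the "let it live in a cache for two weeks" idea: every proxy and browser cache between your back end and the client can now answer this GET without touching your server, and the `ETag` lets clients revalidate cheaply once it expires.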
So caching: huge piece of the puzzle. Layered system: that's those headers we were talking about, talking to all the layers, whether you're talking to the cache or some sort of authentication component or what have you. Code on demand: this is, obviously, that you can pull in JavaScript, and it's not super relevant here. And then uniform interface, another really interesting design trait of some really RESTful APIs. This is where REST gets deep into geeking out on the actual API technology, the essence of it, hypertext as the engine of application state and all that, but the top constraints carry a lot of value, and even if you're just doing straight JSON over HTTP, those top things are super useful.

So what does REST bring to the table? Support for headers: yes. It's super polyglot. It doesn't come with an IDL; there's no implicit IDL, but there are a lot of IDLs out there. We'll look at OpenAPI in a second, so you can pick one if you want to. Support for change: absolutely, super flexible. Streaming: not really. Broad adoption: the broadest. Speed: over the internet it might be one of the best things you could pick. It might be faster than RPC in many cases, because you can't cache RPC that drills all the way to the back end every time. And finally, optionality, the ability to do queries: sorta, but not really; it's not super designed for it.

So how do we get a contract if we want one? For me, I'm not gonna use anything that can't describe a robust contract between me and my internal or external customers. Well, we have RAML, API Blueprint, and the OpenAPI Initiative, which is what most of the stuff in our space here with Kubernetes has adopted; it used to be Swagger. You can describe objects, as you can see over there on the right, and you can describe interfaces: RESTful-style interfaces, the different operations you can perform on routes, and so on. Very, very powerful and very expressive.

All right, so there is a lab here. This is the interface I'm guessing most everybody is already pretty familiar with, so we're not gonna stop for this one, but it is in there. It's step one in the lab, and it has you build a simple client and a simple server and see how they work together; I think I actually use curl too. Then it has you create an OpenAPI description of the API you just built, generate the client stubs, and create a client that uses them. A lot of the lab material is self-explanatory. You can do it on any system that has Docker installed, because the only command you actually type on your laptop is docker something; everything else is in a container, which you dispose of in the cleanup step at the end. So it's very easy to do these labs at a later point in time.

So, Apache Thrift. Let's talk about Apache Thrift. Full disclosure: I already told you I'm on the PMC of Apache Thrift, so I kinda like Apache Thrift; I'm a big fan. What I love about Apache Thrift is the IDL. The language is so expressive; I think it's the most expressive IDL. The reason I would steer folks toward gRPC would be if you need some sort of streaming. I mean, that's a killer feature, and Thrift doesn't have any real implementation of that that is anywhere near what gRPC has. However, the thing I do love about Thrift is that it has collections: list, map, set.
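To give a feel for it, here's a small hypothetical example of Thrift IDL. The service and types are made up, but the constructs (constants, structs, collections, exceptions, services) are the real ones:

```thrift
const i32 MAX_RESULTS = 100

exception NotFound {
  1: string message
}

struct Product {
  1: i32 id
  2: string name
  3: list<string> tags            // collections map straight onto native types
  4: map<string, double> prices   // e.g. currency code -> amount
  5: set<i32> relatedIds
}

service Inventory {
  Product fetch(1: i32 id) throws (1: NotFound err)
  list<Product> search(1: string query, 2: i32 limit = MAX_RESULTS)
}
```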
All the programming languages we're gonna work with can directly implement those things, so why not, right? Instead of repeating something and creating a lower-level implementation, we have a higher-level implementation; we can more directly represent certain types of data structures, and I really like that a lot. You have constants, collections, services, exceptions: all of that can be defined in Apache Thrift IDL.

And when you have RPC (this is just RPC in general, not Thrift specifically), if you have a monolith you're trying to decompose, it's really nice to do it with RPC rather than REST. REST is resource-oriented: you create resources and define operations on them. But if you've got packages and classes inside your monolith, they have functions that they call, right? It's a lot easier to say: take these five functions this thing has and expose them over the network. You can do it as simply as taking the code and dropping it into a pre-built server (all the RPC systems have RPC hosts you can just use), then you describe the interface in IDL and generate the stubs. You generate stubs on the server side for your server, and stubs on the client side for whatever programming languages the clients are using. And since they're polyglot, whether Apache Thrift or gRPC with protoc, you're gonna be able to generate stubs for pretty much anything you need; anything that's commercially viable, anyway. So that's another great use for RPC.

One last thing I would say about RPC is that it's fast. Is it faster over the internet? Maybe not. If you can get something out of the browser's cache instead of going all the way back to the back end, that's gonna be faster, and that's what REST brings you on the internet. But in a back-end system, where everything is just services talking to services over the wire, it's a lot faster.

Here's an example, with SOAP on the left: that's one million calls from a client to a server over localhost on a given laptop. These tests stabilize the technologies as much as possible and change one thing at a time. So this is SOAP: JAX-WS in Java, both the server and the client in Java. Yeah, I've gotta pick a language, right? They're all gonna be a little different, but this is Java, Tomcat 7, HTTP, XML with SOAP. You move to REST: it's still Tomcat 7 and HTTP, but it switches to JSON with JAX-RS, and it's quite a bit faster. One of the things about REST is that GET requests typically don't even have a body, so there's no serialization or deserialization on the call, only on the response, and that's pretty nice. When you move over to Apache Thrift, still using JSON, still Tomcat 7, still HTTP, it's half the time of SOAP to run a million requests, and quite a bit faster than REST.

And here's the real rub: as soon as you get rid of that application server and just run one of the RPC servers that comes with Thrift or gRPC or whatever your choice is, you're gonna be down in this zone. Orders of magnitude faster; not an order, orders of magnitude faster. And when you get down to the compact protocol: the compact protocol in Apache Thrift was basically designed to use the exact same integer compression, fast serialization technique that Protobuf uses.
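That technique, zigzag encoding plus variable-length integers, is easy to sketch. This little Go program is just an illustration of the idea, not code from either project:

```go
package main

import "fmt"

// zigzag maps signed ints to unsigned so small negative numbers
// stay small: 0->0, -1->1, 1->2, -2->3, 2->4, ...
func zigzag(n int64) uint64 {
	return uint64((n << 1) ^ (n >> 63))
}

// varint emits 7 bits per byte, setting the high bit on every
// byte except the last, so small values take fewer bytes.
func varint(v uint64) []byte {
	var out []byte
	for v >= 0x80 {
		out = append(out, byte(v)|0x80)
		v >>= 7
	}
	return append(out, byte(v))
}

func main() {
	// A 64-bit field holding 300 costs 2 bytes on the wire instead of 8.
	fmt.Printf("% x\n", varint(zigzag(300))) // prints: d8 04
}
```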
So Protobuf is about the fastest you're gonna get. gRPC is a little bit slower than Thrift over there because Thrift is running on raw TCP, and gRPC's always got HTTP/2 underneath it. But it's the same general neighborhood. So this is what we're looking at; it's a big difference. And think about network utilization, CPU, memory: latency adds up. When you have microservices, you're not calling one guy that does a bunch of stack-pointer adjustments and then responds to you. You're making network requests five or six times before you get the answer to the client. I think Netflix said their average request was handled by five to seven microservices. That's five to seven network calls. Network calls! You're talking about context switches and all this overhead just to get that one request answered, so that speed makes a big difference, multiplied five to seven times over. So performance is a big driver for people looking at RPC, and decomposing monoliths is another big driver; it's a lot closer technology match.

So, Apache Thrift. Headers? Yeah, more recently. It took a lot of pushing from the outside world, but the community finally got header support together. There's always a shortage of developers, so patches welcome. Cross-platform support: absolutely. IDL: I think it's the best. Support for change: absolutely. You can add methods to interfaces without breaking old clients, you can add parameters to functions without breaking old clients, and so on and so forth. Streaming: nope. Broad adoption: not bad. It was baked into a lot of things in the Apache ecosystem; if you're an Apache project, you're supposed to pick other Apache projects unless there's a really good reason not to. People often find a really good reason to pick whatever they want, but you get the idea. Speed: Thrift is maybe the fastest, because you can really get down and use TCP and nothing else. And optionality: not really, but you can kinda fudge it if you need to. All right, so then there's a lab on building an Apache Thrift client and server, where you get a chance to play around with the IDL for Apache Thrift.

And that brings us to WebSocket. I'm gonna cover this one pretty quick. I think a lot of people who have invested in WebSocket are gonna stick with it, but a lot of people who are deciding what to adopt now are probably gonna look at server-sent events or something like that, so we'll bring that up as a comparison shortly. With the WebSocket API, you basically use HTTP as a bootstrap mechanism. You get all the authentication, TLS, all the good stuff that comes with the header-based HTTP world, then you upgrade, taking that entire connection and converting it to WebSocket. The great thing is that you're sending frames back and forth on an event basis: the client and the server can each send frames to the other party whenever they feel like it. So it's super nice from that standpoint, with a small amount of overhead. There's a frame header, so it's a little more than raw TCP, but it gives you some great benefits. TLS, right? WSS gives you a secure WebSocket connection with HTTPS under the covers at the start, and so on and so forth. But at the end of the day, there are no headers anymore, because you're now just sending frames back and forth. It is sort of polyglot.
I mean, there's a decent number of languages, but it's pretty much a JavaScript-y sort of thing, because you're usually talking to a browser. There's no IDL; you have to decide what you're gonna put in those frames, so you could use something like JSON with JSON Schema, or Avro, or what have you. Evolution: again, it doesn't address that; you have to bring other technology. Reactive events and streaming: sure, it's great for that. Broad adoption: pretty well adopted in some spaces. Very fast. And it has no specific story for queries. So that's WebSocket. It really was a relief for old HTTP systems that needed some sort of streaming to the client. The next step in the lab was the WebSocket material, and we elided that purely for time; trying to fit all this into 90 minutes, much less 45, didn't work, so that one's removed.

Server-sent events is in the lab, though, because this is the choice a lot of people are making going forward. It's an easy way to send events to a client. The client subscribes to an event stream, and if you're using HTTP/2, you can have multiple event streams coming over the same TCP connection, because HTTP/2 can multiplex your regular REST GET/PUT/POST traffic along with all your event streams: multiple event streams, one TCP connection. Now, you do still have head-of-line blocking problems at the TCP level: if you've got seven packets stacked up and packet number one didn't get delivered for some reason, you can't process the others until packet number one gets resent to you. So the works can get gummed up at the TCP level, though they no longer get gummed up at the HTTP level, because HTTP/2 fixes that. And if you move to HTTP/3, which is getting closer, that solves the TCP-level problem as well. So this is a really great answer.

There are a few weird things. In the browser, the message payloads, the bodies of your events, are always strings. So if you wanted to stream video or audio or something, you'd go back to WebSocket; but for just about everything else, if you can get away with JSON, then this is probably your friend. Great technology, and the lab will walk you through setting up a really simple server and client example.

So what does SSE bring to the table? Support for headers: it's HTTP-based, so yes. Cross-platform support: pretty much, but again, it's really JavaScript-centric, so the languages that commonly work in that world are certainly well supported, with good documentation. No real IDL: if you look at OpenAPI, you'll see tickets that have been open for six or seven years, like "shouldn't we address server-sent events?" Yeah, we should, and then nobody does it. So you can figure something out yourself, but there's no direct support, at least that I've seen. Evolution: again, if you don't have an IDL, how can you evolve? You don't even have a contract in the first place. Or you could say: then I can totally evolve however I want, right? There's no contract. Reactive events: perfect for that. Broad adoption: pretty good and getting better. Speed: pretty good; you're still on HTTP, but pretty good. And optionality: no specific relationship there.

All right, so SSE versus WebSocket: I think we've called out the big issues here. SSE is better for almost all situations, I would say. It can be multiplexed over a single connection if you have HTTP/2, which is 40% of the traffic on the internet now, so we're getting there.
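To show how little machinery SSE needs, here's a minimal sketch of a server in Go. The endpoint and payload are made up, but the `text/event-stream` framing is the actual protocol:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func events(w http.ResponseWriter, r *http.Request) {
	// SSE is just a long-lived HTTP response of "data: ..." text frames.
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	for i := 0; ; i++ {
		// Payloads are always text, so JSON is the usual choice.
		fmt.Fprintf(w, "data: {\"tick\": %d}\n\n", i)
		flusher.Flush()
		select {
		case <-r.Context().Done(): // client closed the stream
			return
		case <-time.After(time.Second):
		}
	}
}

func main() {
	http.HandleFunc("/events", events)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

In the browser, subscribing is one line, `new EventSource("/events")`, plus an `onmessage` handler.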
And WebSocket has that one killer feature, though, which is that it can do binary. And it's a little bit lower overhead too, if you're really, really worried about that. All right, so that's lab four, building an event-driven API with server-sent events.

Let me see how we're doing here. Okay, yep, so I'm actually just gonna leave you with the labs and we'll press on, because we're getting close. If we have any time at the end, I'll let you work on lab stuff, and I'll be happy to help, but again, the labs are pretty well put together, pretty tight. I don't think you're gonna have a lot of questions; there are a lot of examples, everything's spelled out, and it's all Docker stuff, so it should be pretty straightforward.

Okay, so gRPC, our next guy. gRPC defines an IDL, but really it's the protocol buffers IDL, and protocol buffers gives you the ability to define messages. If you read the documentation, gRPC is an RPC system that can use any serialization system you like, as long as it's protocol buffers, in small print. Sure, you could maybe use something else. Nobody does, I've never seen it done, and I don't know why you would, so it's pretty much protocol buffers in gRPC. Now, what's interesting is that protocol buffers is still a Google thing. There's a lot of rumbling about, hey, shouldn't this be given to the CNCF? It's pretty important, and all that. But I think there are some copyright or licensing issues or something like that; that's at least what I hear. gRPC was donated to the CNCF a while back and obviously is used all over the place, but it does, at the moment, depend on protocol buffers.

Protocol buffers was all about creating these messages, because if you've got a message, you have something you can send to a server over any mechanism: over Kafka, or NATS, or as an RPC request. Here's my function call, and then you get your result back in another message. Under the covers, this is exactly what Apache Thrift does: it takes the parameters you list out, turns them into a struct, and sends them over; you can return a discrete element, but it just puts that in a struct and returns it. So you have the flexibility of returning multiple things and extending and evolving your APIs without breaking clients.

Notice the optional there, right? If something is required, you've got to return it no matter what, or your API's broken. But most things are listed as optional, specifically so you don't get stuck like that. And if you don't have a value anymore, maybe you sunset that feature, you just don't return it; everybody knows it was optional. You have to write code that respects the API contract.

So that's a look at the protocol buffers side of it, and the gRPC side is where we get support for RPC components. Over here, you can see a Go program. And the G: nobody will admit what the G stands for; you can guess, right? It was invented at Google, but it's not Go. It generally was used with all of the internal languages at Google, so they had great libraries for C++, Java, Python, and Go, and other languages too; if it's a commercially viable language, you can use it with gRPC. And you can see the gRPC server being set up here, where we specify the protocol buffer server contract that we're gonna implement.
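Roughly, it looks like this. This is a minimal hypothetical Greeter example, not the actual slide code, but it has the same shape; the generated-stub package path is made up:

```proto
syntax = "proto3";

message HelloRequest {
  string name = 1;
}

message HelloReply {
  // optional: safe to stop returning this someday without breaking clients
  optional string greeting = 1;
}

service Greeter {
  rpc SayHello(HelloRequest) returns (HelloReply);
}
```

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	pb "example.com/hello/gen" // wherever protoc put the generated stubs
)

// greeterServer implements the contract by attaching methods to a struct.
type greeterServer struct {
	pb.UnimplementedGreeterServer
}

func (s *greeterServer) SayHello(ctx context.Context, req *pb.HelloRequest) (*pb.HelloReply, error) {
	greeting := "Hello, " + req.GetName()
	return &pb.HelloReply{Greeting: &greeting}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	pb.RegisterGreeterServer(s, &greeterServer{}) // wire our implementation in
	log.Fatal(s.Serve(lis))
}
```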
We listen on some port, and then everything else gets dispatched to the functions that we attached to that server struct. This is Go's way of creating interfaces on objects; we don't have a classical class system in Go, and every programming language is different. That's one of the interesting things about these IDL implementations: if you're using Go, things are gonna be kind of Go-like; if you're using Python, things are gonna be kind of Python-like, so you'll be familiar with it when you see the same IDL used from another language, but there'll be some stuff that's different. JavaScript is pretty different, so the implementations there can be a little orthogonal to what you might be used to in a traditional classical language like C++ or Java. But in any case, that's the gRPC piece of it. gRPC is basically the RPC runtime that uses Protobuf.

And with gRPC, you can see this is a pretty knockout lineup. Support for headers: yeah. Polyglot: absolutely. IDL: strong. Evolution: great. Streaming: it has the ability to stream back to the client, which is used everywhere in Kubernetes. All of the dependency-inversion stuff breaks down when you can't stream data back, and this is what enables it. When you have 1,000 kubelets, you don't want the API server connecting out to 1,000 kubelets; you want each kubelet to make its own connection. If there are 10 of them, great, they make their 10 connections; if there are 100, they make their 100 connections. But then what happens when the API server needs to say, hey, you've got a new pod, and it needs to say it right now? It doesn't want to wait for your next polling cycle. You need streaming, and this is exactly what gRPC brings to the table and why it's so important in a lot of API architectures. And queries: with RPC you can always do queries, but it's not particularly good at it. So that is gRPC. Then we've got a little lab set up where you build a client and a server and test them out. You run the server and the client in Docker containers and they talk over the Docker network and all that; it walks you through it.

All right, so the last of the biggies is GraphQL. GraphQL is really interesting, and it's not shocking at all that it came out of Facebook. Who else has this entire pool of nodes that have relationships with each other, where when a message pops up here, you want to propagate it to first-order, second-order, and so on nodes? Well, there you have it: Facebook comes up with GraphQL. They want to make it possible for applications to get exactly the data they want. What was happening with a lot of our non-queryable APIs is that we were building these very robust contracts, and even though we made those contracts evolvable, they're still a little rigid: there are certain things you've gotta pass and certain things you get back. So how about this? You build an interface, and you have one client running in a browser on a laptop, and that thing's got 512 gigs of RAM or whatever, and probably a broadband connection; you can send that client everything. And then you switch to this guy, the phone. One interface does not rule them all, it turns out. Netflix in their early microservice journey, and lots of other companies, have had to say: you know what, we have to build a back end for the front end.
Whatever it is, these clients need a different level of granularity than the ones running on big, powerful laptops with good connections. So we have to build different APIs, or we have to build one API that's flexible enough to let clients specify how much they're gonna get returned. And there's also just the screen real estate: on the phone, you might only want two columns, but on the desktop, you might get all ten. It's a very, very complex problem to try to satisfy all these different environments. And when you build a contract up front, what happens when some really inventive person at your company comes up with a brand-new application nobody thought of before, but they're stymied because the API contracts are so intractable that they can't solve their problem easily?

Well, all of a sudden you have something like GraphQL, where you can say: look, I've got a schema. There are a bunch of queries you can make, they're parameterized, and you can literally pick the stuff you want, and in one round trip you can tell me all of it. I'll go make the 40 back-end calls: to the CQRS pool of data over here that has five microservices' worth of data, to that service I call directly, to some other data store I have to hit myself. I'll pull all that data back together and give it to you. That way, the slow-bandwidth connection makes one round trip, and the fast ones can make many. This is really the magic of GraphQL: it gives the client the ability to get exactly the data it wants. And when you extend the schema, it doesn't impact anybody else; it's a whole other level of evolution, and it gives people the ability to articulate, at a very fine-grained scale, the things they want to receive back.

And is it good for querying graph databases? Sure is, as it turns out, but it's also good for querying just about anything. If you've got a big pool of microservices, you've probably just exploded your state all over the place, and it's gonna be a lot harder to collect it back together; this is a great way to do it. So it gives you a lot of useful options. That said, you don't see a lot of solutions where the application programming is just GraphQL. It's possible, but you'd probably be doing some tweaking, shoehorning things into GraphQL that would be easier with gRPC or Thrift or something else. It's the kind of thing where if you have that one oddball, maybe you stick it in there; but if you have a whole bunch of stuff, maybe these things use one API scheme and those things are GraphQL.

So what does it do well? It can run over HTTP, and usually does, so it has header support. It's polyglot. It has an IDL, but what's the IDL? The schema, right? It's all about: this is the schema, you can query this schema, and you can get back all sorts of stuff. It's a very different kind of contract, a schema-based contract, and it's a lot more flexible that way. But there is an interface definition language: the GraphQL schema is how you define the contract. It's evolvable: you can add stuff to the schema and it doesn't break the old clients. And it's got reactive-events kind of support; you can stream. So if somebody says, I have a query, and you give them all the stuff, then when new stuff comes in, you can stream it to them if you want to, which is sort of nice.
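Here's a hypothetical sketch of what that schema-as-contract looks like, and how a narrow client whittles down what comes back:

```graphql
# The contract: types, relationships, and parameterized queries.
type User {
  id: ID!
  name: String!
  email: String
  friends(first: Int): [User!]!
}

type Query {
  user(id: ID!): User
}
```

```graphql
# A phone client asks for exactly what it can display, in one round trip:
query {
  user(id: "42") {
    name
    friends(first: 2) {
      name
    }
  }
}
```

A desktop client hitting the same schema could ask for every field, several levels deep, and neither client forces a change on the other.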
It has pretty broad adoption, and it's decently fast, although there are some interesting wrinkles. They try to provide you with an engine that does a lot of the work, and you just build the fetchers (the resolvers) that go out and get the data, so sometimes that can be inefficient, and you might need to overhaul some of it and do things a little differently to get it fast. But the big news is that the expensive part is now one round trip, and instead of returning whatever your contractual gRPC or REST API is always gonna send, you can whittle that down, sometimes to 10% of the size it used to be, and that right there is the win. So speed: it depends. And optionality, queries: this is the one that's king there. You have so much flexibility in getting just the things you want.

The last thing I would bring up, if I have time (how am I doing? Okay, two minutes, I can do it), is messaging as an API. An application programming interface is just that: I build my application and I program against these other systems. I give them stuff, they give me stuff, and there's a contract there that makes it reliable and repeatable and scalable and all that. Take Apache Kafka; I use it because it's super popular and everybody uses it. It's a way for messages to flow throughout an organization, and it's another great way to let somebody you never met, who got hired five years after you, who thought of something amazing you never would have imagined, just go create it, because they have the data. If all that data is flowing through your topics, they can invent things nobody else ever thought of and make them real without a lot of roadblocks.

But the data used to be: oh, I've gotta go talk to this guy to figure out what those two bytes in the middle of that thing are. Now we have the schema registry, a fairly recent invention in the Kafka space, and you can use protocol buffers, or you can use Avro. So this is Apache Avro. You specify the schema, an Avro schema, yet another one, right? Protocol buffers, Thrift, Avro: they all do kind of the same thing, but they're all a little bit different. This is an Avro schema, and it says: hey, this is a user object. It's a record, like a struct, and these are the fields it's got and the types they have, and so on. You could write some code to serialize and deserialize this stuff pretty easily.

Well, Avro's superpower is that you don't need to know the schema in advance to deserialize something. You can literally get the schema out of the schema registry for the topic you're reading from, then hand it to the Avro library and say: here's the schema for this thing, decode it. Normally, with protocol buffers or Apache Thrift, you compile the serializers in advance from the IDL. That does make things a lot faster, for sure, and you can compile Avro too if you want, but Avro can also dynamically discover these schemas. So you have the ultimate evolution capability: you can push new schemas into the schema registry, and when new types of objects start flowing through that topic, you can still decode them, and that's pretty powerful.
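Here's a little sketch of that dynamic decode, using one of the Avro codec libraries available for Go. The schema and record are hypothetical, and in real life the schema JSON would come from the registry rather than a string literal:

```go
package main

import (
	"fmt"
	"log"

	"github.com/linkedin/goavro/v2"
)

func main() {
	// Pretend this arrived from the schema registry at runtime;
	// the consumer never saw it at compile time.
	schema := `{
	  "type": "record", "name": "User",
	  "fields": [
	    {"name": "name", "type": "string"},
	    {"name": "favorite_number", "type": ["int", "null"]}
	  ]
	}`
	codec, err := goavro.NewCodec(schema)
	if err != nil {
		log.Fatal(err)
	}

	// Round-trip one record to stand in for bytes pulled off a topic.
	bin, err := codec.BinaryFromNative(nil, map[string]interface{}{
		"name":            "Ada",
		"favorite_number": map[string]interface{}{"int": 42}, // union value
	})
	if err != nil {
		log.Fatal(err)
	}
	decoded, _, err := codec.NativeFromBinary(bin)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(decoded) // map[favorite_number:map[int:42] name:Ada]
}
```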
So that's the idea: the combination of an interface contract, Avro, wired to the topics through the schema registry, with Kafka as the distribution mechanism. And that's about all I can cram into 45 minutes. Points for these guys: you still have headers. Kafka supports headers now; it didn't originally, but everybody said, you can't exist without headers, we need headers, so they added them. Polyglot: of course. IDL: now, with Avro, yes. Evolution: for sure, with Avro. Streaming: that's what it is. Support: huge. Speed: it's very fast and linearly scalable. And okay, it doesn't do queries. So it's another piece of the puzzle. And if you go back to that curiously recurring communications pattern and think about all those different event horizons (there's the internet, there's the stuff you want to do interactively so you can respond immediately, and there's the things you want to decouple and do asynchronously), there's technology here for all of those phases of existence, and Lab 7 covers some of that stuff.

So yeah, that's a quick look, a super quick look, at some of the most popular API technologies, how they compare and contrast and what they do, plus a take-home lab that you can have some fun with. I think we're out of time, but I'm okay taking questions if there are any; you're welcome to holler, and I'm happy to do a sidebar too.

[Audience question] When would you use WebRTC? Yeah, I don't have a good answer. The web is moving really fast these days, and I don't know that there's an heir apparent, and there's a huge amount of inertia around the technologies already in place. So I don't know, do you guys have any thoughts? What are you looking at? Yeah, okay. I don't have a strong opinion on any next-generation heir apparent. There's a lot of neat stuff happening, though, that's for sure. You've got to keep your ear to the ground.

All right. Well, thanks, everybody. See you at the aquarium.