Hi everybody and welcome back to Open GovCon. So I'm excited to have Zack join us from Keyhole Software, and he's going to walk us through how to connect to an API in near real time with a lot of interesting security considerations. So I'll hand it over to you, Zack. Yeah, absolutely. Thank you very much, Kyle. You know, thank you again for setting up the whole Open GovCon. If not for you, we likely would not be here. Among the other people I want to thank, my favorite department in the entire US government is our Department of Defense. We owe a lot to these people. And an honorable shout out to the one thing throughout this conference that has truly gotten us to where we are today, the 20-gallon coffee machine. I personally will have emptied, I think, probably one of those by the end of this talk. I'm trying to get my registration fee's worth just out of that. Yeah, yeah, you know, it's good. I think I did 40 ounces of coffee the first day here. So my wife wasn't too happy about that, but you know, what happens in Vancouver stays in Vancouver, as they say. I work for, as you can tell, Keyhole Software. I get to work with a lot of really interesting clients. I think I've done two projects with the Centers for Medicare and Medicaid, two projects with the New York State Department of Health. Show of hands, anyone here done EDI files before? Good, good. It's like SOAP, but slightly worse. So I get to do a lot of different things, and it's sometimes hard to talk through in a format like this what exactly we do on a day-to-day basis. So I'm gonna walk us through just some questions to get your mind moving to start to understand kind of the context that we've been around. I'm a Chief Architect, which means, you know, whenever something goes wrong, it's obviously my fault. I get to do healthcare, I get to do energy. We even have a project where we are delivering the Bible in an audio format to remote tribes in the Amazon rainforest. I mean, it just runs the gamut of every different thing.
So this particular presentation came out of the project that I did for the New York State Department of Health. And to get started, it's easiest, I think, to first look at who our end users are and kind of work through what their flow is. So if you are in the New York State area and you are a patient of the hospital that I work for, you have the opportunity to call in and talk to a nurse and describe your symptoms in real time. Now these call center agents, the nurses that are actually picking up the phone call, they have an incredibly tough job. Within the first couple of seconds, they have to determine: is this person experiencing suicidal ideations? Do I need to call for an emergency ambulance to go pick this person up? Is this a life-threatening situation, or is this maybe a parent with their first child, and the kid has the sniffles and obviously the world is coming to an end, but it's all actually fine and good? There's a lot of stuff they have to do. And when they receive that phone call, it's on a desktop application, Avaya Agent for Desktop in our case. Our application that we've been building for them year after year after year is a web-based application. So how do you get the information on that desktop application over to your web-based application? Well, if I were doing this in 2010, maybe 2011, I would have used a Java applet or I would have done Flash. But you know, thank you very much, Steve Jobs, may he rest in peace. Those aren't options anymore. So we had to get a little creative about it. But before I go into the technical, there were two non-technical and then two technical problems with what we were experiencing in our call center. The first was the fact that it's really, really hard for them to onboard new nurses. There are so many different workflows they have to go through depending upon which program the patient is calling in for and what their symptoms are. There's different checkboxes. There's just different dropdowns.
There's different questions that they need to ask, which made the documentation that they were doing really inconsistent. They actually have to have a full QA analyst sit there and look at all these phone calls and make sure their documentation is in place. So for them, data consistency wasn't just an important thing. It's one of the most important things. And from a technical standpoint, obviously the desktop and the web application can't talk to each other directly, which is really hard to explain to a nurse. I mean, they're on the same laptop. Why should there be any divide between the two? But talking to a technical audience, you all probably know why. The other part of it, the technical consideration that we had to figure out how to get around: there's an API that we can call. Given a phone number, it will return to us who this person probably is. Now, if it's a cell phone number that called in, usually cell phones are linked to one person, unless it's a family line. But if the incoming call is from a nursing home or a skilled nursing facility, well, all the residents of that skilled nursing facility might have that listed as their phone number. So, beyond just the fact that it's an API call, this API was extremely inconsistent in terms of how quickly it returned data. Sometimes it returned data in maybe two or three seconds. Sometimes, if the wind was blowing from the northwest and Saturn was in retrograde, it would take a minute. It would always give you back the information, but it wasn't something that we could rely upon in our critical path. So, how did we do it? And as you can tell, I'm a big fan of White Lotus Season One, which is why all of these slides have sort of a Hawaiian theme to them. How did we do it? The first step was, thankfully, Avaya Agent for Desktop had in its documentation two or three references to what they called a screen pop.
And really, all that meant was that as soon as a phone call was accepted or ended by an agent, it could make a call out to whatever API we wanted. Now, this was something that we had to have configured on the actual agent's machine. So there's a little bit of operational challenge there, but still nothing insurmountable. So we have a call come in, and an API call goes out. But how do you route that data to that particular call center agent's web browser, or maybe multiple tabs that they have open for our application? One way that I could have done it a long time ago, and I probably have done it at some point in my career, is when that API call gets accepted, it puts a record into a table, and then you have your web application periodically polling to say, hey, is there any new information for this particular agent? There are some drawbacks to that, because that is nowhere near real time. You could try to do that every five, 10, 20 seconds, but most of the time you're not gonna get any relevant data, and you've just wasted one of the six TCP connections per host that the browser gives you. Long polling, JSONP-style, is an older technique. It allows you to have a constant stream of information going between the server and the browser, but it's very heavy, and the connection essentially just has to sit there and wait for something relevant, and it's very use case by use case. If I had another use case like this where I needed to send information to the client, it's very hard to share that, and you have to work on flushing, and it's not the greatest. So we looked at our options, and SignalR is where we landed, which is a wrapper around WebSockets. In a nutshell, a WebSocket allows you to pipe information between two endpoints. Think of it kind of like a peer-to-peer network where, at any given time, one side can send information to the other and vice versa. That allowed us to pipe the information that our API was getting to our call center agent's laptop. Now, I know what you're saying.
SignalR, this is a Microsoft-produced library. Isn't that crazy? I mean, it is 2023 after all. It is possible for Microsoft to have some sort of open source. If you had gone in a time machine and told me 20 years ago that Microsoft would be producing open source libraries, I would tell you to find a better use for your time machine than going and telling me about that. Go back in time and solve world hunger or do something else better with your time machine. But this is real. And this is not a talk about SBOMs, although it seems like that's come up in pretty much every single discussion that we've had so far. But when you come up with a software solution, you're always gonna have to rely on someone else's software to do it. There are a lot of vendors out there. Some kid in his basement, maybe he graduates from high school, maybe he decides to stop supporting that library. At least with a Microsoft package, you're pretty much guaranteed that they're gonna continue to work on it until the sun burns out all of its fuel. If nothing else, Microsoft should be known for that. There are other options, I will say. I mean, I'm not being paid by Microsoft to recommend their tools. If they wanna send me some Bitcoin, I'm happy to accept that, but this is very egalitarian advice. This is something that I myself have experienced, and this is what I decided on. One of the factors was the fact that this is something that is actively used, in terms of how many millions of downloads over the past two years. So this is something that has been battle tested. It's something that other people have gone through. They've figured out the warts. I never want my clients to be the ones that get to figure out all the bad and all the pain points in any particular given piece of software. So to recap, SignalR sits in the middle.
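To make that "SignalR sits in the middle" recap concrete, here is a toy, in-memory sketch of the routing shape: a hub keeps track of which agent is connected where, and pushes an event to exactly one of them. This is not the SignalR API; `ToyHub` and everything in it are invented for illustration.

```python
class ToyHub:
    """Toy in-memory router: NOT the SignalR API, just the shape of it."""

    def __init__(self):
        self._connections = {}  # agent_id -> callback standing in for a socket

    def register(self, agent_id, callback):
        """An agent's browser 'connects' and can now receive pushes."""
        self._connections[agent_id] = callback

    def send_to_user(self, agent_id, message):
        """Push a message to one specific agent, if connected."""
        callback = self._connections.get(agent_id)
        if callback is None:
            return False  # not connected; caller can queue or drop
        callback(message)
        return True
```

In the real system, the role of `ToyHub` is played by the SignalR hub, and the callback is an actual WebSocket connection down to the agent's browser tab.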
It receives the API call from the Avaya Agent for Desktop, and it knows how to route it to that particular end user. It gives them the information about what phone number called in. It can even give them information about what campaign they called into, which would then allow us to bring up different workflows depending upon which one they fall into, pre-select some fields, and make the agent's life just a little bit better than it would be otherwise. There is still the matter of pre-populating the patient demographics. Like I mentioned before, there is an API within our health system that would accept a phone number and then return us all the demographics about the patient, which is really important. You might wanna know things like next of kin. You might wanna know emergency contacts, especially if the patient is non-responsive and someone used that phone to dial in. It might not even be the patient speaking. It might be someone on the other end of the line trying to figure out who they should even call, because they see this as a known phone number. But the problem with the API that we were using was that its performance was just so unreliable. I didn't wanna put it in the critical path of my users. I don't want them to experience that pain, but we still had to use it. So one of the techniques that we use in healthcare, which I hope gets proliferated to a lot of other fields, is we separate out all of our external integrations into a completely separate deployable. You can think of it like a microservice. The pattern is called an integration engine, and there are tons of different integration engines out there. You can roll your own if you want to. I mean, it's a very cathartic experience. Or you can use something that's already out there that supports all these interoperability standards.
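As a rough sketch of that integration-engine split: the core application only drops an event on a queue, and a separate worker (in production, a separate deployable) owns the call to the flaky third party. All names here are invented for illustration, and a stdlib queue stands in for whatever transport a real engine would use.

```python
import queue
import threading

# The core app and the integration worker only share these queues;
# in a real deployment they would be separate processes talking over
# a broker or HTTP, but the shape of the decoupling is the same.
events = queue.Queue()
results = queue.Queue()

def slow_demographics_api(phone):
    # Stand-in for the unreliable external demographics API.
    return {"phone": phone, "name": "Jane Doe"}

def integration_worker():
    while True:
        phone = events.get()
        if phone is None:  # shutdown sentinel
            break
        try:
            results.put(slow_demographics_api(phone))
        except Exception as exc:
            # A failing third party produces an error event,
            # not a hung thread inside the core application.
            results.put({"phone": phone, "error": str(exc)})

threading.Thread(target=integration_worker, daemon=True).start()

# Core application path: enqueue the phone number and move on.
events.put("555-0100")
```

The core path never waits on the third party; whenever a result shows up on `results`, it can be broadcast back to the agent through the hub.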
No matter what you choose, just the fact that you have bifurcated your code, keeping your core application separate from your integrations, allows you to do some pretty powerful things. One of which is when those APIs go down, your core application doesn't suffer from a performance standpoint. Now of course, anything that tried to call those APIs will also be down, but you're not doing nefarious things like holding up threads, especially if you're not using async/await. You're not just leaving a thread out there doing essentially no meaningful work. But you're also setting yourself up for success, because those APIs that you call, they will change. The specifications for those will change, and they will do it without warning. I mean, I see people nodding their heads. You know what it's like when you're trying to use third-party APIs, even within the same company; they don't have the same priorities that you do. They might not even give you a friendly heads up when they're planning a change. Are they able to just change it in production? Your logging will start going off. The alarms will start singing, and you have to figure out what's going on. There has been more than one time that I can think of where I've been glad that I had a separate integration engine, because I could keep the core of my application online. I didn't even have to send any downtime notices. I could upgrade my integration engine as these things were changing, and then all of a sudden, things just start working. So you're treating all of your external API calls almost like progressive enhancement, where it's not something that is core to your application. It's just sort of a nice-to-have, which in this situation paid dividends, because maybe 95% of the time it was within one to five seconds, but all the other times, we at least didn't set ourselves up for failure. So what would happen is that the phone number would get broadcast out saying, hey, this patient has called in.
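One way to keep that unreliable API out of the critical path is to put a hard timeout around the lookup and degrade gracefully when it fires; the agent simply types the demographics in by hand, exactly as they would have before. A minimal stdlib sketch, with `lookup_demographics` as a hypothetical stand-in for the real call:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

executor = ThreadPoolExecutor(max_workers=4)

def lookup_demographics(phone, slow_seconds):
    time.sleep(slow_seconds)  # simulate the API's wildly variable latency
    return {"phone": phone, "name": "Jane Doe"}

def lookup_with_timeout(phone, slow_seconds, timeout):
    """Return demographics if the API answers in time, else None."""
    future = executor.submit(lookup_demographics, phone, slow_seconds)
    try:
        return future.result(timeout=timeout)
    except TimeoutError:
        return None  # degrade gracefully; a late result can still be pushed
```

Because the future keeps running after the timeout, a late answer can still be broadcast to the agent as a bonus rather than blocking them up front.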
We could have the integration engine ingest that event, make the API call out to that third party, get the results, and broadcast it back to that agent. And sometimes it would be within a few seconds, and they would be able to automatically see: oh, this is the patient, they've called in before, I know who this is, I can pre-populate the demographics. As you can imagine, with human names, especially when you're trying to transcribe them over the phone, it's easy to mistake a character or two. Or in my case, my name is Zack; phonetically, there are like 15 different ways of spelling it, and how are you gonna guarantee that the person on the other end is spelling it correctly, or that it has been put into the hospital records correctly? This probably doesn't come as any surprise to anyone in the audience, but healthcare data is identified by your name, date of birth, and gender, and then they also try to do matching based on phone number, on email address, on postal address. All that stuff can skew one way or another, and so it's never a deterministic match, it's always a probabilistic match. So we can show the user, probability-wise, who this person calling in is, but the only way we could really reliably do that is by piping that phone number through SignalR, giving it to our application, and helping these call center users not have to sit there and type in data by hand. These are people that are highly trained. We need to have better empathy for our users. They didn't go to school to use web applications. I went to school to learn how to develop them, but they went to school to learn how to help other human beings. So the more that we can do to help them in their quest of providing better patient outcomes, honestly, the better it is for everyone. So now, all of that said, click. Ah, there we go. Ah, man, it's messed up. Nope. Oh, I turned on transcriptions. That's what it was. Subtitles. I don't know how to turn this off. Yeah, whatever. It is what it is.
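The probabilistic matching described above can be illustrated with a toy scoring function. The fields, weights, and exact-string comparison here are all invented for illustration; real master-patient-index matching uses far more sophisticated fuzzy and phonetic comparisons.

```python
# Invented weights: date of birth is the strongest signal, then name.
WEIGHTS = {"name": 0.3, "dob": 0.4, "phone": 0.2, "zip": 0.1}

def match_score(caller, record):
    """Sum the weights of the fields that agree (case-insensitive)."""
    score = 0.0
    for field, weight in WEIGHTS.items():
        a, b = caller.get(field), record.get(field)
        if a and b and a.strip().lower() == b.strip().lower():
            score += weight
    return score

def rank_candidates(caller, records):
    """Best match first; the agent confirms, the system never assumes."""
    return sorted(records, key=lambda r: match_score(caller, r), reverse=True)
```

So a record for "Zack" with a matching date of birth and phone number outscores one for "Zach" with only the date of birth, and the agent sees a ranked list of candidates rather than one asserted identity.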
It wouldn't be the first time that, you know, I've messed something up. This is why I don't do live coding during presentations, by the way, because I always end up screwing something up. Oh, I think I see it. Baller. So there were definitely pros and cons to how we did this. Externalizing our integrations allowed us to evolve that at a separate pace from the rest of our application, which has paid dividends more than once. It does, full disclosure, increase the complexity of developing something like this. Because if you think about it, something that used to be one atomic solution, your application and your third-party calls, you've split off into two different deployable units. In addition, you also have a SignalR hub that you need to keep up and running. So developing it did get a little bit more complicated, in terms of technology as well as mentally. When you're thinking about an API call coming in, it goes to your controller, it goes to your service layer, it goes to your data access layer. You can keep that in your head at one time. Like, that's no biggie. It's all in the same thread, it's all on the same server. It's very easy to understand. But now you have split out your processes, potentially running on multiple different machines, going from Avaya Agent for Desktop to a SignalR hub, and it's more complicated to think through. So full disclosure, it's not the easiest thing in the world, but it was a necessary level of complexity that we needed to add to fulfill the Department of Health requirements for how to target this particular patient population. And in SignalR, there are two different ways to target messages. One of them, you target a specific user, which is good if you know who the user is. In our case, we had to do a crosswalk between their Avaya Agent for Desktop agent ID and their domain ID, which is what they use to sign into our application. You can also send messages to a group.
So maybe the group, if everyone here were to join some web application, we would call it group room 119, and a message would be broadcast to all of you. Then it's up to the recipient to know: do I care about this message? Now the fun part is, if Kyle, for instance, wanted to broadcast a message to a group that he is also in, it's like an echo. He would get that message back. So you have to take into account the fact that users can receive their own messages, and that messages can come in at any time and can be about absolutely anything. When we're doing traditional web applications, when we're doing Ajax calls, you at least have a fighting chance of knowing, when I make that Ajax call and I get the response, as long as it's within a reasonable amount of time, where the user is in the application. Like, it's not a surprise to me. But with SignalR, you can get a message about anything at any given time, and you have to structure and architect your application to be able to handle that. Now, that is a story probably best left for a happy hour, because there are a lot of things that you need to think through. There are a lot of complexities to it. But all that being said, all those caveats in place, using SignalR to broadcast messages from our server to our users, being able to do these lookups in the back end, and having an integration engine that separated the core of our application from the calling of those APIs, that was really the only way that we were able to do it feasibly. There are probably some other options out there. You all probably have different ways that you would solve it, but I get the privilege of learning from all the different things that you all are doing. I get the privilege of learning from the things a lot of our clients are doing, and no matter who I talk to, at least some of what I said in this presentation has rung true for at least some of our clients.
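As a toy illustration of that group broadcast and the echo gotcha (this is only the shape of the idea, not the SignalR group API; all names are invented):

```python
class ToyGroupHub:
    """Minimal group broadcast: every member's inbox gets every message."""

    def __init__(self):
        self._groups = {}  # group name -> {client_id: inbox list}

    def join(self, group, client_id, inbox):
        self._groups.setdefault(group, {})[client_id] = inbox

    def broadcast(self, group, sender_id, message):
        # Note: the sender is NOT excluded, so it echoes to itself.
        for inbox in self._groups.get(group, {}).values():
            inbox.append({"from": sender_id, "body": message})

def relevant(client_id, envelope):
    """Client-side filter: drop our own echoes."""
    return envelope["from"] != client_id
```

A client in "room 119" that broadcasts a message will find that same message in its own inbox, so receiving code has to run every incoming envelope through something like `relevant` before acting on it.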
So I'm excited to learn from you all in this question and answer section how you would have done it, or any other caveats, or any other odd stories that you want me to tell; I am perfectly capable of telling them. And if you want to, I've got a branded QR code up there, as you can tell, from Keyhole Software. If you wanna send me any feedback that you're not comfortable sharing because I'm tall and I'm kind of scary looking, totally fine, I get it. I can be very intimidating at times. But again, thank you to Kyle, thank you to the Linux Foundation. It's a beautiful city. I personally have enjoyed walking around. It's fun seeing the cruise ships outside, seeing the workers outside. For those of you that are watching online, it's an amazing opportunity to be able to talk to real people that can pass the Turing test, because they're actual corporeal beings. And it's just nice to be able to get out in public again and talk to real people. So with that, I will take a drink of water and open it up for questions. Cool, so I'm from the DoD. I'm CTO at Platform One. My name is Camden. And a lot of what you said did ring true for me. We have a lot of legacy systems that we're probably never gonna turn off. And they may or may not present APIs, and for some of them, even a minute might be too fast. So I guess, do you think this would scale out to connecting to a bunch of different, or a handful of different, legacy systems and then presenting a modern UI to the users, to kind of hide from them the fact that this data is actually locked up in COBOL somewhere? Yeah, and if you can find any COBOL developers, or if you are a COBOL developer, like, make it rain while you can, because eventually they're all gonna go away. I don't know when. That's actually probably one of the more common asks that we get from our clients. Digital transformation, which I'm surprised there weren't more talks about around here.
Single pane of glass is the term that we always hear thrown around. And there are a lot of different ways to implement it. I think that a WebSocket is probably one of the best ways to do it, especially when you have web-based clients that need that information. If nothing else, just using the integration engine pattern alone is more than enough. Most integration engines, though, are headless. They would exist either as a Windows service or just somewhere running in the background. So you still need some way of getting that information to the client. SignalR by itself doesn't have an authentication mechanism turned on, so you have to enable it. There are a couple of different hoops to jump through depending upon which authentication scheme you want, but we were able to configure it to accept the same kind of auth, using managed identity, as our users do when they're going through their OAuth flow. If you wanna scan the code, it'll get me your email address. There's an e-book that we actually published about how we set all this stuff up. Or you can take the keys. I don't know if you all have seen the keys that have been attached to my business cards around, but spoiler alert, it's me. So hopefully you all have been just a little bit curious and you picked one up because it seemed like fun. There are handwritten notes on all of them, and my wife told me which ones I could and couldn't bring. So some of them were a little crass, and that's okay. No, good question. So, you came to the open source summit for a particular reason that showcased the need and the challenges you're facing. What's your ask to the open source community? I guess one would be: it's okay. We can all kinda come out of the shadows and say it is okay to rely upon Microsoft for open source. The next is, as people that use open source, think about some of the different architectural patterns that you can put in place, the integration engine being one of those.
The other part of it is, this is Open GovCon, and my biz dev guys would kill me if I was remiss in not mentioning: we just got our GSA contract vehicle approved, like, what's going on? We can be a prime vendor now. We've done work with CMS, we've done work with the New York State Department of Health. The USDA is another one of ours; they're big fanboys of Keyhole Software, fanboys and girls. I should also say, I have two daughters, so I try to use inclusive language. So the other part that I honestly get out of these kinds of summits is just hearing what other people are doing, figuring out whether there are different architectural patterns that I need to be implementing, what SBOM elevator pitches I need to make to people (that's sort of been the big one, this topic), and then just hearing about what else the community is up to. So those are my 17-odd things. Hey, I'm just kind of curious if you considered any other open source solutions besides SignalR when you were doing research and setting up your WebSocket connections. Yeah, one of the things that I actually recommend everyone do is go to ChatGPT, ask it for four or five recommendations for how to solve a particular problem, then feed those recommendations into something like NuGet Trends or npm trends and look at how many downloads a lot of those have. Because if you see something that's an absolute flat line, it's probably not gonna be something that you wanna use. There were two or three other contenders, from what I remember, that we could have used. WebSockets alone is only the transport technology. You still need some sort of hub to be able to understand how to route a message to a user or a given set of users. So no matter what, we would have had to have some sort of server, some sort of hub, to be able to do that. The hospital I'm at is on Azure.
We were starting to grow into GCP a little bit, but the managed service offering, Azure SignalR Service, allowed us to scale out potentially to 10,000 users. I mean, we don't have that many call center agents, so I don't think that would have ever happened. But it allowed us to have a managed service so that our total cost of ownership was decreased, because let me be honest with you all, I like taking vacations every once in a while. I don't quite consider this a vacation, but it's the closest that I get as a chief architect: not having to babysit a service that I myself stood up, and being able to just kind of throw money at Microsoft for them to take care of the problem. That was a pretty compelling argument to me. So yeah, I mean, there are plenty of other options, but with some of them you gotta get your hands dirty. And this was nice: just get a bunch of code out there, and it works, and I can have it in production and go on vacation every once in a while and not have to worry about it. So that's the other benefit of going with a vendor like Microsoft. For all the things they don't do well, they do two things well: they keep managed services up, and then they'll rename things after you've started using them for a few years. They're really good at that. So, any more questions or anything? Find me out in the hallway, but thank you all very much for attending. So thanks again, Kyle. Thank you.