Hi, I'm Chris Madison with Fermyon. Today, I'm going to talk about building a Slack bot to connect Slack with ChatGPT. If you'd like to follow along with our example live today, you can access a GitHub repo with the app and all the steps right here. The first question you might have, though, is why? Slack just announced native integration with ChatGPT. There's a waitlist, but you'll get it eventually. Well, today's webinar isn't so much about this being the only way to connect these two technologies, but about how Spin provides an ideal architecture to develop and run these and other types of applications. Spin allows you to build applications in the language of your choice, eliminate boilerplate, and combine the performance of always-available architectures, such as containers or virtual machines, with the density and cost savings of serverless functions. How do we do that? By leveraging WebAssembly in the back end. You can think of Spin as being powered by Wasm. WebAssembly provides a highly performant, natively secure environment which supports most popular languages and can be compiled into bytecode that can be run anywhere: across x86, Arm, Windows, Linux, Mac, etc. In collaboration with the Bytecode Alliance, we've helped to develop the WebAssembly System Interface, or WASI, which extends WebAssembly from its initial design of client-side usage to support back-end systems. On top of that, we've developed Spin and the Spin SDKs to manage the WebAssembly aspects for you, handle the complexities around things like HTTP and SSL termination, and let you focus on just writing the code that controls what your function does. So what does this look like in our example today? Well, first, we need to create a Slack app. There are multiple ways apps can interface with Slack, and the one we're going to use today is slash commands. These let you send a command to an app, have it process the command, and return a result in any channel or direct message.
With our app, we need to define our command, in this instance /chatgpt, and the endpoint we'll be calling out to: our Spin app. We also need to authenticate that the request is coming from the appropriate Slack app. Slack provides multiple ways to do this, but today we'll use the simplest: a Slack token. When a user types /chatgpt and then some question in a channel, the app will forward the request to the specified endpoint. This is where our Spin app comes into play. Spin handles HTTP and SSL termination for you, handing your function a request, which includes the entire HTTP request. We then need to parse that to do a few things: first, authenticate the Slack token, and second, extract the query. Once we've done that, we build and forward the request to the OpenAI API. This requires another token to authenticate with OpenAI. Now, obviously, we don't want to store these tokens in our code, so where do we store them? Spin provides a couple of options, including passing them in as environment variables or configuring them in your spin.toml. In this instance, we're going to use the built-in key-value storage. This provides a simple way for us to separate the secrets from the code. Once GPT has processed our query, it returns a response, from which we simply extract the answer and enter it into the body of the return for our function. We send that back to Slack, and Slack will then enter the response into the channel. This may all seem simple enough, but where do we host this Spin app, and how much overhead does that add? We have several choices here as well. Locally, I can run spin up to launch my application. Typically, this is reserved for testing, and in this instance, we really don't want to have our laptop open to the world running this forever. We can also run Spin apps on Kubernetes via a shim. This shim is now included as a preview feature in Azure AKS and in a preview version of Docker Desktop.
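To make that flow concrete, here's a rough TypeScript sketch of the handler logic just described. The function and field names are illustrative, not the webinar app's actual code, and where the real app forwards the question to OpenAI, this sketch just echoes it back:

```typescript
// Slack sends slash-command payloads as an
// application/x-www-form-urlencoded body.
interface SlashCommand {
  token: string; // verification token identifying the Slack app
  text: string;  // the question the user typed after /chatgpt
}

function parseSlashCommand(body: string): SlashCommand {
  const params = new URLSearchParams(body);
  return {
    token: params.get("token") ?? "",
    text: params.get("text") ?? "",
  };
}

// Hypothetical handler: authenticate the token, extract the query,
// and (in the real app) forward it to the OpenAI API.
function handleSlashCommand(
  body: string,
  expectedToken: string
): { status: number; body: string } {
  const cmd = parseSlashCommand(body);
  if (cmd.token !== expectedToken) {
    return { status: 401, body: "Invalid Slack token" };
  }
  if (!cmd.text) {
    return { status: 400, body: "Prompt is missing" };
  }
  // The real app would await the OpenAI completion here and put
  // the answer in the response body instead of echoing the text.
  return { status: 200, body: `You asked: ${cmd.text}` };
}
```

The response body returned here is what Slack displays back in the channel or direct message.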
Alternatively, you can install it into any cluster you are managing, with either custom nodes or a DaemonSet. In this instance, we're going for a third option: Fermyon Cloud. Our goal is to build the easiest place to host Spin applications, with the ability to go from a blinking cursor in an empty directory to a deployed application in less than two minutes. With Spin and Fermyon Cloud, you too should be able to get a Slack GPT bot or other workload running in no time. Let's go take a look at the code and see this all in action. Great, so let's dig into it. If you're following along and you've not already installed Spin, go get that installed. You can grab it from the releases on our GitHub; find the latest version and the appropriate architecture for you. Once you've got that downloaded, just unpack it somewhere into your path. With a shell open, we can run spin. Spin is a pretty straightforward command line utility; we can see the help here. Spin includes a bunch of templates to help us get started and make this very easy and fast. So I can do spin templates list, and I see the various templates we have. A lot of these say HTTP, some say Redis, and there are a couple of other ones in here. HTTP and Redis refer to specific handlers that we have. With HTTP, we're taking in an HTTP request: we're listening on an HTTP port, we pass that request to your function, wait for you to return a response, and return that. Redis, likewise, we do the same thing with Redis. Redirect is pretty self-explanatory. And then there's the static file server: if you just want to host some static files, you don't have to write your own handler for how to return those; we have already created one for you. You can just use ours and configure it appropriately to point wherever your files are. So let's go create a simple one here.
Oh, I should mention we actually run the entirety of the Fermyon website on Spin, using a project called Bartholomew. It's a little more complicated than the static file server, but you can host applications of that complexity easily on Spin. So let's create a new application: spin new http-rust, and we'll call it hello-webinar. That created a new directory; cd into there. The first thing we want to look at is the spin.toml. This is in every Spin application and is the configuration for Spin. We can see first the trigger and the base here. We have to have some kind of trigger; that's how Spin knows when the app is being triggered. Again, we're using HTTP in this configuration, and we're going to trigger on anything off of the base URL. Each Spin app can have multiple components. This one happens to have only one, but we'll see an example later that has more than one. You can see first here the WebAssembly: you have to show Spin where the WebAssembly module is. Then we have allowed hosts. WebAssembly by default doesn't allow you to do anything, at least externally, so we have to give it access to anything we need to go talk to. In this instance, it's hello world, so we don't need to. Then we have the trigger route. Just like the base trigger here, I can define what this is going to trigger on. In this instance, it's going to trigger on anything, with the triple dot, but I could have one for, say, /api or /static, with multiple components that are triggered on different routes. And then finally, if you're not using a prebuilt WebAssembly that we provided, like the static file server, you need to tell Spin how to go build it. In each of these templates, we've pre-populated whatever the command is for that particular language to handle the build. So let's actually go look at the code itself. The first thing to pay attention to is the Spin SDK.
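For reference, the generated spin.toml looks roughly like this. This is a sketch of the Spin v1 manifest format from memory; the exact field names in your file may differ by Spin version:

```toml
spin_version = "1"
name = "hello-webinar"
version = "0.1.0"
# The trigger: an HTTP listener rooted at the base path
trigger = { type = "http", base = "/" }

[[component]]
id = "hello-webinar"
# Where the compiled WebAssembly module lives
source = "target/wasm32-wasi/release/hello_webinar.wasm"
# No outbound hosts needed for a hello-world app
allowed_http_hosts = []
[component.trigger]
# The "/..." wildcard matches any route under the base
route = "/..."
[component.build]
# Pre-populated build command for the Rust template
command = "cargo build --target wasm32-wasi --release"
```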
This, again, is one of our big advantages and what makes this so much simpler: we handle all of that stuff outside, handing you the request, and you just build the response. That really allows you to come down here and write just your function that takes a request and returns a response. This one's super simple; it's just a hello world example, or in this instance, hello webinar. So save that file, and now I can do spin build. How long this takes depends on the language and complexity of your application. Rust takes longer to compile, but you get better performance; Python is pretty much immediate here because it just adds the interpreter. So now we can do spin up and run this thing locally. And I see: hello webinar, we're running. Great. Now I can go about putting this into Fermyon Cloud and hosting it on the internet. So I do spin login, which allows us to log in to Fermyon Cloud. I put in this code and authorize, and now my command line is authorized to deploy to the cloud. Now I can do spin deploy, and we wait a few seconds here, and this thing should be ready. And there we go. So that's the route, and we can see hello webinar, we're up and running. Cool. All right, so let's go talk about the more complicated example, the reason we're here today in the first place. We've got the Slack chat GPT app. I previously showed you the URL to get to that, in case you didn't have it before: it's the Slack GPT repo on GitHub. I also provide a link at the end of this to that and all of the other calls to action, so you can get back to this if you need to. So let's look through this application. First, again, let's start at the spin.toml. It's a little bit more complicated here. The first thing you'll notice is different is that I've got these variables defined. I mentioned in the lightboard section that we're going to be using KV when we actually deploy this out to the cloud.
But to show that this application supports configuration as well, we define these here and can then pass them in as environment variables, which we'll actually show in a second. Then I have my first component here. Instead of using Rust like we were in the last one, we're using some JavaScript to build this application. You see this is the actual application that gets compiled, and then allowed hosts: because this thing needs to reach out somewhere, I have to tell it that it has access to reach out, and where that is. In this instance, it's the OpenAI API. And then, because I want to use a key-value store, I have to give it access to that. Just because I defined the variable up top doesn't mean the component has access to it, so I have to provide that here in the configuration, then define what it's going to trigger off of, and then provide the appropriate command to do the build. I also have a second component on here, which is our KV Explorer. I'll show you what that is in a second. You can see this is something pre-built; I'm just grabbing it off of Radu, our CTO's, GitHub, and I'm able to run the service and hand it access to the KV store as well. So let's look at the code itself. Again, now we're in JavaScript, or rather TypeScript, instead of Rust, but we can see some similar ideas here. We're pulling in the Spin SDK, and that's where a lot of the Spin magic comes from. Because I need to talk to OpenAI, I'm grabbing a library to handle that for me. And then, again, very similarly, I'm creating a function that's really just a handler: it takes in the request and returns a promise of a response. Cool. So now, how do I go about turning my request into my response? First off, I need those two keys that we talked about, so I'm going to go try to find those somehow.
First, if the key is still configured to its default value, because I have not passed something in through the Spin configuration, then I want to go open the KV store and grab the key from there. Otherwise, we will use the configured value, if that configured value does exist. Same thing for Slack. After I've done that for Slack, remember the first thing I want to do with Slack is authenticate that it's actually my Slack app that's talking to us. I don't want to put this out on the internet, have everyone using my OpenAI key, and have my credit card charged a billion dollars for all of their requests. So let's authenticate this appropriately against the right Slack app. We're going to grab the body out of the request, and out of that, I need to grab the token. I'm going to validate that the token there matches the token that I have configured here. Next, we're going to configure the OpenAI client so that we can open the connection, and then we're going to grab the text out of the body. If we don't have any text, if it was an empty request or something, I'm going to throw an error. Then, finally, we're going to create the chat here with the prompt that I have and send that whole thing to OpenAI to ask it to respond to us. We're going to wait for that response and handle some errors if it doesn't actually work. Once I get the response back, I'm going to extract the answer and ultimately return it as the response for my whole function, which sends it back to Slack, which will then put that into the channel. And then, of course, do some basic error handling. Cool. So now, what do I need to do to run this? Well, let's get into that directory. Because we're now using TypeScript and JavaScript instead of Rust, I need a plugin for Spin. We can do spin plugins list, and you see the plugins.
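That lookup order, use the configured value unless it's still the default, otherwise fall back to the key-value store, can be sketched like this. The Store interface, the placeholder value, and the resolveSecret name are illustrative stand-ins, not the Spin SDK's actual API:

```typescript
// Minimal stand-in for a key-value store handle.
interface Store {
  get(key: string): string | undefined;
}

// The placeholder the variable holds when nothing was passed in
// through Spin configuration (illustrative value).
const DEFAULT_PLACEHOLDER = "default";

// Prefer an explicitly configured value; otherwise read the secret
// from the key-value store, and fail loudly if it is in neither.
function resolveSecret(
  configured: string | undefined,
  kv: Store,
  key: string
): string {
  if (configured && configured !== DEFAULT_PLACEHOLDER) {
    return configured;
  }
  const fromKv = kv.get(key);
  if (fromKv === undefined) {
    throw new Error(`secret '${key}' not found in config or KV store`);
  }
  return fromKv;
}
```

In the app described here, the same lookup happens twice: once for the OpenAI key and once for the Slack token.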
I have js2wasm installed. If you don't have this installed already, run spin plugins install js2wasm, and make sure it's 0.4.0 and not a prior version; if you need to, you can run spin plugins upgrade to get to a newer version. Then I need npm for this. If I don't have npm installed, I should go install it; I'll let you Google how to do that. If I do have npm installed, I need to actually run an npm install to install things before I can then go run spin build. Cool. And so now we've created the WebAssembly, so let's spin that up locally. Now, because this is an API, it's not super useful to go look at it right here; it's throwing invalid Slack token. But I can use something like Postman to send a request to it. Here, again, I have invalid Slack token. Well, how am I going to configure the Slack token? I can go up here and say export SPIN_CONFIG_SLACK_TOKEN. This matches what we had in the spin.toml; it's just an uppercase version of the variable with SPIN_CONFIG in front of it. Let's set that Slack token to foo. Now I can do spin up again, and I can come here and say token equals foo. Now we got further: we're not invalid, but the prompt is missing. So now I can add text equals tell me a Kubernetes joke and send that. What do you call an all-day Kubernetes workshop? An event container. It's not that funny. So, hey, how did this work without the OpenAI key? Well, I cheated, and I don't want to show you my OpenAI key, so I already pre-populated that, the same way I just did with the Slack token: SPIN_CONFIG_OPENAI_KEY as an environment variable. So cool, now we can see that this thing is working. Let's go make it work in the cloud and then ultimately make it work in Slack. If you've not done it already, you're going to need an OpenAI key. So if you don't have an OpenAI account, create one, go to the keys page, and create a new key. Don't give it to somebody else.
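The naming rule used here, uppercase the variable name and prefix it with SPIN_CONFIG, is mechanical enough to capture in a one-liner. Note this prefix matches the Spin version used in this webinar; later Spin releases changed the prefix, so check your version's docs:

```typescript
// Map a spin.toml variable name like "slack_token" to the
// environment variable Spin reads it from, e.g. SPIN_CONFIG_SLACK_TOKEN.
function spinConfigEnvName(variable: string): string {
  return `SPIN_CONFIG_${variable.toUpperCase()}`;
}
```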
Use that and populate it in here. I also need to create a Slack app. I've started to create the app already; you'd go to create new app, but I've already got chat GPT here. If you scroll down, there is the verification token shown. I'm not scrolling down because, again, I don't want to expose it, but make sure you grab that verification token, because that's the Slack token we need. Then we're going to configure our slash command. So create a new command, call it /chatgpt, and now, what is the URL? Well, I don't have a public URL yet; I'm running this locally. But we're going to push this out to Fermyon Cloud, so I'm going to do spin deploy. Well, the local environment variables that I've set won't push out to the cloud, so how do I manage those? I can pass key-value pairs to the deploy: something like key-value slack_token equals whatever it is, and then key-value openai_key equals whatever it is. In this instance, I've already pushed them out there because, again, I don't want to expose them, so I'm just going to run spin deploy and see what happens. Give this another little bit to get going. All right, and we're up. So let's copy this URL into our URL field here, add a short description saying it queries ChatGPT, and a usage hint, which is just the question here. Save. Cool. Now, before we skip to the next step, we should be able to actually see, here in my KV Explorer, the other component that I have running. You can see that I've got these two keys already populated in here. You could actually use this to populate them if you wanted, but ideally, in a deployment step, you'd be using that key-value option. Now let's open Slack here and clear my screen again. Let me make sure I'm not going to write in the wrong channel, and then I can try /chatgpt tell me a spin joke. I don't know if that counts as a spin joke, but we can see now that it's working end to end.
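One note on how the answer shows up: by default, Slack displays a plain-text slash-command response only to the person who invoked it. To post the answer into the channel for everyone, Slack's API accepts a JSON response shaped like this; this is standard Slack slash-command behavior, sketched here, not code taken from this app:

```typescript
// Build a slash-command response body that Slack posts to the whole
// channel rather than showing only to the invoking user.
function slackChannelResponse(text: string): string {
  return JSON.stringify({ response_type: "in_channel", text });
}
```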
So hopefully this has been enlightening, to see how we can use Spin to very quickly create an application that ties together several things. Previously, if you were trying to do this in something like Lambda, it would require a bunch of different AWS services to provide that end-to-end experience and make it actually work, and then more code to handle all of this. And we were able to deploy it out and have it hosted in the cloud with Fermyon Cloud very easily, very simply, and get it running. As I mentioned, Fermyon Cloud isn't your only hosting destination. You could run spin up somewhere and have this just running. We've also seen a lot of people do this with Kubernetes, so we have a Kubernetes shim that allows you to just deploy this as a workload in Kubernetes. And if you have an existing Kubernetes cluster and existing technology, this becomes just a library instead of, say, a full new vendor. So hopefully this has been helpful for all of you. If you want to learn anything more, I wanted to put this slide here with a bunch of follow-up information. Follow us on Twitter. Go use our quickstart if you haven't done it, to get good practice with Spin. You can see more of this project. And if you're watching the video and want to get this slide, you can grab this QR code, and it'll take you to all these links so you don't have to try to type them out. If you have any questions, though, I'm in our Discord. Feel free to reach out; we'd love to chat, answer any questions, and hear people's feedback and experiences with Spin and with Fermyon Cloud. So thank you so much. I hope you found this all really useful.