I've done some work on the code base just to get through some of the trivial stuff that isn't necessarily teaching us anything new, but will allow us to get to the meat of what we're trying to learn, which is how to use OpenAI APIs to do stuff with AI in our application. What I've done is set it up so that this interface now lets you search for a podcast and returns a list of matches. So we'll go with the Joe Rogan Experience, and it returns a list of episodes. I've limited it to returning two episodes just for instructional purposes. We'll select one here, and you can see it's logging the mp3 file for that episode. If we were to open this up, that's where it would live. So that is our starting point.

Okay, so now we're going to implement OpenAI's Whisper API to transcribe the mp3 URLs that we're receiving from the Podcast Index API. In their documentation, under Audio, they tell you how to integrate this and give you some sample code in both Node and Python. I think by default a lot of their examples are in Python, but obviously we're using Node and Express on our backend, so we're going to go find their Node documentation. There it is. This is the npm command to install the library, so we'll run that. And we'll copy their usage code, just to get a basic hello world going for OpenAI's APIs. So I pasted that in there, and you can see it wants this API key. You'll have to create one of those and put it in your environment variables file, that .env file. I've got an account on OpenAI; maybe you do too. Go to the API section and view API keys. If you don't have one, it'll give you a chance to create one. Anyway, I have mine, so I'll go set it in that .env file and paste it in.

Now we'll start up our server. Oh, it gave an error: we can't use await at the root level like that. So we're going to put it in a function, testOpenAIConfig. It's an async function, which allows us to use that await. We'll paste that bit in there, and then we've got to call that function somewhere. We're no longer using this /api endpoint, so we'll just call it from there.

Okay, we've started up our server. Let's test that. Oh, we've got to start our front end too; that's from the client folder, so both in the root and in the client folder we run npm start. And now we've got our front end. Let's see what's happening on the back end. Oh yeah, we're not actually calling that endpoint yet. So in our App.js file, just our root component here, let's run a test function, testFetchCall. We can just use a useEffect hook for when this component loads; it won't have any dependencies. Let's get rid of that code, and then we'll just put a fetch call in here that hits that test API config endpoint. We've got to make this an async arrow function. And let's see, I called the route testOpenAIConfig, so I've got to rename it to match. There we go. That should call our function.

Let's restart our server, refresh, and see what happened on the back end. Incorrect API key. What happened there? Ah, the variable name has an extra underscore. Let's fix that, start it up, and test again. And now we can see in the server response that we've got our hello world working for OpenAI. Okay, so now we know we're authenticating to the OpenAI API, and that we can do a basic completion from this code sample here. But what we want to do is be able to transcribe audio. So let's go back to their documentation on how to do that.
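For reference, here's roughly what that server-side hello world ends up looking like once the await is wrapped in an async function. This is a minimal sketch, assuming the v3.x openai Node SDK (the Configuration/OpenAIApi style that was current when the Whisper endpoint shipped — newer versions of the SDK use a different client shape), a dotenv-loaded OPENAI_API_KEY, and an existing Express app; the function and route names are just the ones used in the walkthrough.

```js
// server.js (sketch) — hello world call to the OpenAI completions API
require("dotenv").config(); // loads OPENAI_API_KEY from the .env file
const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY, // watch the variable name — an extra underscore breaks auth
});
const openai = new OpenAIApi(configuration);

// await isn't allowed at the top level of a CommonJS file, so wrap it in an async function
async function testOpenAIConfig() {
  const completion = await openai.createCompletion({
    model: "text-davinci-003", // placeholder model from the docs of that era
    prompt: "Say hello world",
    max_tokens: 20,
  });
  return completion.data.choices[0].text;
}

// call it from an existing Express route so the front end can hit it
// (assumes `app` is the Express instance already set up elsewhere in this file)
app.get("/api/test-openai-config", async (req, res) => {
  const text = await testOpenAIConfig();
  res.json({ message: text });
});
```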
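And on the React side, the useEffect fetch we wire up looks something like this — again a sketch; the endpoint path and response shape are just what I'm assuming from the walkthrough, and the fetch to /api assumes the client dev server proxies API calls to the Express backend.

```js
// client/src/App.js (sketch) — call the test endpoint once when the component mounts
import { useEffect } from "react";

function App() {
  useEffect(() => {
    // async arrow function so we can await the fetch and the JSON parse
    const testFetchCall = async () => {
      const res = await fetch("/api/test-openai-config");
      const data = await res.json();
      console.log(data); // should show the hello world text coming back from OpenAI
    };
    testFetchCall();
  }, []); // no dependencies, so this runs only on the initial load

  return <div>Podcast app</div>;
}

export default App;
```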
And they've given us this code sample here. You can see a lot of it is just duplicate code, where we're establishing the connection to OpenAI. But this bit here is how we create a transcription. You can see they're using the file system module within Node to create a read stream of an audio file, an mp3, and using Whisper, and we get back a response, which is the transcript. Now, we're actually working with an mp3 URL, so we haven't downloaded that file locally. We'll deal with that in a second. But for now, let's see if we can use the code as it's provided here and give it a static mp3 file. So I'm going to copy this code in where we have testOpenAIConfig. I just need an audio mp3 file. Here's one here; I'll put that into my project, right here in the root, and we'll call it audio.mp3. Then we'll log out that response. What does it look like, response.text maybe? Let's see what comes back. Start my server. We've got to import fs; that's another issue here.

So let's do this. This is a lot of code to try and debug, and we don't have very good error handling in here, so let's revise a few things. One, we're going to add better async and error handling. We'll put this in a then statement to expect a response, and if we get an error, we'll handle it in the catch section of the code here and log the error. The next thing is, I'm not sure it's grabbing the audio file from the right spot. Node has this __dirname variable, and we're already importing the path module, so I'm going to use those to make sure it's going to this current directory and grabbing the audio.mp3 file. Then we'll use that in the code below: fs.createReadStream with that file path. Start this up and see what happens.

Still got an error: request body larger than max body length limit. I'm guessing that means our mp3 file is too large. I believe they only allow a 25 megabyte file, but I can't remember where I read that. So I'm pretty sure that's what's happening. Let's try something here, which is to open this up in an audio editor and take a snippet of it. "Hi, everyone. Welcome to the A16Z podcast. I'm Sonal." We'll just take, I don't know, a few minutes here, and export it as an mp3, audio-short. Let's copy that into our server. We'll rename the original to audio-long and this one to audio, making sure it's under that 25 megabyte file size; it's 2.6 megabytes. We'll worry about finding out exactly what the limitations are for uploaded audio later, because we're going to handle long files in a different way anyway: we'll divide them up into chunks so that we can handle podcasts of varying lengths. For now, let's start up our server again and see what's happening. It hasn't thrown an error. And there we go, we got a transcription. So that was the problem.

So there you go: we're using the OpenAI transcription API on an mp3 file. The next step will be to be able to ask questions about the text that comes back. We need to be able to sort of train our model on this data, get it familiar with what's coming back from our desired podcast, so that we can say, hey, in the A16Z podcast, what are the companies that are mentioned? For example.
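Here's roughly where that transcription test ends up, with the path fix and the then/catch error handling folded in. Again, this is a sketch assuming the v3.x openai Node SDK (openai.createTranscription, with the transcript on response.data.text); audio.mp3 is just the trimmed clip from above sitting next to the server file.

```js
// server.js (sketch) — transcribe a local mp3 with the Whisper API
const fs = require("fs");
const path = require("path");
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

// build an absolute path so we're definitely reading the file next to this script
const filePath = path.join(__dirname, "audio.mp3");

openai
  .createTranscription(fs.createReadStream(filePath), "whisper-1")
  .then((response) => {
    // in the v3 SDK the transcript text comes back on response.data.text
    console.log(response.data.text);
  })
  .catch((error) => {
    // e.g. "Request body larger than maxBodyLength limit" when the mp3 exceeds the upload limit
    console.error(error.message);
  });
```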