Well done. That means we can start with our next talk: migrating from Adobe Connect, by Jess Portnoy. Thank you, Jess. Hello everyone, and thank you for joining me today. Today we'll be talking about good deeds, specifically migrating from proprietary software onto freedom. This is my good deed of the season, and before I start: are there any Adobe fans in the crowd? Because you shan't like this session. If you do like Adobe, just warn me. Right, so look up the dictionary definition of Adobe. Essentially, a brick. But what is it really? It's a platform for virtual presentations and conferencing. I'll show you what it looks like, just so you get the general revolting idea. So, it looks like that. Sadly, we couldn't get customer consent to share the actual platform, so I'll have to show you some screenshots. Those who know me know I love to demo. There will be a live demo, but in parts, so I had to be creative. I couldn't show you the whole thing; sadly, such is life. So there's a presentation. They call them pods; I prefer the word widget, but never mind. So we've got this pod, we've got one for chatting, and we've got several others, and they can appear or not depending on the presenter's wishes. This is what we're about to talk about. Back to our presentation. Right, so recently, as with all good projects, this started late at night. Our co-founder called me and said, hey Jess, how are you doing? Well, I was fine until now. What's up? And she's like, no, it's nothing like that; we have a project we may need your help with. I'm like, okay, sure, what is it? And she said, well, speak to Jack. That's a solutions engineer in our company. I work for Kaltura, by the way. So I've spoken to Jack, and he says, listen, we have this customer, and he uses Adobe Connect. Like, blimey, what's Adobe Connect? And he gave me the pitch I've just given you. And I said, all right. So I'm an optimist. I figured I'll use their API.
I'll get the video file and then I'll migrate it onto our platform using our APIs. I'm an optimist; I was born this way, and I keep trying to change. So, no, it's not that easy, apparently. After doing some research, I discovered that, yes, they do have an API of sorts, but you can't obtain one cohesive file representing the Adobe Connect session. There are multiple files and you have to assemble them. And naturally, Adobe being Adobe, they're FLVs, right, Flash files. And they're not all independently playable either. So I said, yay, fun. By that time it was about quarter to twelve, so I gave it a rest for a few hours and came back in the morning. Now, I'll walk you through what I've done, which is basically leveraging open source software in order to migrate away from what I've just described, within what I'm allowed to show during this session. So, let's go. I've used FFmpeg, Selenium with Mozilla's geckodriver, and OpenCV, and using all these tools I've managed to produce video files representing these sessions and migrate from this platform to our own open source platform, Kaltura. Now, I'll show you demos in parts because, like I said, sadly I can't show the whole thing, but we'll have fun. I promise, we will do. Okay, so to start with, we needed some metadata, and for that we could use the API, so we have done. There are multiple clients that can be used to retrieve such data; I chose the Adobe Connect Ruby client. This is an actual link, so if you need that, you can use it. The reason I chose Ruby, other than it being a very nice language, is that I already had some code that does some Selenium work. Is anyone familiar with Selenium? Some people are. Good. It's very nice. It's essentially a browser automation framework: if you need to perform actions within a browser and you don't want to do it by hand, you can use Selenium. Very convenient bindings for lots of languages, not just Ruby.
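To give a feel for the metadata step: the Ruby client wraps Adobe Connect's XML web-services API. The sketch below only builds the request URL; the host and SCO id are made-up placeholders, and the exact parameter shape is my assumption about that API, not something taken from the talk.

```shell
# Illustrative only: host and SCO id are invented, and the URL shape is an
# assumption about Adobe Connect's XML web-services API.
build_sco_info_url() {
  # sco-info returns metadata (name, URL, dates) for a single asset
  printf '%s/api/xml?action=sco-info&sco-id=%s\n' "$1" "$2"
}

build_sco_info_url "https://example.adobeconnect.com" "12345"
```

In the real flow you would fetch this URL with an authenticated session and parse the XML response, which is exactly the chore the Ruby client takes care of.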
I just happened to have some code written in Ruby already, so I figured I'd save some time. So I've done this. Using that Adobe Connect client, I was able to fetch some data, such as the asset name, the URL to obtain it, its duration — sometimes, and it turned out to be inaccurate too, but it looked promising to begin with — and who owns it. So for that portion, I used the Adobe Connect Ruby client. Of course, this is not an official Adobe client; someone just had the same problem and wrote a solution for it, but it was a good solution. So if you ever need it, I can recommend it. Now, on to obtaining the assets. To properly appreciate what needs to be done: like I said, there's no one video file kept within that system representing the entire session. It's segmented, so there are files for the audio, others for the video, and there are multiple files for each of those, and then you've got the widgets. Assembling audio and video files is very easy with FFmpeg. That's not a project; that's like five minutes' work. But because of these additional FLVs — and let's return to what this thing looks like: these pods, right, these are FLV files where the data is stored, and the data cannot be extracted without writing code in Flash, which I certainly did not want to do. So I had to find a different solution. The solution I came up with: I figured, okay, fine, I'll use Selenium, I'll launch a web browser, I'll navigate to the recording and load it using their SWF file, right, their Adobe software. I'll play it using that software, and I'll record it using FFmpeg and x11grab. So I'm essentially automating the process of navigating to that URL, then recording the whole X screen display using FFmpeg, and then I manipulate that file further, as I'll walk you through.
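The screen-capture step just described boils down to one FFmpeg invocation against the X display. This is a sketch, not the project's actual command: the display name, capture size, frame rate and file names are all illustrative.

```shell
# Sketch of the x11grab capture: the display (:99), size, frame rate and
# output name are illustrative, not the project's real values.
build_grab_cmd() {
  display="$1"; seconds="$2"; out="$3"
  # -f x11grab treats the X display as a video source; -t caps the length
  printf 'ffmpeg -f x11grab -video_size 1280x720 -framerate 30 -i %s -t %s %s\n' \
    "$display" "$seconds" "$out"
}

build_grab_cmd ':99' 3840 session.mkv
```

The function only assembles the command string here, so you can see the shape of it without needing an X display or FFmpeg installed.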
So this is what I did, and it worked quite well, but I had several difficulties. Let me show you what such a file looks like, for those of you who, like me, lack imagination. Usually people ask me to increase the font — is that all right? Can you see okay? Right, so this is a typical archive. They do vary, though: there were several different formats I had to tackle in order to get this done, and they also had a combination of various versions of Adobe Connect — eight and nine — and the format wasn't quite the same, and the metadata was slightly different. You know, the usual proprietary rubbish. Fun. So as you can see, we've got FLVs and we've got XMLs. Some of these FLVs are actual valid audio and video files. These are easy, because again you can use FFmpeg to merge them, and that's fine. But the rest of them are not, strictly speaking, media files: they're FLVs containing the data used to display the chat box, the file-sharing widget, and all these other pods that you see in conferencing software. Without using Flash and reverse engineering their format — which of course is not documented anywhere — I couldn't get to the data, which is why I had to do all this. So I figured, okay. Seeing how we're recording it with FFmpeg and x11grab, we don't really know when the playback even begins. To start with, when you load that SWF in a browser, the first thing you're confronted with is this, and it takes anything from 30 seconds to five minutes to finish. So in fact you don't actually know when the session started playing back to you, and you're doing it all automatically — you're not a human being watching it — so you honestly don't know. My first problem, then, was to discern when the session actually started playing in the recording, because I had to trim away the Adobe Connect 'connecting' rubbish at the beginning.
I thought I might need to do that using ffprobe — that's part of the FFmpeg toolset. How many people are familiar with FFmpeg, by the way? Quite a few. Good, so you know what that is. Essentially, ffprobe gives you metadata about media files. I figured I might have to run ffprobe looking at two frames at a time, comparing them, and discerning when the connecting thingy stopped and the recording began. But after doing some research, I discovered that FFmpeg has a lovely, lovely feature called scene detection, and it works brilliantly. I'll show you how it works. So this is our command. Can you see that? No? I know what I'll do, hold on. We're talking about this one. Okay, so what this does is analyze the frames in the video to determine when the scene changed. This is good, because it shows the progress bar with the 'connecting' for a while and then the first screen loads up, so that's a massive scene change. So all I need to do is find the point in time within my recorded video — the one I captured from the screen display — when the video actually started, and trim everything beforehand, all that loading rubbish. I'll show you a demonstration of the scene detection feature outside of this context, because, again, I can't show the real thing. Let's run that command. What it does — well, the way I've set it up — is write the detected scenes out as JPEGs from the Big Buck Bunny video. So it accepts a video file as input, detects when the scene changed, and captures the frame. What you see here is essentially a trail of the changing frames. Now, you can adjust the sensitivity of that. For instance, here my choice was — if you can see okay — 0.1. Now, if I were to change that... let's just look at what it looks like now. All right, so these are all the frames that changed within that video at that level of sensitivity. Now let's be less sensitive and do 0.4. So we get fewer frames. Okay.
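The scene-detection demo just shown is roughly this shape of command. The 0.1 and 0.4 thresholds are the ones from the talk; the file names, and the function wrapping, are mine for illustration.

```shell
# Sketch of the scene-detection command; thresholds 0.1/0.4 are from the
# talk, file names are placeholders.
scene_thumbs_cmd() {
  input="$1"; threshold="$2"; pattern="$3"
  # select='gt(scene,T)' keeps only frames whose scene-change score
  # exceeds T; -vsync vfr then writes one image per surviving frame
  printf "ffmpeg -i %s -vf \"select='gt(scene,%s)'\" -vsync vfr %s\n" \
    "$input" "$threshold" "$pattern"
}

scene_thumbs_cmd big_buck_bunny.mp4 0.1 'scene_%03d.jpg'
```

Raising the threshold, as in the 0.4 demo, simply raises the score a frame must reach to count as a scene change, which is why fewer JPEGs come out.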
This feature is very handy, and it's exactly what I needed. So that helped me: now I knew when the recording actually started playing back, so I could trim everything beforehand, all that connecting business. Now, as for the duration of the actual recording, I thought I could get that from their API. Sadly, that turned out to be true for only about 20% of the assets; the rest didn't have it. So I'm like, blimey, how am I going to get the duration then? At first I thought I might have to record hours and hours, then trim all the rest and throw it in the rubbish bin. And then I figured: I do have the audio files, and those I can merge together using FFmpeg, and then I can probe the result for its duration — and presumably the audio track and the video track would be roughly the same. And I say roughly because it isn't exactly the same. Why? When you start a conference, no one's ever ready. Have you ever attended a virtual conference? It goes like this: hey, can you hear me? No, we can't. Now? No. And now? No. And then someone says, yeah, I can hear it, and someone else says, I can't. And that takes like half an hour. And then they start their presentation, and then they lose the audio, and then they get it again, and so forth. So it's roughly the same length, but not really. But this is what I had to work with. So when I could obtain the duration from the API, I did; otherwise I had to use this method. I essentially downloaded the archive I've shown you before, which was easy to do, luckily. Then I assembled the audio files, because there are multiple ones, into one complete audio track, and then I probed it for the duration. And this is how I knew when to cut the recording. Now, because of that loading period in the beginning, I had to add some buffering.
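The buffering arithmetic described next is simple enough to sketch directly: pad the audio duration with two minutes on each side. The function name and the optional override are mine; the two-minute figure is from the talk.

```shell
# Padding sketch: two minutes of slack on each side of the audio
# duration, per the talk; function name is illustrative.
buffered_duration() {
  audio_secs="$1"
  pad="${2:-120}"            # two minutes, in seconds, by default
  echo $(( audio_secs + 2 * pad ))
}

buffered_duration 3600       # an hour of audio -> record 3840 s (64 min)
```

In the real flow the input would come from probing the merged audio track (e.g. ffprobe's duration field) rather than being hard-coded.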
So: two minutes at the beginning and two minutes towards the end. If my duration was originally an hour, I recorded an hour and four minutes. And that was usually okay — except for all the times it wasn't. Then I recorded with eight extra minutes, and that was fine. And then, after I had that, I figured: okay, but the customer had over 40,000 assets. I can't do this one by one; I need parallel processing. So I thought, okay, that's fine: I used the xvfb-run utility. Do you know it? Oh, you should. Get ready to be excited. What this does is allow you to run X11 applications — graphical applications — on a frame buffer. So you can open multiple virtual X displays, run your browser there, and record your session. Now, this is brilliant, because it means I'm no longer limited by anything apart from hardware resources. However, there is a caveat: the audio is a problem when you go about it that way. But that's fine, because I already had the audio. So I figured, okay, I just need the video display. I'll use Firefox and their SWF for that, I'll record it, I'll trim it according to the length of the audio track, I'll merge the two together, and we're done. And that almost was the case. By the way, are you following okay? Am I going too fast? Am I being coherent? Good. I do try. All right, so back to the flow. First, download that zip archive I've shown you. Use Selenium and Mozilla's geckodriver to launch Firefox and record the session using their own software. Then, once that's done, use scene detection to discern when exactly the session started playing back to us. Then merge the audio and video files together. And the easiest part of all: upload them to our platform. That's very easy to do, I promise. All right, so: parallel processing, we were there. We had that utility. I had to make slight changes to it — nothing major, just to properly support parallel processing.
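The concurrency cap can be sketched in plain shell. This is an approximation of the idea, not the xvfb-run-safe code itself: `process_one` stands in for the real xvfb-run + Firefox + FFmpeg job, and the batch-and-wait loop is the simplest portable way to bound concurrent jobs.

```shell
# Simplified concurrency sketch; process_one stands in for the real
# per-recording xvfb-run + Firefox + FFmpeg work.
MAX_PROCS=7
done_log=$(mktemp)
process_one() { sleep 0.05; echo "$1" >> "$done_log"; }

count=0
for rec in rec1 rec2 rec3 rec4 rec5 rec6 rec7 rec8; do
  process_one "$rec" &
  count=$((count + 1))
  # once a full batch is in flight, wait for it before starting more
  if [ "$count" -ge "$MAX_PROCS" ]; then wait; count=0; fi
done
wait                          # let the final partial batch finish
wc -l < "$done_log"           # all eight stand-in recordings processed
```

A fancier version would refill slots as individual jobs finish rather than waiting for whole batches, which is closer to what a dedicated wrapper can do.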
If you need that sort of automation for other things you're doing, you should definitely go for it, and you can grab it from the repository for this migration tool. Now, like I said, the number of jobs is only limited by hardware resources: the more resources you've got, the more concurrent jobs you can run. Using that method, we were able to obtain a recording of a video that looks exactly like that, only moving. I can't show it to you, sadly, again — but, you know: the presentation, the chat, and so forth, plus the audio track. And that was fine. But our platform also has what I find to be a very handy ability, which I can't demo using their assets, but I can with one of ours. So this is our lovely founder, Michal — the one who asked me for help late at night. As you can see, we've got slides and we've got video, and they both run in parallel within the player. You can toggle the display and say, show it to me like that, or like that, and so forth. Now, what are these slides? They're essentially images, right? Thumbnails of the presentation. And I said, okay, it would be very nice if I could extract these images from the video and process them as thumbnails, so we could have that sort of display: video with slides on the side. Also, these have metadata, so you can search for it, which is brilliant. I don't always have the patience to watch videos — I mean, I work for a video company, but I'm a very impatient sort of person. So if you can search for something within a video, that comes in very handy. And these slides — each of these thumbnail objects, we call them cue points — can have metadata. So that's more productive: it's searchable, you can discover more content that way, and you can jump straight to the right slide and so on. So I wanted that.
Our cue point objects are, like I said, an image representing the slide, the title for the slide, a description — basically the content within the slide — and its time in relation to the video. And that's it. So I started thinking of a way to harvest that data from the recording I'd obtained by the means I've already described. And I figured, okay, let's remember what it looks like. It looks like that. So I said: I need something that would detect all the squares, or rectangles, within a given frame, and then I could get the coordinates for the slide widget. Then I could process that: take it, create an image out of it, upload it to our servers, and set it up as a cue point. Now, you may be thinking: couldn't you just get the coordinates once and be done with it? Why do you need to detect anything? Because you're allowed to rearrange the elements within Adobe Connect, right? If you're the director of your little session, you can move that widget to the side, you can expand it, you can delete other widgets and get rid of them, you can minimize some of them. So it's not a constant, and dynamic detection had to be done. For that, I used OpenCV, and I've created a small demo for you. Right, so this is what it looks like. Now, let's detect. These are all the squares within our frame, okay? Naturally, I need this big one. Now, how do you know which one to take? That can be a bit complex. I went on the assumption that it will be the second biggest. Why? Because the first one is this, right — the browser screen. And the second biggest is the presentation widget. This is almost always the case, so within a margin of error I was able to get that done properly. An alternative to that, by the way — eventually the customer didn't like all my fancy-schmancy features. He said, this is very nice and impressive...
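The "second biggest rectangle" heuristic itself is tiny once the detections exist. Here it is sketched in shell, with invented detections (one `x y w h` per line): the biggest box is the browser window, so the runner-up by area is taken to be the slide widget. The real project does the detection in OpenCV; only the ranking step is shown here.

```shell
# Invented detections, one "x y w h" per line; the 1280x720 box plays
# the role of the full browser window.
detections='0 0 1280 720
40 60 900 620
960 60 280 300'

second_biggest() {
  printf '%s\n' "$1" |
    awk '{ print $3 * $4, $0 }' |   # prefix each line with its area (w*h)
    sort -rn | sed -n '2p' |        # rank by area, keep the runner-up
    cut -d' ' -f2-                  # drop the area prefix again
}

second_biggest "$detections"   # -> 40 60 900 620
```

Those four numbers are the crop coordinates you would then hand to FFmpeg to cut the slide image out of the frame.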
I liked it better beforehand. And, you know, you've done a good job, now shut up. So I said, fine, I can take yes for an answer, and I dropped it. But thinking about it, another thing I could have done: OpenCV can also do text detection, and it can actually find words within a frame. And because presentations and slides usually consist of words, I could say: okay, find a very big square within certain limits, and then verify that it has words within it. Then I could be almost positive — because it's a big square and it has words in it. It's like how you'd describe a presentation to someone who's never seen one, right? It's just a big square with images and words in it. That's a slide. So, something like that. Now, there are problems with this approach, and I'll show you some, because we all love problems. So, this looks nice, okay? That's my presentation, that's my slide. Next up, this one is a bit of a problem. You see that? It's no longer a square; it's something else. So it's not guaranteed: this method, depending on how you've built your slides and your theme for the slides, may mess up slightly. But generally speaking, even so — in this case it would have taken the big, semi-rectangular area here and processed it. So even though a portion of it is cut off, you can still make it out. So it's not that bad, but it's not a perfect method. Now, you see here, same thing. Why? Let me show you the original again — let's do them side by side, maybe. One minute. Right. So that's the detected one and that's the original. You see, it gets confused because of this line. That's part of the slide — part of the theme for the slide — and that confuses it. So it's not bulletproof, but it's nice and it's still beneficial. For the metadata, I found that Adobe keeps XMLs with the actual strings within the slides, so I used those.
That was easy enough. And how did I know when the slides changed? Essentially, I used the scene detection feature again and said, give me all the changes. Then I knew when they changed the slide, because the screen is mostly static — let's remember again what it looks like. Nothing ever changes dramatically when they give a session, apart from switching the slides. The chat gets a bit of action, but we don't care about that. So, I'll show you the code for this. How much time do I have left? Sorry — nothing, I've said nothing. So, essentially, what this does is run this command to detect the scenes: ffprobe with -show_frames and so on, select with the scene filter and the level of sensitivity. I didn't want to set it too high, because then changes in the other widgets affect it; after some trial and error, this was the best setting for this sort of project. Then I grab the pkt_pts_time of each frame — that's what I need — and I put it into a text file. And what I get in the end is something that looks like this. Hold on. Okay, like this: just the numbers, the times in the video when a change of scene happened. Now, once I have this, I can use FFmpeg to extract an image out of each of those frames. And that's what I do, same as I showed you with the Big Buck Bunny video: detect the scene changes, then create an image out of each frame where the scene changed. And that gives me the slides, basically. Simple enough, I hope. Okay. Were there challenges? Like I said before, I had this issue of when the recording actually started and ended. For this, let's look at the code a bit more, because we have time, which is nice. I'm usually out of time; I never get 50 minutes. I don't know what happened to Christoph — he never gives me 50 minutes. Oh, it's 20. All right. Thank you, by the way.
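Turning the ffprobe output into that times file is a one-liner. The sample below imitates, in simplified form, the `pkt_pts_time` fields you would grep out of `ffprobe -show_frames` with a scene select filter; the values themselves are invented.

```shell
# Sample lines imitating the pkt_pts_time fields from
# "ffprobe -show_frames" with a scene select filter; values are invented.
sample_frames='pkt_pts_time=99.32
pkt_pts_time=241.08
pkt_pts_time=377.51'

scene_times=$(mktemp)
printf '%s\n' "$sample_frames" | cut -d= -f2 > "$scene_times"
cat "$scene_times"
```

Each resulting number is then fed back to FFmpeg as a seek point to extract one JPEG per slide change.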
So, we start with the AC wrapper — AC being Adobe Connect. Let's look at what that does. Can everyone see okay? Yeah, good. Right. So: checking for some necessary utilities. xvfb-run, and xvfb-run-safe, which is my code attending to the concurrency issues. I needed that. By the way, the repository is in the resources slide, but just to show you anyway: it's AGPL, so you may use it freely. Naturally, we'd like you to migrate to Kaltura, but it's not mandatory — if you just want to get a flat video file for the recording, you can do that. The general algorithm could also be adapted to other proprietary web conferencing platforms — WebEx, for instance, or others — where something similar is going on. The format is going to be different, but the overall idea is similar. So: AGPL, like I said; there's a rather extensive README and a nice setup script too, so you don't really have to work very hard. And of course, contributions are most welcome — if you're interested, do send a pull request. What I wanted to show you was xvfb-run-safe. This is an all-purpose script, so if you need that sort of automation, you can just grab it. It runs standalone and doesn't have any dependencies other than xvfb-run and whatever that brings along with it. All right. We start by setting the maximum concurrent processes we're willing to run. Again, that varies depending on your hardware. You can set all of that in an RC file; all of these values are just plain shell environment variables, which makes it very easy. If they're not set, sometimes I set a default for you — like in this case, 7. Why 7? I don't know, it's a nice number. I like 73 best, but that's a bit high, so I went with 7. Right. Then we start processing the list of recordings we want to migrate. That can be a mighty long list or a short one, doesn't matter. And like I said, it's a very simple format.
I didn't want to get too fancy. So: SCO IDs — that's their asset ID; every asset, every session has one. Category name — I used their structure for that. So if you're, say, a university, you'd have a CS faculty, and then machine learning, and so on — subcategories. And our platform, of course, also supports categories, so we could migrate the content and create these categories, preserving the same structure the original system had. All right. Meeting name, its description, ID, and so on, then the date it was created, and the owner of that video. Our system, by the way, also supports custom metadata, so you can create whatever fields you'd like and expand the schema. So whatever they had, we were able to easily migrate onto ours once we'd fetched the data. That was easy. Okay. Then we launch the X virtual frame buffer utility, as many times as allowed by the max concurrent processes setting. And if we've exceeded that, then we just wait for something to finish. So that's that. And that calls another script, and that's the Ruby part: AC new. Why new? Because I first called it ACRB, and then I refactored the hell out of it and now it's called AC new. I'm lazy. Anyway. It doesn't have many dependencies: a bit of JSON, Selenium for opening the browser and so forth, open-uri, fine, a logger, and our own client library for Ruby, so that we can ingest the content we've created by following this procedure onto our platform. Like I said, this is not mandatory — the tool is also capable of just producing the file. I am very much opposed to vendor lock-in, I don't know if you noticed, so I created it in a way that if you use other platforms, it's still of use to you. All right. Now, as for what it does, I'll briefly walk you through it. We need the Adobe Connect endpoint, which we'll use in Firefox later on.
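Reading such a list is plain shell. To be clear, the column order below (SCO id, category path, meeting name, owner) is my guess at the simple comma-separated format being described, and the sample line is invented.

```shell
# Guessed column layout for the recordings list (SCO id, category path,
# meeting name, owner); the sample line is invented.
line='12345,cs-faculty/machine-learning,Intro lecture,alice@example.com'

IFS=, read -r sco_id category meeting owner <<EOF
$line
EOF

echo "SCO $sco_id in category $category, owned by $owner"
```

A wrapper would loop over every such line, exporting the fields as environment variables for the per-recording script to pick up.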
Then a bunch of other variables: where to output to, the meeting name. These are all exported from the wrapper script I've shown you. Or maybe we can show them side by side, just for fun. All right. So we basically export these, and then the Ruby file reads them. We verify that we've got everything we need, and we get cracking. We check if there is an audio track in that zip archive, and we process it like I explained before, starting by getting its duration. And we strip the newline when obtaining the duration, because we run that on the command line, so we have to trim the trailing newline. Then we report failures, because we're nice. Okay. This one grabs the screen display of the browser playing the asset using their Adobe SWF. Let's dive into this function. It accepts the path to FFmpeg; if nothing is passed or set in the environment variables, it just uses whatever's in the path. Word of advice: FFmpeg 4 is far better, and it's the latest — use the latest if you can. I also have a side project for building FFmpeg completely statically, which comes in very handy. If you ever need that, it's this one. It produces one binary for ffmpeg and another for ffprobe, utterly without runtime dependencies. Now, naturally, this is ill-advised in many situations, but sometimes you need it, and when you need it, you can use this one. Yes, everything: libstdc++, libc, whatever you want, it includes. My mom's in there. She's a bit of a nag, but she's a nice woman. Hi, mom. Right, I shan't show it to her. Okay. So, anyway, the x11grab part basically just calls this command with a resolution and frame rate. We've gone with something rather traditional — I'll show you where that's defined. I didn't want to be too fancy, so 30; we've gone with 30 frames per second and that was all right. The resolution is this. Okay. Now, this is interesting. Why do I have to crop?
I had to crop because there was a bug in Mozilla's geckodriver where, in completely full screen, it crashed quite miserably. It was so sad. So, you know, I reported it, and naturally it eventually got fixed — long after the project ended, but it got fixed. In the meantime, I figured, okay, I shan't do full screen, and therefore I had to trim off the browser's chrome: the address bar, the menu options, and so forth. So that's the cropping: I just shave a few pixels off the top, where the menu bar and all those options are. This image comes in handy to show why — hold on, let me show it to you again. Yeah: this portion, the window frame, the window decoration — I had to crop that out. Cheers. Okay. And then if that worked, we're happy and we continue on; if it failed miserably, we log the error and hope someone knows how to fix it. Okay. Next up, after getting the screen recording, comes the scene detection. Like I said, we need to find the first scene after the bloody progress bar — let's all be reminded of that progress bar. This one. It's a beaut. So we need to find the first scene change after that. All right. Next up: trim the video. So, right, we've got the first scene. Let's say playback started at one minute, 39 seconds; then we throw away the first minute and 39 seconds of the recording. Good. And we use the audio file's duration as the basis for our expected duration. Usually — well, there are some very interesting edge cases, but I won't go into those now. Okay. Audio track: merge the two, check whether or not it worked. And then we're done with processing the file, and we have one cohesive MKV file that can be played with most modern video players. And now we can ingest that onto our Kaltura platform, if need be. So if these variables were set, then we do the ingestion. Now, ingesting with Kaltura is very, very easy.
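The crop itself is just arithmetic on FFmpeg's `crop=out_w:out_h:x:y` filter. In this sketch, an assumed 80-pixel strip of browser chrome is shaved off the top of a 1280x720 grab; the pixel counts are illustrative, not the project's actual values.

```shell
# Crop sketch: shave an assumed 80-px strip of browser chrome off the
# top of a 1280x720 grab; the numbers are illustrative.
crop_filter() {
  w="$1"; h="$2"; bar="$3"
  # crop=out_w:out_h:x:y — keep full width, start `bar` rows down
  printf 'crop=%s:%s:0:%s\n' "$w" "$((h - bar))" "$bar"
}

crop_filter 1280 720 80      # -> crop=1280:640:0:80
```

The resulting string would be passed to FFmpeg via `-vf`, alongside the x11grab input options.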
I'll show it to you because, A, we're a very nice company, and B, well, you know, they paid for the trip. So, okay, there you go. We get a client; the entry name, which is usually the Adobe meeting name; and the meeting ID, which we set as a tag for reference, so that whenever someone opens it in our platform, they'll be able to tell what its source on Adobe Connect was. Our platform also supports chunked uploading: if the asset is very big, you can split it into smaller files and upload them in parallel, which naturally makes for a faster upload. In this case, though, these files are very small — it's essentially just one slide that changes, so they're not massive at all. A huge recording of two and a half hours was about 17 megs. So we didn't really need chunked uploading, but it's an option. Right. Then we create an upload token, we create a new media entry in our platform, and we use that token to ingest the resulting recording of the session after all these previous manipulations. And that's pretty much it. That's the process. And we create some additional metadata, like I said, to match what they had in Adobe Connect. Right, I reckon I'm done, so I'll break for questions. I was either very, very precise or very, very boring. Okay — I have an unimportant question, a very boring question: do you have to use dos2unix often in your shell scripts? Sadly, because I get input from loads of Windows users, yes. Yes, sadly. How about you? We can commiserate a bit later. I can see a fellow sufferer; I can always spot them. Another question: you talked about 40,000 recordings. Are the scripts run by users, or where do they run? So, the customer wanted to run it himself, and I was fine with that. We just gave him the code.
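The chunked-upload idea can be shown in miniature with `split`: cut a file into fixed-size pieces that could then be uploaded in parallel. The sizes and names here are invented — a 100 kB stand-in file split into 30 kB chunks — and the upload itself (token creation, media entry) is left out.

```shell
# Chunking sketch: a 100 kB stand-in file split into 30 kB pieces that
# could be uploaded in parallel; sizes and names are invented.
tmpdir=$(mktemp -d)
head -c 100000 /dev/zero > "$tmpdir/recording.mkv"
split -b 30000 "$tmpdir/recording.mkv" "$tmpdir/chunk_"
ls "$tmpdir" | grep -c '^chunk_'   # 3 full chunks + 1 remainder = 4
```

For the tiny slide-change recordings described in the talk this buys nothing, which is why it wasn't needed; for multi-gigabyte assets, parallel chunk uploads are where the time saving comes from.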
And a colleague of mine, a lovely, lovely friend — by the way, that reminds me, let's do the credits before I forget, because I'm notorious for forgetting and I'd never forgive myself. Sorry, just one second. First of all, thank you, thank you, thank you to all the open source projects I've used; I couldn't have done it without them. Next up, my friend and colleague Hila. She joined this project after the POC was done, and she's been amazing, supporting the customer, helping them troubleshoot their issues, and investigating the few assets that didn't quite work out. So, Hila, I hope you'll watch this. Cheerio and well done. And also Jack Sharon, our solutions architect: he did some research and always believed this was possible, so thanks to Jack too. Go on — sorry again. So, I mean, do the 40,000 users run it, or does it...? No, no: one person running it on 40,000 entries. They set up several VMs, all running Ubuntu — though we're not limited to Ubuntu necessarily; they used Ubuntu 16.04 and set it up. There's a very simple setup script; I'll show it to you. Let's find it. Seeing how they'd chosen Ubuntu, I created the script to support that. This one. Can you see that? Okay. It just deploys all the necessary dependencies — naturally, it needs the Flash plugin to load that SWF, plus a few others that are very easy to set up — and then you just set your environment variables according to your needs and you're done. So they ran that, and then they just ran the wrapper, giving it the full list of all their 40,000 recordings. Naturally, they divided it among several machines. Are there questions? Anything else to add, Jess? Just the utilities I had to use — that's in response to your earlier question. Thank you very much for attending, and I hope you enjoyed it. Thank you, guys.