The following is a Coderland presentation about our newest attraction, the Compile Driver.

Hello, this is Doug Tidwell for Coderland. In this video, we'll take a look at a serverless function that manipulates image data captured by the webcam next to the Compile Driver. Given a picture of a happy guest, the function adds a message, a date stamp, and the Coderland logo. As you can see, the results are striking.

To get started, clone or fork our repo; it's available at the URL on your screen. The code follows the common function-as-a-service convention of using JSON to move data. Obviously, we're using binary data here, so we have to base64-encode and decode it as it moves from the Coderland swag shop to your serverless function and back. The code is a Spring Boot application, so we'll use Java's built-in libraries for the base64 work, and we'll use the Jackson JSON library that comes with Spring so we don't have to worry about JSON syntax. The majority of the code in the function does the image manipulation, as it should be.

The JSON structure used by the function contains six fields. The most important is imageData, which contains the base64-encoded pixels from the image. There's also imageType, which indicates whether this is a JPEG or a PNG. The other field you might want to use is greeting; that's the text written at the top of the image. Beyond that, there are a date format string, a language, and a location. Basically, we took everything that was hard-coded in the function and made it a field in the JSON structure. The repo contains a file called sampleinput.txt that you can use for testing. Feel free to change its values and see what happens.

To make life simpler, we created a Java Image class that uses the Jackson library we mentioned earlier. The members of the class are all defined with the @JsonProperty annotation, so the JSON is automatically parsed and turned into a Java object.
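As a rough, plain-Java sketch of that payload: the field names below are guesses based on the six fields described above, and the real class in the repo carries Jackson's @JsonProperty annotation on each member (omitted here to keep the snippet dependency-free). The helpers show the base64 round trip using Java's built-in java.util.Base64.

```java
import java.util.Base64;

// Hypothetical sketch of the six-field JSON payload described above.
public class ImagePayload {
    public String imageData;  // base64-encoded pixels of the image
    public String imageType;  // "jpg" or "png"
    public String greeting;   // text drawn at the top of the image
    public String dateFormat; // format string for the date stamp
    public String language;   // language/locale used for the date stamp
    public String location;   // location string used by the function

    // Convenience helpers for the base64 round trip, using Java's
    // built-in Base64 support as mentioned in the video.
    public void setPixels(byte[] rawBytes) {
        imageData = Base64.getEncoder().encodeToString(rawBytes);
    }

    public byte[] getPixels() {
        return Base64.getDecoder().decode(imageData);
    }
}
```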
When we define the serverless function, we say that it takes an Image object as input and returns one as its output. Jackson handles all of the JSON work for us, so we simply use the objects.

Next, we need to set up the method that handles the POST request. The @PostMapping annotation tells Spring Boot that this method handles POST requests for the overlayImage endpoint. We're also saying that we take JSON in and send JSON out, and there's a @CrossOrigin annotation to handle any CORS issues that might come up.

From here, we'll breeze through the actual image processing. We create a BufferedImage object from the decoded image data, then we create a canvas. We draw the decoded image onto the canvas, then we create an alpha channel. The Coderland logo is loaded as another BufferedImage object, and the logo is drawn onto the canvas, centered vertically along the left side.

Finally, we need to draw the text of the greeting and the date stamp. We use Font and FontMetrics objects to center the text on the image. One thing we don't do is make sure the text actually fits on the image. I'd like to say I left that as an exercise for you, the home viewer, but to be completely honest, it was just laziness on my part.

Here's another character-building opportunity: the code originally looked for Overpass, which is Red Hat's official font, but when we deploy this code to Knative, it has to be packaged as a container image. The base OpenJDK image doesn't have that font installed, so we just went with the generic SansSerif font. If you figure out how to modify our Dockerfile to install Overpass, we'd love to see how you did it; or even better, just send us a PR.

Once the greeting and the date stamp are drawn, we write the image data and encode it. The last step is to create a new Image object and return it to the caller. Again, the Jackson library handles all of the JSON mangling required, so we don't have to worry about it. That's how we do the image processing itself.
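To make those drawing steps concrete, here's a minimal, dependency-free sketch of the pipeline: decode the base64 data, draw onto an ARGB canvas, center a greeting with FontMetrics, and re-encode. The method and variable names are mine, not the repo's, and the real function also draws the logo and the date stamp.

```java
import java.awt.Color;
import java.awt.Font;
import java.awt.FontMetrics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Base64;
import javax.imageio.ImageIO;

public class OverlaySketch {
    static { System.setProperty("java.awt.headless", "true"); } // no display needed on a server

    // Decodes a base64 image, draws a centered greeting near the top,
    // and returns the result as a base64-encoded PNG.
    static String overlay(String base64Image, String greeting) throws IOException {
        byte[] bytes = Base64.getDecoder().decode(base64Image);
        BufferedImage source = ImageIO.read(new ByteArrayInputStream(bytes));

        // The canvas gets an alpha channel (TYPE_INT_ARGB), as in the video.
        BufferedImage canvas = new BufferedImage(
                source.getWidth(), source.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = canvas.createGraphics();
        g.drawImage(source, 0, 0, null);

        // Center the text horizontally using FontMetrics.
        // (Like the original, this never checks that the text actually fits.)
        g.setFont(new Font("SansSerif", Font.BOLD, 18));
        g.setColor(Color.WHITE);
        FontMetrics fm = g.getFontMetrics();
        int x = (canvas.getWidth() - fm.stringWidth(greeting)) / 2;
        g.drawString(greeting, x, fm.getAscent() + 10);
        g.dispose();

        // Re-encode the finished image as a base64 PNG.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(canvas, "png", out);
        return Base64.getEncoder().encodeToString(out.toByteArray());
    }
}
```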
If you'd like to try the function, switch to the directory containing the code, run mvn clean package, then run the jar file. Assuming you have curl on your machine, run the curltest shell script or curltest.command, depending on your platform. This sends the sample input file to the function; redirect the output to a file to save the results. Unfortunately, once you have those results, you'll need to extract the base64 image data from the file and decode it manually to see the modified image. That's a little clumsy, but wait, it gets better.

Thanks to the amazing Don Schenck, you can test your code using a React front end. Here's what that front end looks like. You give the React application access to your webcam, click the button, and you see the results instantly. Is that not awesome? You can get Don's code at the URL on your screen. Once you've cloned his repo, switch to that directory and run npm install, then npm start. This is a React application, so you'll need to install Node if you don't have it already. When you type npm start, the system opens a new browser tab and the app asks your permission to use the webcam. The front end runs at localhost:3000, and it assumes the image service is at localhost:8080. You can set the environment variable REACT_APP_OVERLAY_URL if you need to change the location of the image service, which we'll do in our next video.

That's as far as we'll go for now. We've got a function that does the image processing we want, and we've got a lovely front end to test it. In the next video, we'll look at how to deploy the image processing function to Knative. In a nutshell, we need to build a container image from this code and tell Knative to load that image and manage it. That part is simple, but getting Knative set up can be tricky.

Thanks so much for watching. For Coderland, this is Doug Tidwell saying may all your bugs be shallow. Cheers.
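As a follow-up to the manual-decoding step mentioned above, here's one way to pull the base64 payload out of the function's JSON response without any extra libraries. It assumes the response field is named imageData, matching the input structure described earlier; adjust the name if the repo differs.

```java
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResponseDecoder {
    // Pulls the base64 string out of the "imageData" field of a JSON
    // response with a simple regex (fine for this one-off test; a real
    // client would use a JSON parser) and decodes it to raw image bytes.
    static byte[] extractImage(String json) {
        Matcher m = Pattern.compile("\"imageData\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        if (!m.find()) {
            throw new IllegalArgumentException("no imageData field in response");
        }
        return Base64.getDecoder().decode(m.group(1));
    }
}
```

Write the returned bytes to a .png or .jpg file (depending on the imageType field) and open it to see the overlaid image.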