Hi, this is Allison Sheridan of the NosillaCast podcast, hosted at podfeet.com, a technology geek podcast with an ever-so-slight Apple bias. Today is Sunday, April 23rd, 2023, and this is show number 937. This week, our guest on Chit Chat Across the Pond is your favorite psychological scientist, Dr. Maryanne Garry of the University of Waikato in New Zealand. Dr. Garry and four of her colleagues recently published a paper in Royal Society Open Science entitled "Trivially Informative Semantic Context Inflates People's Confidence They Can Perform a Highly Complex Skill." It's a big fancy title, but the experiment builds on previous studies demonstrating that people have highly inflated beliefs about their capabilities at highly complex tasks for which they are entirely unqualified. In particular, a high percentage of people are confident that they could land a commercial plane with no help from the tower if there were an emergency and the pilot were incapacitated. In the study that Maryanne and her co-workers did, they tested whether watching a short, trivially informative video of two pilots landing a plane would influence that confidence level. Would it make them more confident if they saw a video that gave them no instruction whatsoever? As always with Dr. Garry, you'll learn a lot, you'll laugh along with us, and your dreams will be crushed as only she can. You can find Chit Chat Across the Pond Lite in your podcatcher of choice, and of course there's a link in the show notes. When I was 12 years old, my family had a collie puppy named Charlie who was in sore need of some training, so my parents signed me and Charlie up for obedience classes. When it came time for the final graduation ceremony, which involved the incredibly complex procedure of walking around a circular track, Charlie stopped halfway through and pooped on the track. For that, Charlie was awarded the Most Improved trophy.
And I actually still have that trophy. There's a picture of it in the show notes, but it's also right behind me as I'm recording for the live audience. Anyway, I tell that story because I'm here to give Tesla's Full Self-Driving beta the Most Improved award. For those who haven't been following along, about a year ago, Steve and I were both able to get into the Full Self-Driving beta program by competing against a rating system in the car and achieving 98% or higher on the test over the course of a couple of weeks. Full Self-Driving beta means the car actually drives itself on city streets, not just freeways. Now, after testing Full Self-Driving for a few weeks way back then, my assessment of its driving skills was that of a student driver who was also drunk. Seriously, it was bad. It went full speed through dips. When it made left turns into roads with a median separator, it would drive right at the median, forcing us to wrest control from the car. It was super tentative turning into intersections; it was terrible, terrible at turning. It came to a stop at lights way too quickly and then accelerated way too slowly, so you were basically sure somebody was going to be honking at you because it was so slow getting away from lights. One time, halfway through a turn, it actually just gave up and gave control of the car back to Steve, and that was really nerve-wracking partway through an intersection. One of the many things I enjoy about owning a Tesla, though, is that they send out regular software updates even if you're not on the beta. It makes the car feel new when it happens. A great example was when they added camera views from the side of the car to the display. Now when we engage the turn signal, we get a rearward view of the lane beside us. It's a much safer way to see the lane you want to change into than turning your head around in speeding traffic.
Now, Tesla often moves things around on the display, which can be a little annoying. They move the garage door opener button pretty much every time they do an update. I think they do it just to keep our minds sharp. Anyway, when Steve and I tested Full Self-Driving beta, it was FSD version 10, and they've been sending out minor updates for the past year or so, which made very slight improvements in the Full Self-Driving experience. I'd say the student driver maybe stopped drinking hard liquor and instead was only drinking high-alcohol-content IPA beer. Better, but still terrifying. Just recently though, Tesla shipped FSD version 11, and it's definitely a marked improvement. I'm ready to declare that while it still feels like a student driver, it's a good student driver who is actually completely sober. It's that much better than FSD 10. I'll still call it a student driver because it doesn't drive quite like an experienced driver, and I'll get into some details of what I mean. I decided to take a drive, and I'm going to illustrate these improvements by describing this recent trip where I let the car drive me to the Apple Store. First, it drove west down my block and gently stopped at the four-way stop sign. A pretty good stop. Now, Tesla got in trouble recently for letting the car do a rolling stop, even though it was only at less than five miles per hour, so now the Tesla comes to a sarcastically complete stop. In any case, it waited until the cars that had gotten there first took their turns, and then with no hesitation, it accelerated appropriately into that right turn. None of that errr, errr, errr that it used to do before. After a couple of other turns, it drove up to a stoplight and waited for the light to change. Now, you have to really pay attention even when waiting at a light, and absolutely go as soon as the light turns green and the cross-traffic clears.
My car and I wanted to turn left, but there was a car straight across from us, and my car pulled a little bit into the intersection, as a driver should, and waited to see what the opposing car was going to do. The driver had their left turn signal on, but they did not move into the intersection. The Tesla waited and didn't make a move. At this point, I took over and executed the turn myself. I'm not going to count that against the car as a mistake, because how many times have you seen someone with a turn signal on make a completely different move? You can't count on that car to be turning, so I'm not counting it as a mistake, but I was a little worried that people behind me would be impatient, so I made the turn myself. I re-engaged Full Self-Driving, and then we traveled down a very wide road that has a fair amount of traffic on it. It's pretty busy. In the middle of the block, a pedestrian dashed out into the street, and the car gently slowed down well in advance, and when the pedestrian had finished crossing, it appropriately accelerated back up to the speed limit. I was particularly pleased with this maneuver. Not only did it not kill the pedestrian (that's table stakes, right?), but it also didn't panic. In FSD 10, it would often set off an audible alarm and slow down quite violently at the slightest provocation. It was quite unnerving when, as a human, you could tell the situation was not a crisis well before your arrival. So it was really great to see FSD 11 treat this potentially dangerous situation cautiously, but not overly so. Another bothersome thing with FSD 10 was that it was truly terrible at making a right turn into traffic. When the intersection was clear, it would do that thing of inching out in little jerks, much like a student driver, eventually starting to make the turn but then over-correcting to the right and then back to the left as it accelerated. In my test drive, the car needed to turn right onto a very busy two-lane road.
It was also tricky because to our left, the cars were coming over a hill, which reduced the time to react. I'm happy to say that it did admirably. It inched into the intersection just enough, and when it had the visibility it needed and saw that the road was clear, it accelerated quite rapidly into the correct lane. It was very comforting for it to do it quickly. You know what I mean? You could just go, and it did. In Teslas, I think even without Full Self-Driving, you can enable a feature where it sends a little bong sound when the light turns green. It's pretty accurate if you're going straight ahead, but it wasn't very accurate on the left turn arrows. It might send you the bong when you're in the left turn lane, but it was the straight-through light that had actually turned green. We, the Tesla and I, drove up to a red left turn arrow and waited until the arrow turned green, and then it accelerated very smoothly into the correct lane. Now remember I said earlier that FSD 10 had trouble recognizing medians on left turns. I was very relieved not to have to wrest control of my car from it as it cleared the physical median with plenty of space. I drove a bit farther, and as I came up to a red light on this two-lane road, someone in a large SUV had parked right at the intersection in front of a fire hydrant. Not only that, he opened the driver's side door all the way, got out of the car, and stood there in the lane. At this point he was blocking easily a third of the lane that I was currently driving in, or I should say the Tesla was driving in. The Tesla did not panic. It noticed that the lane was blocked well before it would require a hard stop, and instead rolled to a stop a good 10 feet before this silly man. Now, this man continued to partially block our lane, so the Tesla started trying to edge to the left, and it didn't seem super confident about the maneuver.
There was also a line of cars coming up along the left of me, so I decided I'd give it a hand and took over. Again, it didn't make any mistakes, but I just got a little bit anxious and took over. Now remember I said that FSD 11 is like a student driver, even a good student driver, but it doesn't drive as a seasoned driver would. A good example of that was when we turned onto Sepulveda Boulevard, which is a major thoroughfare of three lanes. The left and middle lanes travel along pretty nicely, but the right lane is problematic. It has lots of dips for rain gutters. We had rain this year, so don't make fun of us; we do have rain gutters just in case, so it has lots of these dips at every intersection, and Sepulveda has lots of business entrances and intersections where cars slow down and make turns. It is easily the worst lane to be in. Nobody wants to drive in this lane. Well, the Tesla really liked that far right lane. I signaled to move to the center lane, and it obeyed me. It made a very smooth lane change. As soon as I'd driven about a block in that center lane, it said, yeah, I'm going back into that right lane again. Now, it wasn't technically wrong, but no human drivers were choosing that lane, for obvious reasons: it was an annoying lane. I let it drive that way for the rest of the ride, and it made no errors. While I still find it stressful to let it drive itself, overall I wouldn't say it made any outright mistakes on this particular drive. I did find that it drove faster than I'm comfortable with at certain times. However, every time it seemed too fast, I checked, and it was driving at or under the posted speed limit. I think I might be a little-old-lady driver, so take that for what you will. Later, Steve and I took a drive together in my car where he was in the driver's seat in control. On this drive we experienced more of those "I wouldn't have done it that way" events.
For example, there are a couple of areas where we know the traffic backs up, so if we were driving, we would get into the correct lane a mile or more before an upcoming turn, but the Tesla would toodle along in the wrong lane until it was actually necessary to change lanes. Again, not technically a mistake, but it wasn't what a human driver familiar with the roads would have done. I guess it isn't familiar with the roads, right? It's figuring it out as it goes. Now, perhaps the most obvious example of the student driver feel is on curvy roads. Experienced drivers will hug the inside of a turn, but the Tesla always goes for the middle of the lane no matter what. This isn't dangerous per se, but it feels like loss of control, as though the car is going to slide out of the curve, like it's not going to stay tight. Normal human drivers hug the inside of that lane, so it was, again, just not the way we would have done it. It also has trouble when lanes get super wide. On the particular drive where Steve was behind the wheel, there's a park that lets out onto a busy two-lane road. At the park exit, they widen the lane to allow drivers to merge in more easily. Well, all the Tesla knows is that the lane it's driving in is normal width, and it's going along normal, normal, normal, and then suddenly the lane grows to almost twice as wide as normal and then narrows back down. The only thing the Tesla knows to do is drive right down the middle, so as the lane widened, the middle of the lane moved, and the car drifted farther and farther to the right, and then it had to come back in to the left again. This could mislead drivers behind it into thinking the car was moving to the right to make a right turn, because that's what a human driver would be doing. As the lane starts to narrow, though, the Tesla continues straight while still staying in the middle of the lane. Again, not the way a human would drive it.
It also made the same mistake it has made on this drive since we started testing FSD. One of the left turns is onto a road with a painted, not physical, median, and in the US, painted medians are designated by a double set of double yellow lines. While it appears to have learned not to drive over physical medians, it ran right over the end of that painted median, just like it did on FSD 10. I briefly let the car drive me on the freeway as well. With Full Self-Driving working with navigation, the car got itself onto the freeway and then tried to move into the carpool lane. That makes sense, since a Model 3 is an electric vehicle eligible for carpool access, but what it didn't know is that I never put the carpool access stickers on my car. Now, you'll probably mock me for this, but they were purple and my car is red; it would have just looked terrible. Plus, I'm never driving that car by myself when I'm in a lot of traffic, so it's not a big deal. Anyway, the car didn't know it wasn't allowed in that lane. Now, it did do a couple of other lane changes on the freeway, and I wasn't completely happy with how it performed. As with FSD 10, it still moved into lanes where it was impolitely close to the driver coming up from behind. Maybe not technically dangerous, but definitely kind of how a jerk would change lanes. We later learned that there are several different settings for Full Self-Driving in FSD 11. You can choose Chill, Average, or Assertive. I changed it from Average down to Chill, and now it says that in this profile, your Model 3 will have a larger follow distance and perform fewer speed changes. I haven't had the nerve to try it on the freeway again, but perhaps it will drive a little bit more like an old lady like me with this change in settings. I really hate to think what Assertive would be like. The bottom line is that Full Self-Driving 11 is a huge improvement over Full Self-Driving 10.
I was beginning to doubt our driverless future, but this update renews that hope. A couple of weeks ago, a darling six-year-old boy we knew named Caden was killed in a car accident. This is why I believe so strongly in supporting efforts to bring us true self-driving cars as soon as possible. Well, I'm kneeling on the floor with Helena Harrison from a company called Glean. She suggested kneeling, and I thought that was a lot of fun, so we're going to do this interview in a little bit more casual style. Glean is a company that helps people take notes. If you're in school, or you go to a lot of Teams meetings and you need to be able to take notes, but you find yourself distracted because you're taking notes and you miss the point, Glean is designed to help you with that. Is that a good description of it? Yes, absolutely. It will capture all of your audio, and then you can add your notes as you go along, and you'll be able to come back to your notes later to expand them when you've got more time. It takes away that pressure of having to write everything down during your meeting or during your lecture or your class. So I've seen applications like Notability, a note-taking app that also does audio, so you can grab a section of your text and then find out what they said right then. This is sort of the other way around: the primary focus is the audio, but you're putting in short notes that say "go look at this." Is that a good way to describe it? Yes, though I would actually say that the primary focus is really more your notes, because that's what you want. You want a really nice summary of your class or your meeting afterwards, but the audio is there to help you expand your notes afterwards. Afterwards, but during, while I'm in there, I can just click a button that says "important" or "follow up," or I can just type. "Confused" was one of the buttons you had in there, which I really liked.
I could have used that to go back and listen to a part two or three more times. Yes, absolutely, so you can add your notes during, and it's just recording in the background. Now, where does it do the recording? It does the recording in the cloud, so it's cloud-based software, but the nice thing about being in the cloud is that you can access it anywhere: not just on your own laptop, but on your friend's laptop, or on your iPad or your phone. What is the security around that? We use Amazon Web Services, and everything is encrypted at rest, and we obviously made sure that everything was as secure as it possibly could be. Okay, that is the right answer to the question. So I'm looking at the interface right now, and she's imported the slides into her notes. On the right-hand side there's kind of an interesting little interface that shows the audio with little highlights for each section where she took some notes about what was going on. But it also does, let me see if I get it right, speech-to-text, is that right? It does do speech-to-text, yes. After you've done your audio recording, once you've stopped your recording, you can click a little button that says convert to text, and that will convert all of your audio into a text format. So this is cool because there's a column where the notes that you've taken are. Let's see, there's a note I'm looking at that says "ISS," and if I click on that, I can see where in the audio that was spoken. And then, there's a play button, I'm just guessing the interface here, I can hit play and I should be able to hear what was said about the International Space Station. So that's pretty cool; you can kind of go back and forth. Now, you did one other thing with the text. Can you explain that? You started in the text transcription and you were able to do something with that too?
Yes, of course. I can also go through my notes and click on a note, and it will take me straight to that part of the transcript. And if I think that part of the transcript is really important, or I want to take it out of my transcript and pop it into my notes, I can just copy and paste it across, so it saves me having to write it all down. And of course, once it's in there, if it's a quote, I'm going to leave it exactly as it is, or it might be that I want to go and edit it and reword it and make it my own. So then do you export your notes from this? What do you do afterwards? Yes, you can export it to anywhere. There's a reading view in here that takes all of the audio away, and you're just left with your slides, if you've got them (you don't have to have slides), your headings, and your notes. You copy that across, and because you're copying it, you can pop it anywhere, so it's not restricted to Google Docs or Word; you can pop it into a nursing journal or a media diary, basically anywhere you like that accepts text. This is very cool. So what does it cost to use Glean? And again, this is for business, for school, wherever you need to be taking notes. What does it cost? It's $129 a year, or you can do a monthly subscription instead, which is $12 a month. That actually sounds like a pretty good price. I could have used that in a lot of classes I took in college. Thank you very much, Helena, this was really cool. My pleasure, and thank you; it's very nice of you to come and talk to me at my stand. Oh, and the name of the company is Glean, and what is the website? Glean, oh hang on a minute, Glean.co. Glean.co, that's the website, isn't it? Yeah, just Glean.co. Nice and easy. Very cool, thank you very much. Thank you very much, nice to meet you.
Last week, after Terry Austin made me buy Hush for $50, I conveniently mentioned that in my plug for folks to support the show. Guess what happened? Both John Murray and Bill Reveal went over to podfeet.com/paypal, and they collectively donated more than enough money to cover Hush. Their generosity and show of support is overwhelming. I should mention this isn't even the first time the two of them have donated. If you'd like to be awesome like John and Bill, please consider giving a one-time donation, or donations on a schedule of your choosing, to show your support of the work that we do at the Podfeet Podcasts. Remember, you can do it at podfeet.com/paypal. Remember a few months ago when I spent a stupid amount of time automating the incredibly complex procedure of unchecking a box in Preview's export window to remove the alpha channel from PNG files? Well, the problem to be solved is that images with an alpha channel have transparency, and if they're dark images, they're impossible to see if the viewer is using dark mode on their device. So when I would tweet out, or toot out on Mastodon, a link to one of the shows, if the image had transparency in it, if it had an alpha channel, it would be impossible for those using dark mode to actually see what I'd posted. Now, I've told you about a couple of solutions, but this week I solved that problem, and a bigger problem, using an amazing tool called Retrobatch, boy, that's hard to say, Retrobatch Pro from Flying Meat Software. Now, the problem to be solved this week, separate from the alpha channel problem, is creating effective featured images for my blog posts. You know how if someone posts a link on social media, it expands to show the title and an image? Well, people have figured out that posts with featured images are much more likely to cause the reader on social media to follow through and look at the link.
If you create your blog post properly in your content management system, such as WordPress, you get to control what image is shown when that link is posted. Now, technically I can slap any old image I like in as the featured image in WordPress, but whether it looks good when you see it is a whole other thing. For example, if my image isn't at least 400 pixels tall, Facebook won't render anything at all, and if it's the wrong aspect ratio, WordPress, in cahoots with my theme (I'm really not sure who to blame here), will crop the image. In spite of spending a lot of time trying to create a repeatable process to make uncropped, good-looking featured images, I had not succeeded until now. What I do know from talking to my theme vendor is that if I don't have a sidebar on my theme, which I don't, then my theme will crop images to 1040x650. That means there's no need for me to upload anything bigger than that. Now, that aspect ratio is a little bit wonky, and I prefer 2 to 1, so my goal was to create featured images at 1040x520. Let's say I find a company's logo for the featured image so they get some visual juice from the review. Last week, for example, Sandy reviewed a product from Anker, and I ran into the problem that I always run into. The logo file was very high resolution, but it was the wrong aspect ratio. At 3357x800, it was more than 4 to 1. Now, if I plop that very high-resolution image into WordPress as the featured image, it gets cropped so all you see is "NKI." Now think about that: there isn't even an I in the name Anker. It truncated the E, so I'm seeing the middle N-K and part of the capital E. The process to make the featured image look nice was unpredictable, tedious, and error-prone. After literally years of attempts to make it a reliable process, about a week ago I came up with a less-terrible-than-the-other-methods process.
Using Affinity Photo, I created a 2-to-1, 1040x520 rectangle in white, and I saved it as a preset. So when I open Affinity Photo, if I then open the preset, I can drag my image onto the new image file and start dragging it around on that white rectangle, so there's a white rectangle underneath the image file. If my image file is too big, like, say, the Anker logo, I have to resize it until it fits in either width or height, and then get it centered properly. When I think it looks good enough, I have to export the file and save it with a new name, and either save the Affinity Photo file or delete it. It's still annoying and it's still time-consuming, but it's not as bad as all of the other methods I had tried. You might ask why I chose a white background when dark mode folks are people too. It's because I had to pick something, okay? I'm not going to modify the background for every single image. Anyway, this image size problem is triply aggravating because it shows up just often enough, and it always rears its ugly head right when I'm finally done with an article, and that makes it even more frustrating. Imagine I spend hours and hours crafting a story, adding screenshots, and entering alt tags so our screen reader friends can enjoy the images. I make sure the grammar and spelling are correct, I double-check links to sources, I push it up to WordPress, I throw in the featured image, and then I have to stop because the featured image looks poopy. I scream out in despair every single time. Well, after the most recent problem with the Anker logo, I went to Mastodon and asked my followers: is there some way to automate a solution to this?
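The fit-and-pad arithmetic described in that manual process can be sketched in a few lines of JavaScript. This is just an illustration of the math, with a hypothetical function name and my own rounding choices, not any particular app's API: scale down only when the image is too big, then split the leftover space evenly as white margins.

```javascript
// Sketch of the featured-image fitting logic: scale down (never up)
// until the image fits within 1040x520, then pad the short dimension
// with equal margins to reach the target canvas size.
const TARGET_W = 1040;
const TARGET_H = 520;

function fitFeaturedImage(w, h) {
  // Scale down only if the image exceeds the target in either dimension.
  const scale = Math.min(1, TARGET_W / w, TARGET_H / h);
  const scaledW = Math.round(w * scale);
  const scaledH = Math.round(h * scale);
  // Split the leftover space evenly between the two sides.
  return {
    width: scaledW,
    height: scaledH,
    marginLeftRight: Math.round((TARGET_W - scaledW) / 2),
    marginTopBottom: Math.round((TARGET_H - scaledH) / 2),
  };
}
```

For the 3357x800 Anker logo, this scales it to 1040x248 and pads 136 pixels of white above and below, which is exactly the centered-on-a-white-rectangle result described above.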
The wonderful Greg Scown responded. You might know that name; he's the co-founder of Smile, the people who make the most awesome TextExpander software. He suggested I take a look at Acorn, as it supports AppleScript. I don't know much about AppleScript, but I know people who do, so I thought maybe it was worth a shot. Acorn is an image editor, by the way, that you've probably already heard of. I trotted off to the Flying Meat website to take a fresh look at Acorn. The last version I paid for was version 3, and it appears the developer, Gus Mueller, has been very busy since I last used Acorn, as he's on version 7 now. Anyway, as I started looking at Acorn, I realized Gus has another app, and it's called Retrobatch. When I was on the Automators podcast back in February, Rosemary brought up the app Retrobatch for automating image manipulation. You know me: someone says "this is fun for automation," and it's an immediate download for me. I didn't have time to play with it right back then in February, so I put "play with Retrobatch (Rosemary)" on my to-do list so I'd remember who told me about it and remember to do it. Just like the other 24 items languishing on my to-do list, it never got done. Now, I did some reading, and I learned that Retrobatch allows you to automate complex image manipulation according to rules you provide. That sounded like it might be the right tool to solve my problem with featured images. I tooted back to Greg, thanking him for helping me go in the right direction, and I tagged the Flying Meat Mastodon account in my response. Imagine my delight when Gus responded with a screenshot of exactly how Retrobatch would help me solve my problem. I bought the Pro version of Retrobatch because the particular task I wanted to perform was going to require the use of rules, some if-then-else kinds of conditions. It was also going to require a wee bit of JavaScript; by wee bit, I mean a microscopic bit. Retrobatch is $20, while Retrobatch Pro is $40, and you can do an upgrade from regular Retrobatch up to the Pro version if
you want. We'll get into the differences towards the end of this article, but I want to start by walking you through the Retrobatch interface as I describe how it solved my problem. The Retrobatch interface reminds me a lot of Audio Hijack, in that you drag nodes, little rounded squares, onto a canvas, and then connection lines appear between them indicating the direction and path of your workflow. Down the left sidebar are groups of nodes to choose from, the center of the canvas is where you build the workflow, on the right side is a contextual Inspector palette, and below that is an image preview window. I'll get into the details as we go through the example. Before I could start automating my solution, I had to figure out exactly what I wanted the solution to provide. Images that need to be modified fall into four different buckets based on their sizes, and the modifications are different for each of these four scenarios. That's why I was going to have to write these rules. Of course, being a nerd, I drew it up as a truth table. Now, you can't see the truth table because you're listening, so I'll describe it to you in words instead. The four conditions are: number one, the image is bigger in both dimensions than my targeted size of 1040 by 520. In that case, I need to scale down in each direction until both dimensions are no larger than the target. The second scenario is that the image is wider than 1040 but shorter than 520; I want to scale the width down to 1040. What if the image is narrower than 1040 but taller than 520? Those images I want to scale the height down to 520. Finally, what if the image is smaller in both dimensions than the desired 1040 by 520? In that case, I don't want to apply any scaling at all. All right, with my truth table settled, that gets the image scaled properly, but we need to do something to add width or height to whichever dimension is deficient, and Gus's response on Mastodon explained that the command for that is Adjust Margins. So before diving
into all four of these scenarios at once to follow my truth table, I started with one test case. The Anker logo that I talked about was both wider and taller than my target dimensions. I knew that it was more than a 2-to-1 aspect ratio, so I knew that if I scaled the width to 1040, it wouldn't be tall enough, so the margin would have to be added to the top and bottom. On the left sidebar, I flipped open the group of nodes for reading images, and I dragged in the Read Individual Files node. With this node selected, the Inspector palette over on the right changed to show an area where I could add some representative files to be tested. It's a great way to run tests repeatedly on multiple test files. This is where I would eventually drag in all four different options, but for now I just dragged the Anker logo over into the Inspector palette. The little Read Individual Files node now says "1 file"; that tells you that one file was matched for that node. That little indicator can be very important as you're debugging your workflows in Retrobatch. If a node isn't doing what you expect, it might say "0 files," which means it doesn't have a match, so it won't function properly for you. After I dragged my one file in, at the top of the window Retrobatch showed a warning to add a Write node to save your file somewhere. Good to have that reminder, but I'll get to that in a minute. My next step was to scale the image down until the width was 1040. Under Transform, I found the Scale node and dragged it in right next to the Read Individual Files node. When I dropped the Scale node to the right of the Read node, an arrowed line connected the two nodes together very nicely, so I could tell the flow was going from reading in to doing the scaling. Now, with the Scale node selected, that Inspector palette changed to show my options to control how to do the scale. I used the dropdown to change the scaling from percentage to fixed width and entered 1040. Okay, we're doing pretty good
here. Also in the inspector palette, I checked the box to tell it only to scale smaller. The last thing I'd ever want it to do is upscale my images. If an image is too small in one direction, or even both, I'll add margin to make it big enough in both directions. So far, I've been opening up these little groups in the left sidebar, like opening Transform to find Scale, but you don't actually have to do that. If that seems a little bit tedious, you can add a node of your choosing by right-clicking on an existing node and choosing from the pop-up menu. I added the Adjust Margins node right after the Scale node by doing just that. Now this is where the wee bit of JavaScript comes in, and I don't think I would ever have figured this out if Gus hadn't spoon-fed me the solution through Mastodon. With Adjust Margins selected, by default the inspector palette lets you define the number of pixels you want to add to the left, bottom, right, and top of the image. Well, that makes perfect sense, but when this automation runs, I won't know how many pixels to add, and I don't even know which edge is going to require some margin. I need Retrobatch to figure that out on its own. Adjust Margins also allows you to add a percentage of the width, height, short side, or long side, or to use a JavaScript expression. Turns out we're going to use a JavaScript expression, but don't be intimidated; it's super easy. The JavaScriptness of it is actually hidden from us; it's really more like simple algebra. In the left and right margin boxes, we simply type 1040 minus w, collectively divided by 2, that is, (1040 - w) / 2. That means we want the width to be 1040 pixels when we're done, so we subtract the width of the image from 1040 and divide it by 2. Simple, right? Then we add that value as a margin on the left and right. Likewise, if we want to add half the difference between the height and 520 to the top and bottom, we can just enter top and bottom margins of 520 minus h, all that divided by 2: (520 - h) / 2. There, you've written a JavaScript expression. I promised it was easy,
didn't I? Finally, I selected the color white again for the margin to be added. The final step is to add a Write Images node to the end of our workflow, just like the warning told us to. Because we read in the image and did a bunch of manipulation, we need to write it back out. As you might have figured out, we have some fun options in the inspector palette for Write Images. We can have the automation ask for an output folder when run, we can have the output folder open when the export finishes, and we can overwrite existing images if we want. We can add a suffix or prefix to the file name as well. I was going to add a suffix with something brilliant and witty like "modified." While you certainly could do that, there's a dropdown with a massive number of other options. Remember, Retrobatch is really for image manipulation, not modifying silly logos. Because of that, Retrobatch offers you options for the suffix and prefix to include things like capture date, author, copyright, keywords, bits per channel, pixel depth, and more. In that long list of options to add to the file name, I also found image height and image width. This is perfect for my needs, since it knows the h and w we've been using in our equations. I can have my exported images suffixed with the height and width, like "1040 by 520," so I can clearly tell them apart from my originals. Now there's really one more important part of the interface for Retrobatch I want you to know about. If you select an image node, such as that initial Read node or the ending Write node, in the bottom right you'll see the image preview window for that stage of your workflow. So let's say you have just the Read node selected: I can see the Anchor logo with its very wide aspect ratio, and it's got no white space above or below it. There's a zoom slider to help you see the image at a reasonable magnification for its size. In the very bottom right is a little info button that'll pop up what looks like some of the EXIF data for the image. In all that
nerdy goodness photographers care about, it also shows the dimensions of the input file, and I was able to see that the Anchor logo is 3357 by 800 pixels. So that's my unmodified original. By the way, I did zoom down to 22% to see it in that viewing box. Now, if I click on the Write node instead, in that image preview I can see that the image is now around a 2:1 aspect ratio, and it has a nice white margin on the top and bottom. If I click on the info button, I can see the resulting image will be 1040 by 520, and we'll have that information tacked onto the file name. This is a great way to see that your workflow is doing exactly what you want before you even bother having it execute. I mentioned up front that you can drag a pile of images in to test the workflow you've created. If you do that, you'll see them all lined up in that image preview window, and you can tab through them to verify that each will be modified to your desires. Now it's time for the moment of truth. At the top of the window is a play button to run your automation, or you can use Command-Enter to make it go. I was delighted to see my 1040 by 520 image squirt out to the folder I had requested, with the suffix that I wanted. Now, my workflow at this point takes an image, scales it to the correct width, and adds the margin to the top and bottom. But this workflow doesn't know anything about my truth table yet, so it doesn't know how to do different things depending on the dimensions of the image. Adding rules was crucial to my workflow, because I needed Retrobatch to figure out whether the images are taller or wider than the desired aspect ratio, and only then scale the images and finally add those margins. Rules in Retrobatch look a lot like the rules we're familiar with for making Smart Folders in Finder or Smart Albums in Apple Photos, so you'll be very familiar with how they look. I do want to mention that rules, again, are only available in the Pro version of Retrobatch. I do have trouble pronouncing that, don't I?
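Just to sanity-check those preview numbers, here's the arithmetic the Scale and Adjust Margins nodes end up doing for an image that's wider than 2:1, sketched as plain JavaScript. This is my own illustration, not Retrobatch's actual code; the function name fitWithMargins is made up, and it only covers the wide-image path my Anchor logo takes, since the other rules in my truth table handle the other cases.

```javascript
// Target featured-image dimensions from my WordPress theme
const TARGET_W = 1040;
const TARGET_H = 520;

// Sketch of the wide-image path: scale down to a fixed width of 1040
// (never upscale), then pad with the same expressions I typed into
// the Adjust Margins node.
function fitWithMargins(w, h) {
  if (w > TARGET_W) {
    h = Math.round(h * (TARGET_W / w)); // Scale node: fixed width, only scale smaller
    w = TARGET_W;
  }
  const leftRight = Math.max(0, (TARGET_W - w) / 2); // (1040 - w) / 2
  const topBottom = Math.max(0, (TARGET_H - h) / 2); // (520 - h) / 2
  return { w: w + 2 * leftRight, h: h + 2 * topBottom };
}

console.log(fitWithMargins(3357, 800)); // the Anchor logo → { w: 1040, h: 520 }
```

Running it on the Anchor logo's 3357 by 800 pixels, the width scales to 1040, the height lands at 248, and 136 pixels of margin on the top and bottom brings it to exactly 1040 by 520, matching what the info button showed.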
Anyway, I created four rules to parse the images into the four scenarios of my truth table, using image pixel width and height and whether they were greater than my desired dimensions. When I was done with the final design of my featured image workflow, I tested it with a bunch of representative files. By a stroke of luck, I just happened to include one image that had that pesky alpha channel, so it looked really silly after going through the Retrobatch workflow. It was the right dimensions, and it had the white margin just as appropriate, but in the middle of the image you could see right through it, because it was transparent. I created a new Retrobatch workflow, completely separate from the first one, to see whether it might be a better way to solve my pesky alpha channel problem than all the other methods I'd created. In the discussion forums for Retrobatch, I found out that to remove the alpha channel, you can simply add a Matte node. My test workflow was very simple, with only four nodes: read in the image, set a rule to check whether it has transparent pixels, slap on a matte if they're present, and then write out the image. Easy peasy. I saved my workflow and called it no-alpha.retrobatch. Then I discovered Retrobatch allows you to export it as a droplet. I exported it as a droplet, and suddenly I had an app. I dragged the transparent PNG onto my droplet, and boom, the transparency was gone. I knew this droplet was a keeper, so I held down the Command key and dragged the app into my Finder window's toolbar. Now, if I ever have a transparent image I want to use on the web, I can simply drag it onto the app in the toolbar, and boom, I'm done. So freaking easy. I am just so excited; this is so much easier than all of the other 28 ways I tried to do this. All right, now that I knew how to fix the alpha channel problem, I just added that same set of steps to my featured image workflow. So now, if it finds a transparent image, it just slaps on the matte right before it squirts it
out. Now the workflow is pretty cool looking, and it's very readable. It starts with the Read Individual Files node, then it branches into the four rules of my truth table. The first rule goes through the two scaling nodes, since those images are too big in both directions. The two rules that grab images that are too big in one or the other dimension only go through one Scale node. Finally, the last rule has no scaling at all, because those images are too small in both directions. After going through that process, all four rules converge into just one Adjust Margins node, one Matte node, and then the final Write Images node. Now, the only thing I wish it had was some indication of what the nodes are doing without having to select each node and look at the inspector palette. For example, every rule node just says "Rules." It would be nifty if I could enter a name for each node, like "too big in w and h." I asked Gus whether there was a way to name the rule nodes or add some kind of text to them, so you could see which rule did what at a glance without opening the nodes. I was delighted with his response. He wrote back that it was a good idea he hadn't considered, nobody had asked for it yet, and he'd see what he could do about it in a future release. How cool is that? Anyway, after I ran all my tests, I exported my featured image workflow as a droplet, and it also earned a treasured spot in my Finder toolbar. I am so excited about this workflow, and I couldn't wait to use it. I went back to the post about Hush by Terry Austin, because the Hush logo looked really poopy in my featured images. The logo is an 850-pixel rounded square, which is super high-res, but my theme in WordPress cropped the top and bottom of it. I dragged the Hush logo onto my featured image droplet, dragged the resulting image into WordPress, and now it looks fantastic. All right, now that I've told you how I used Retrobatch to solve my problem, let's chat just a little bit about what else it can do, to kind of trigger whether you'd be interested in it. I mentioned
that there's a Pro and a regular version of Retrobatch. If you're using the regular version of Retrobatch, you can see the nodes you would have access to if you had the Pro version. They're clearly marked, so Gus isn't trying to trick you, but he lets you play with them to see how they would work if you had the Pro version. Retrobatch is really well documented, and in the documentation I found a listing of which features are available in both Pro and regular and what's available just in Pro. There are 56 nodes available in both versions, and 23 more are available in the Pro version. I highly recommend going to this link; the documentation for Retrobatch is excellent. If you write documentation yourself, you might actually be interested in checking out the tools Gus uses to create his. The docs for Retrobatch are written in MkDocs, which is an open-source static site generator, the theme is open source, and the site on which he hosts them is also free. You can find all of the links about these documentation tools in his docs at flyingmeat.com. All right, back to using the tool. As you lay out nodes, I said that these connection lines automatically appear, but sometimes it's a little hard to move a node so that the connection lines between nodes are exactly what you wanted. If you require more granular control, Retrobatch allows you to draw the connection lines by hand. In Preferences, there's a checkbox on the General tab to allow manual connections with a Control-drag. I found that toggling this on and off while I was working allowed me to get the connection lines exactly where I needed them. Now, when I was working in Retrobatch on my laptop, I had to shrink the window width to fit my smaller screen. My workflow was partially hidden under the right sidebar section, and I couldn't scroll to see it. I was going to mention here that it was a problem you might run into, but first I wrote to Gus about it. He responded
immediately, and he said I'd uncovered a bug. But before I could even respond to his email, he sent me a second email telling me he'd fixed it, and he sent an updated version for my laptop. This guy is amazing. I've mentioned twice already how well documented Retrobatch is. While I was searching for the right terminology to describe the inspector palette, I discovered that just about anything you want to do has multiple ways to do it. For example, I said earlier that to add a Read node, I used the left sidebar and dragged it in, and then I dragged images into the inspector palette. It turns out you can just drag an image, or a selection of images, or even a folder containing images, right onto the canvas, and Retrobatch will automatically create that Read node and populate the inspector palette. It's a way easier way to do it. If that's not the way you want to do it, you could also use Edit > Add Node, or, if you like the sidebar, you can even double-click a node to add it to the canvas. By the way, if you want to duplicate a node, you can hold down the Option key while dragging on a node. I used that last trick once to copy a node from one workflow to another, and it worked. If you don't like one way of doing things in Retrobatch, there's probably another way. Now, I want to tell you I did run my usual elementary-level test of VoiceOver with Retrobatch, and I was able to pull in an image with a Read node, add an adjustment, and write the file out to a known location. I was able to interact with the inspector palette as well. I'm sure there are ways the interface could be improved for VoiceOver, but at a fundamental level I didn't run into any showstoppers. The fact that Gus gives you multiple ways to add nodes means that when one method didn't work with VoiceOver, the menu method did. If you're a VoiceOver user, I'd definitely try it yourself before taking my word for it that it will work for you. Now, the bottom line is, I know my particular problem to be solved probably isn't a problem any of you have,
but if you do any kind of image manipulation repeatedly, or need to do it on a series of images, Retrobatch might help you automate your workflow and apply your changes consistently. Whether it's blur effects, color adjustments, sharpening, transforming, adding color effects, manipulating metadata, or adding a watermark, Retrobatch can help you with your work. I am astonished at how responsive and skilled Gus Mueller is, and I'm a huge fan of well-documented tools. Check out the free 14-day trial of Retrobatch at flyingmeat.com. Well, that's going to wind us up for this week. Did you know you can email me at allison@podfeet.com anytime you like, and I will probably answer? If you have a question or a suggestion, just send it on over. You can follow me on Mastodon at podfeet@chaos.social. Remember, everything good starts with podfeet.com. If you want to join in the fun of the conversation, you can join our Slack community over at podfeet.com/slack, where you can talk to me and all of the other lovely NosillaCastaways. We even bark from time to time. You can support the show at podfeet.com/patreon, or with a one-time donation like John and Bill did over at podfeet.com/paypal. And if you want to join in the fun of the live show, head on over to podfeet.com/live on Sunday nights at 5 p.m. Pacific Time and join the friendly and enthusiastic NosillaCastaways. Thanks for listening, and stay subscribed.