Hi, this is Allison Sheridan of the NosillaCast podcast, hosted at podfeet.com, a technology podcast with an ever-so-slight Apple bias. Today is Sunday, December 3rd, 2023, and this is show number 969. Well, we've got a great big show, so let's kick right into it. We've got contributions on the NosillaCast from both Marty Sobo and Michael Babcock. Together, they host the podcast called Unmute Presents, which is a tech podcast with an ever-so-slight blind accessibility slant. They asked me to contribute to their Community Festive Gift Guide, a special episode they did, and I was more than glad to contribute. I listened to that show, and I added a couple of items they featured to my own holiday wish list. So whether you're sighted or not, you should really go check out the link in the show notes to the Unmute Presents Community Festive Gift Guide, or you can look for Unmute Presents in your podcatcher of choice. I recently published an article telling you about all of the wonderful new features of Bartender 5, one of the most useful apps you can get for your Mac. I was so excited about the new features in Bartender 5 that I called dibs on doing the video tutorial for ScreencastsOnline subscribers. That tutorial is now up. In this tutorial, I demonstrate the basics of Bartender that haven't changed, like moving more uncommonly used items to the secondary menu bar, but then I get into the fun new stuff. I show you exactly how to style your menu bar in creative ways and how to add triggers to cause your menu bar to change. I teach you how to trigger menu bar changes based on what apps you're running, your battery status, location, time of day, which network you're attached to, and even by a script. This was one of the easiest tutorials I've ever created for ScreencastsOnline because the tool is rock solid and such an important part of my workflow.
One of the reasons I love doing these tutorials is that I learn every single little detail of how to use the tool I'm teaching. You can get a free seven-day trial of ScreencastsOnline at screencastsonline.com and see this tutorial and all of the current back catalog of tutorials. Well, I'm back with part eight of my Tiny Mac Tips. You may remember that this is an ongoing series I started in order to teach Jill from the Northwoods how to move from being an adequate Mac user to a proficient one. In case you missed the earlier installments, I've included links to the first seven installments in this episode, which I'm calling part eight of X. All right, let's start out with the Quick Actions menu. In the Finder, there's a nifty little thing called the Quick Actions menu. You access it by right-clicking, control-clicking, or two-finger tapping, whichever you want to call it, on any file. The Quick Actions menu is contextual, so the options revealed to you will be different depending on the type of file you've selected. So let's go through some of the different things you can do. If you choose Quick Actions on an image file, you'll be able to rotate left, open Markup to annotate the image, create a PDF of the image, convert the image, or remove the background. Now most of those are obvious, but the last two weren't to me. Convert Image allows you to change to a JPEG, a PNG, or a HEIF. You can also adjust the image size and choose whether or not to preserve the metadata. Now Remove Background was even more mysterious. I read on several reputable websites that it should do what it says on the tin. Given a photo with a prominent subject and a somewhat continuous background, it should preserve the subject while removing the background and save it out as a transparent PNG. I tried a lot of different image types with varying degrees of obvious subject-background contrast, and not one single image I tested did anything at all.
No transparent PNG, but also no error and no message. I even tried a portrait mode photo and I had no joy. When Sandy did the proofreading she does on all of my blog posts, she said, "I don't know what your problem is. It worked for me," and she sent a couple of photos showing how well it worked. So I tried it on a different Mac with the exact same photo and it worked on the first try. I'm not quite sure why I had trouble on one Mac, but I thought I'd mention it. If you need to remove a background, it's right there in this contextual menu called Quick Actions. Now, if you have an audio file that's not an MP3, the Quick Actions menu will offer to let you trim the file without even opening QuickTime. It launches a little floating window with the trim bars on either side of the waveform. What I can't explain is why it doesn't work on MP3 files. I was able to trim an M4A, a WAV, and an AIFF, but the Quick Actions menu only said Customize when right-clicking on an MP3. By the way, Customize is available in all of these. It takes you to System Settings where you can add some types of shortcuts to the Quick Actions menu. Now PDFs have two interesting options in the Quick Actions menu. If you select just one PDF, you can immediately go into Markup with the PDF without even opening Preview. But if you select two or more PDFs and then open the Quick Actions menu, you get a completely different option. You'll see Create PDF, which means in that one click you'll be able to combine all of the PDFs you selected into one single PDF. I think that's a pretty cool trick. Now, have you ever taken a movie where you're looking straight down on the subject and it gets saved in the wrong orientation because the internal gyroscope of the phone doesn't know which way is up?
Well, with the Quick Actions menu, you can not only trim video files just like you can with audio files, but you can also rotate them to the left just like you can with image files. Now, if you'd like to always have access to the Quick Actions without even having to open the menu with a right-click on your file, there's a way to see the actions available to you right in the Finder window. With a Finder window open, go to the View menu in the menu bar and choose Show View Options, or use Command-J. In that menu, make sure the box is checked that says Show Preview Column. In addition to showing you a larger preview of your file, underneath that you can see and select the options I've just described from the Quick Actions menu. You can read more about the Quick Actions menu at support.apple.com, and I've got a link directly to that in the show notes. All right, let's take on a new tiny tip. In macOS Settings, we have a Displays section. This allows you to change the resolution of your displays, just like it says. The normal view for Displays shows your display or displays across the top, and below that you have four icons illustrating how your display will change depending on the resolution. Now for nerds, it's almost insulting how cartoony this is. On the left it says Larger Text and on the right it says More Space, and in case that's not obvious enough, the text inside these four icons goes from large to small. Now if you'd like to work with real resolutions instead of cartoons, go to the bottom of this same window and click the Advanced button. The overlay that comes up has three fun toggles that control how your Mac reacts if there's an iPad nearby, or even another Mac; you know, things like letting your cursor slide back and forth between the devices. But don't be distracted by that fancy new stuff in this menu; look at the top and you'll see a toggle that says Show resolutions as list.
Now you'll be taken back to the normal Displays screen, and instead of seeing those four cartoons you'll see six resolutions, or at least that's what my MacBook Pro shows. I'm not sure everybody gets six, but I see six in the list. Not only that, you'll see a toggle to show all resolutions. When I do that with my 14-inch MacBook Pro, I get 22 resolutions from which to choose. People have been paying for third-party apps to get access to more screen resolutions for years, and now it's built right into macOS. I know some of those apps allow you to get even more resolutions, but this is a lot to choose from. Now I know I mocked Apple for giving us cartoons by default, but I do think the cartoons are probably a lot more helpful to normal people than this giant list of options. All right, for our next tip, we're gonna talk about why double dashes turn into one long em dash. Bart and I record our Programming By Stealth podcast together, which you should totally listen to; it's awesome. We both read along with the tutorial show notes, and we're both able to edit them at the same time. If you're curious how we do that, we use Git, which is a version control system mostly for programmers, but you can use it for plain text files like Bart and I do for Programming By Stealth. Anyhoo, one of my jobs while Bart is teaching me is to proofread the notes. Recently he was explaining a terminal command, and it required a flag with a double dash in front of it. What he meant to write was "this is where the dash dash raw dash output, or dash r, flag comes into play," but instead of the double dash on raw-output, there was a single long dash, which is also known as an em dash. Now I usually don't mention typos, as I fix them during the recordings, but I wanted to make sure I understood what this was supposed to look like, so I pointed it out while we were recording and said I would change it to the double dash.
Now I've seen this problem many times before, and you probably have too, where you're trying to type two dashes and macOS changes them into an em dash. I explained to Bart that I know how to fix it: you just type them really, really slowly. You go dash... dash, and then it lets you keep two of them. Well, Bart taught me a tip, and now I'm gonna pass it along to you, and it is a tiny tip. It turns out there's a system setting that controls this behavior. He said he'd just not gotten around to changing the setting on his new Mac; it's one of the things he always does on his Macs, and that's why it slipped through on this one version of the show notes. If you open System Settings, then Keyboard, under Text Input you'll see Input Sources. There you'll see your language; in my case it says US for US English. Finally, you'll see an Edit button. The overlay that pops up shows all input sources on the left, with a big list of automatic features that you may or may not like about macOS. By default, macOS corrects spelling automatically, capitalizes words automatically, shows inline predictive text, adds a period if you type a double space, and finally there's one that says Use Smart Quotes and Dashes. Now it's interesting that they combine smart quotes and dashes into one toggle switch; you can't disable them separately. Both of these happen to bother programmers, so I'm thinking maybe that's why they're stuck together. In software development, smart (or curly) quotes instead of the plain vertical quotes can really mess up your code. And of course, if you're trying to document a command for the terminal, you often use flags that are called with a double dash, and you simply cannot type that in a normal text editor if smart quotes and dashes are enabled. Now I know this is supposed to be just tiny Mac tips, but if you'd also like to disable this feature on iPadOS, it's in a slightly different spot and has a different name.
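For the terminal-curious, the same two behaviors can reportedly be toggled from the command line with a pair of global defaults keys. This isn't something covered in the episode; the key names below are my assumption, so verify them on your macOS version before relying on them.

```shell
# Assumed global defaults keys for these settings; verify on your macOS version.
# Disable automatic dash substitution (the double-dash-to-em-dash behavior):
defaults write NSGlobalDomain NSAutomaticDashSubstitutionEnabled -bool false
# Disable smart (curly) quotes:
defaults write NSGlobalDomain NSAutomaticQuoteSubstitutionEnabled -bool false
# Quit and relaunch apps for the change to take effect.
```

These write to the same preferences the System Settings toggle controls, but unlike the toggle, they let you flip quotes and dashes independently.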
I don't know why; it's almost like they don't think the same people might use both platforms. Anyway, you'll find the option in Settings, General, Keyboard, but the toggle is called Smart Punctuation. Now on both platforms, if you really do wanna type a proper em dash, you can do it by simply holding down Shift-Option and tapping the minus (dash) key on your keyboard. All right, next tip. Have you ever accidentally opened a whole slew of windows at once? I did this just the other day; I was planning on doing this tip anyway, but it just happened, so I have a great example. I had selected all of the files in a folder, and I wanted to open the Get Info window on just one of them. Without realizing they were all selected, I suddenly had 61 Info windows open, entirely filling my screen. Luckily, I knew how to close them all in a single click. I held down our old friend the Option key and clicked the red close button, and it's amazing how quickly they all closed. This tiny tip extends far beyond the Finder. It works in pretty much every app on macOS that obeys the user interface guidelines. So if you need to close a bunch of, say, Word documents (that should work as well), you can hold down Option, hit the red dot on one of them, and they should all close. I'm gonna wind up this Tiny Mac Tips article by telling you about the best keyboard shortcuts post I've ever seen, and it's not mine. There are a ton of these out there, but Daniel Alm, the developer of the Timing app, starts out slow, builds up the concept step by step, and then even explains the funny symbols for things like Control and Option. I had no idea the Option key symbol was taken from railroad switches. Get it? It's like an option. Now I know which one's which. Anyway, if you're just learning the Mac, having a good guide to understanding keyboard shortcuts will be very handy. If you're a keyboard shortcut junkie already, you might wanna keep this one in your back pocket to show your friends.
Last week, George from Tulsa gave us a great explanation of how he solved his problem of converting gigabytes of image-only PDFs to be searchable by using open source, free optical character recognition, or OCR, software. He explained that he's a Linux user running Linux Mint Cinnamon, so his explanation had an ever-so-slight Linux bias. Now George's goal was to be able to search his PDFs, but in applying OCR to his image files he gained something else. A searchable PDF is an accessible PDF. If we can search a file, that means the text is there for screen readers like VoiceOver to be able to read it. That is a huge deal, and you should consider doing this with your PDFs, especially in a business environment. While George gave us the steps to install and use the open source tools to OCR files on Linux, he also wrote, "If you're geeky and love playing with computers, you might be able to get Tesseract and OCRmyPDF to run on a Mac using MacPorts or Homebrew." One of the things I really enjoy about using a Mac is that we have a flavor of Unix under the hood, which means we get to take advantage of many of the cool open source tools Linux people get to play with. Our Windows brothers and sisters get to play too, because of the Windows Subsystem for Linux. Now you know I had to try to see if I could get the same tools working on my Mac. I mean, George had thrown down the gauntlet there, right? I was hoping it would be super complicated and I'd have to do a whole bunch of work, and it would give me fodder for a long, drawn-out blog post. Sadly, it was very easy, using the tips from George, to convert unsearchable, inaccessible PDFs into glorious searchable, accessible PDFs. But don't worry; even though replicating what George did was super easy, I decided to take it up a notch, so this will be a nice meaty story. Now, George's instructions came at the right time for me.
I recently downloaded a user manual for the automated pet feeder I told you about a few weeks (maybe a few months) back, and I needed to be able to search it, but the darn thing hadn't been OCR'd. I had a problem to be solved. Now, you may recall George described two different open source tools he downloaded to do the OCR dance: Tesseract and OCRmyPDF. If your document is an image, then all you need is Tesseract, but if you wanna OCR a document in PDF format, you're gonna need to use both. Now, I do wanna give a warning here. I am gonna get nerdy. I'm gonna get really nerdy, and I think it's good and I think it's fun and I really had a good time, but I'm gonna warn you: we have a little bit lighter content coming up later in the show, so if you wanna take a little nap during this part, that's okay. I had a lot of fun, and that's what makes me enjoy doing the show, when I'm having fun, so I hope that's okay. All right, to do this exercise yourself, you're gonna need to do one thing that sounds super complicated but is actually quite easy. You need to install something called Homebrew, and you have to do it from the command line in the Terminal. We've actually walked through this before on the NosillaCast, so maybe one of these times you're gonna get excited and go ahead and do it. It's really easy, but I'm gonna explain it again. I want you to think of Homebrew as being like the App Store, except it's on the command line, so we're gonna install Homebrew, and then a very simple command will let you install any app that's available inside Homebrew. First, you're gonna go to the Homebrew website at brew.sh. See, it's even a short URL; it's really easy. On that page, you're gonna see a long, gloppy terminal command. It's got all kinds of stuff in it; it's got curl and incomprehensible words. It's wonderful. To the right of that, you're gonna see a copy button. I want you to click it to copy the command.
Now you open the Terminal application, which is buried in your Applications folder inside the Utilities folder. Maybe I said that backwards: Utilities is inside Applications, and Terminal is inside Utilities. All right, you've copied the command; just paste it into your Terminal and then hit Enter. That's literally all there is to installing Homebrew: copy, paste, Enter. I should warn you, you're gonna see lots and lots more unintelligible stuff fly by on your screen, but just don't worry your pretty little head about it. Now once that's done, installing Tesseract and OCRmyPDF is just as easy as installing Homebrew itself. To install an app with Homebrew, you simply type brew, space, install, and the name of your app in the Terminal. So to install Tesseract, we'll use all lowercase and type brew install tesseract. Then for OCRmyPDF, again in all lowercase, we're gonna type brew install ocrmypdf. Now, if you thought you saw a lot fly by when you installed Homebrew, wait till you see how much goes by when you install OCRmyPDF. What you're mostly seeing are what's called dependencies; these are other applications (called libraries in this context) on which OCRmyPDF relies. I think it's possible that one of the dependencies installed with OCRmyPDF is actually Tesseract, but I installed Tesseract first to test, so I'm not really sure whether you need to install it ahead of time or whether OCRmyPDF would pull it in for you. All right, to recap: we've typed in three terminal commands, and we are 100% ready to OCR our image and PDF files for free. George's problem was that he had scanned documents that were saved as image files, so they weren't even PDFs yet. To convert his image files to searchable and accessible PDFs, he used Tesseract, but he said the command couldn't be invoked directly on his system, so he had to invoke it using OCRmyPDF. However, I found that on macOS I could use Tesseract natively on an image file.
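For reference, the whole setup described above boils down to three commands. The Homebrew install command below is the one published at brew.sh; double-check it there before pasting, since it can change over time.

```shell
# Install Homebrew (the command as published at brew.sh; verify there first)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Then install the two OCR tools
brew install tesseract
brew install ocrmypdf
```
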
I took a screenshot and saved it with the name og.png, for original. I then ran the very simple command that George had given us: tesseract, space, og.png, space, new. This created a file called new.txt with all of the text of the image file. Now that's not exactly what I was trying to do, but it was interesting that by default, Tesseract creates text files for us. I should mention that a lovely gentleman named Frank made a comment on George's post last week saying you could do exactly that: you could create text files directly using Tesseract the way I just described. All right, but I wanna make the output a PDF. All we have to do is slap the word pdf (in lowercase) on the end of that command. So to summarize, we tell the command tesseract to take og.png as the input file, new as the output file, and then pdf as the format. We don't have to put the file extension on the file called new, because it'll be added automatically. So we write: tesseract og.png new pdf. And that's all there is to it. We now have a fully searchable and accessible PDF called new.pdf. That wasn't that hard, right? I know I said it was super nerdy, but that was really pretty easy. We installed three things just by putting commands in the Terminal, and then we ran one more terminal command. That's it, and we've done it for free. While OCRing an image file was fun, I more often run across unsearchable PDFs. I mentioned earlier that I have a user manual for the cat feeder from Petlibro that's not searchable. It came up because I had an issue with the cat feeder, and support told me to reset it. Well, I had to scan the entire manual with my eyeballs to find where they described the reset process. But come on, who has that kind of time? I wanted to hit Command-F, search for "reset," and find it right away. I really needed this manual to be searchable. Well, it's time to use George's recommendation of OCRmyPDF to, well, OCR our PDFs.
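The two Tesseract invocations described above look like this in the Terminal (og.png is the example screenshot name from the episode):

```shell
# Plain text output: creates new.txt with the recognized text
tesseract og.png new

# Searchable PDF output: creates new.pdf ("pdf" names a Tesseract output config)
tesseract og.png new pdf
```
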
He gave us his simple command, which is only slightly more complex than the one we just made up for Tesseract. He invokes the ocrmypdf command and gives it the flag dash dash output dash type, which I can now type because of that setting I just changed. Then he gives the file type we want, which is pdf, this time weirdly in lowercase. Then he gives it his input file, named 1.pdf, and his output file, 2.pdf. So the whole command is: ocrmypdf dash dash output dash type pdf 1.pdf 2.pdf. I tested this command with my cat feeder manual, and it took 40 seconds to scan and OCR the 22-page PDF. It successfully OCR'd my PDF, but I was kind of surprised to see scads of error messages in the Terminal as it ran. Every error message was identical, complaining that some image object had no attribute. I put an example of one in the show notes, but I am definitely not going to read it to you. As all of the errors seemed to be associated with images in the PDF, I wasn't terribly concerned. I opened the PDF, and it looked exactly as it did before I ran the OCR process, except it was searchable and the text was selectable, just as I wanted. Now, there were images in the original PDF, and in some cases there were numbers with little dotted lines pointing to parts on the cat feeder, and I'm thinking perhaps OCRmyPDF was annoyed by those. In any case, if you get errors on embedded images or drawings inside your PDF, you don't need to be surprised, and I don't think it causes a problem. Okay, you can stop here if you want to, but I didn't. I had installed two tools using Homebrew, and I was able to replicate George's success on Linux in converting both images and non-searchable PDFs into searchable and accessible PDFs. I was even able to turn them into plain text files if so desired.
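Spelled out, George's command as typed in the Terminal is:

```shell
# OCR the scanned 1.pdf and write the searchable result to 2.pdf
ocrmypdf --output-type pdf 1.pdf 2.pdf
```
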
The whole process of learning, figuring this all out, and installing everything took me maybe 20 minutes if I round up, but what fun would it be if I stopped right there? In George's article, he explained that he created a folder into which he drops the file he wants to OCR. He changes the name to 1.pdf so he can run his hard-coded command, which saves the output as 2.pdf. I'm assuming he then needs to change the name of that file back to something he really wants it to be and put it back where it belongs. In that original folder he created, he also keeps a text file with his hard-coded command so he doesn't have to remember it. He can open it up, copy it, paste it into the Terminal, and it runs. While this works well enough, and it's certainly repeatable, I wanted to try to automate the process. I didn't want to always have to use the same folder or name the file 1.pdf. I wanted the freedom to have this work anywhere on my Mac with files of any name. Often I spend hours automating something that takes very little time for me to do, but I do it often enough that getting it automated is worth the trouble. This is not one of those times. I hardly ever need to OCR files. Seriously, it comes up once in a blue moon. And yet for some reason, this idea of automating it just tickled me. It was a challenge, and it sounded like fun. In Programming By Stealth, Bart has been teaching us about automating things on the command line, so this gave me a perfect opportunity to practice some of our new skills. For those amongst us who are not programmers but have managed to get this far by installing two tools on the command line, the next step isn't that big of a leap. Whatever you can type as a command in the Terminal, you can put into a shell script, which is like a little automating program, and then you can run it all in one go. We already know how to run the commands to OCR our files.
So why not slap them together into a shell script to make our lives easier? Since Bart taught us how to write bash scripts in Programming By Stealth installments 143 through 154, I decided to write my script in bash. My goals in the automation of George's process were as follows: allow the script to run on any PDF file in any folder; allow the PDF to have any name we like; and have the script export the OCR'd version of the PDF into the same folder as the original, but with -ocr tacked onto the end of the file name. This way I'd be able to tell the two files apart, and I wouldn't risk overwriting the original file. If I succeeded at these goals, I wanted it to run inside Keyboard Maestro, but that was a stretch goal. So, our script is gonna run on any file in any folder, or directory. In order to build the name of the output file, we're gonna need to extract the directory path from the input file and save that out as a variable. We're gonna strip off the .pdf at the end of the input file name, and then we're gonna build the output file name by adding together that directory path, the input file name, and -ocr.pdf on the end. Still with me? You got that? It's not too hard, but we're gonna get there in little steps. Scripting languages like bash and AppleScript take the first input to a command and give it the variable name $1. Variable names always have dollar signs in front of them once you've assigned them; before you assign them, they don't. It's kind of weird. I'm gonna call my script file ocrpdf.sh. The way you run a script file is you put ./ in front of the name. So what I wanna be able to do is write ./ocrpdf.sh myfile.pdf. Myfile is just a placeholder; it's gonna be called whatever we want it to be. Now when we run that command, myfile.pdf will automatically be assigned to the variable $1 in our script.
But we don't wanna use that name, because it can get reassigned. So let's create our own variable name; that's gonna be the first command in our script. I'm gonna call it INPUTNAME, so it's gonna say INPUTNAME equals $1. I'm taking $1 and shoving it into INPUTNAME. Now, $INPUTNAME will be the full path to the file. For example, if my file is on the Desktop, $INPUTNAME would be /Users/allison/Desktop/myfile.pdf. Okay, cool. When we tell the script to write the output file, we're gonna need to tell it where to write that file, which we've already decided is gonna be right back into the same directory as the input file. We can extract the directory name from $INPUTNAME so we have it ready for the output file. Luckily, there's a built-in command in bash called dirname that'll grab that directory name for us. I'll create a variable imaginatively also called DIRNAME, so I say DIRNAME equals dollar, dirname, dollar INPUTNAME. There's a bunch of parentheses around it, but you can read it in the show notes; I don't need to say all of it exactly right here. Basically, we're gonna run dirname on the input name, and that'll give us the directory name we want and shove it into that variable. Now, it's swell that we have the full path name from our input file. For our next trick, we need to extract the file name of the input file without its extension. If we can do that, then we can use the directory, plus the original input file name, plus -ocr.pdf, as the name of the output file. To get the input file name without the directory path and without the .pdf, we can use another nifty little built-in command called basename. So I'm gonna call my final variable INPUTBASENAME; INPUTBASENAME equals dollar, basename, dollar INPUTNAME, .pdf.
So now we've got the input base name by itself, and we've got the directory name by itself. Now, ideally, since OCRmyPDF can OCR image files too, I should write this generically so it could handle a PNG, a JPEG, or even a TIFF file, but I'm gonna leave that for another day. This is complex enough. The last thing we need to do to build the output file name is to slap -ocr.pdf on the end. I decided to create a variable called ADD for the additional text, so ADD equals, quote, -ocr.pdf, unquote. We now have all of the building blocks to create the output file name. $DIRNAME is the original directory path where we're gonna write the output file. $INPUTBASENAME is just the name of the file without the path or file extension. And $ADD is the -ocr.pdf we're gonna pop on the end, so we don't overwrite the original file and so we can tell which file has been OCR'd. To build the output file name, we need to concatenate all of this together. Concatenation is a fancy word for adding it all into one long string of text. (You can actually play with concatenation in Excel too; it's a fun function to go try.) In bash, you put the variable names inside squirrely brackets with the dollar sign on the outside, and any plain text just gets thrown in there without any brackets. We want the directory name, followed by a slash, then the input base name without the path or file extension, and then our added text, -ocr.pdf, on the end. I am definitely not gonna read this command out loud, because it's getting real gloppy now, but again, it's all documented in the show notes. We're now ready to add the last and most important bit of our script. We need to actually tell the script to run the ocrmypdf command.
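Put together, the name-building steps above look something like this. The variable names and the example path are just illustrations; the definitive script is in the show notes.

```shell
INPUTNAME="/Users/allison/Desktop/myfile.pdf"   # example input path

DIRNAME=$(dirname "$INPUTNAME")                 # /Users/allison/Desktop
INPUTBASENAME=$(basename "$INPUTNAME" .pdf)     # myfile (path and .pdf stripped)
ADD="-ocr.pdf"                                  # text to tack onto the end

# Concatenate the pieces: directory + slash + base name + "-ocr.pdf"
OUTPUTNAME="${DIRNAME}/${INPUTBASENAME}${ADD}"
echo "$OUTPUTNAME"                              # /Users/allison/Desktop/myfile-ocr.pdf
```
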
We'll run it essentially like George did originally, but instead of using 1.pdf and 2.pdf as our file names, we're gonna use our fancy new variables, $INPUTNAME and $OUTPUTNAME, instead. So the final command in this long script is: ocrmypdf dash dash output dash type pdf, $INPUTNAME, $OUTPUTNAME. Ta-da! I put the entire text of the completed script in the show notes, and I'm definitely not gonna read that to you. All right, I sent this script off to George to run on Linux without any instructions, and that succeeded just about as well as you would have expected: he didn't know how to use it. So let's step through the instructions on how to use the script. First, install Homebrew as I explained earlier. Then install OCRmyPDF as I explained earlier. Next, create the script by copying the text in this article and pasting it into a text file called ocrpdf.sh. You can call it whatever you want, but if you use that name, it'll be easier to follow along. In the Terminal, go to the directory where you saved the script, and change the permissions on the file so that it's executable by entering chmod +x ocrpdf.sh. I know that doesn't sound like it makes any sense, but it's changing the permissions so the file is allowed to be executed. In the Terminal, we run scripts by typing ./ before the script name; I mentioned that a little bit ago. This script requires an input file, so we need to run the script and tell it which file is the input file. If your script is in the same directory as the PDF you wanna OCR, and if, for example, the original file is called myfile.pdf, you would type ./ocrpdf.sh myfile.pdf. That's all you have to type, and it should create a file called myfile-ocr.pdf in the same directory that's fully searchable and accessible. Now, if the file isn't in the same directory as the script, enter the full path name for the file. That can be hard to type sometimes.
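For those following along, here's a sketch of what the completed ocrpdf.sh likely looks like, assembled from the steps described in this episode (the definitive version is in the show notes), along with the chmod step:

```shell
# Write the script to ocrpdf.sh (a sketch assembled from the steps above)
cat > ocrpdf.sh <<'EOF'
#!/usr/bin/env bash
# Usage: ./ocrpdf.sh myfile.pdf  ->  writes myfile-ocr.pdf next to the original
INPUTNAME=$1
DIRNAME=$(dirname "$INPUTNAME")
INPUTBASENAME=$(basename "$INPUTNAME" .pdf)
ADD="-ocr.pdf"
OUTPUTNAME="${DIRNAME}/${INPUTBASENAME}${ADD}"
ocrmypdf --output-type pdf "$INPUTNAME" "$OUTPUTNAME"
EOF

# Make it executable so ./ocrpdf.sh works
chmod +x ocrpdf.sh
```
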
So if you're on a Mac, you can just drag the file that you wanna OCR into the terminal after you type dot slash ocrpdf.sh, and it'll automatically put the full path into the terminal, including the file name. I don't know if that works on Linux, but it is a godsend, cause I can never remember how to type the path names correctly. Now let me tell you, this is my first time teaching other people how to write terminal commands and how to create a shell script, so I think it's highly likely that I've left out some steps or made a boo-boo or two in the instructions. Please go gently on me, but do correct me or ask me questions if this doesn't work for you. Now I'm gonna tell you a little secret, and only the people who stayed awake this long are gonna get to hear it. I did not write this all in one fell swoop. I made lots of mistakes and I had to look a lot of stuff up. I mean, it's a very short script, but it took me quite a long time to write it. Perhaps it's not a surprise to anyone that I made this many mistakes and had to look a lot of stuff up, but how I looked them up might be a surprise. In the past, I've gone to the Googles, put in a search term, then scrolled through the results looking for answers from Stack Overflow. That's a site where programmers ask and answer questions on coding. Sometimes I'd get lucky and find the answer on Stack Overflow, but often I'd have to search over and over to get the answer I needed. This time around, I asked ChatGPT the questions instead. I use Microsoft Edge as my Chromium browser rather than Google Chrome, and Edge has ChatGPT built right into the Bing search engine. The advantages of using ChatGPT are manifold. First of all, you get several summary-level answers. Each answer has a footnote that tells you the source. I can see on one question I got seven answers, and the first couple were from superuser.com and stackoverflow.com.
I can actually click on the link to the answer I'm interested in and then read the question and answer in full context. I can see how many people upvoted it — was that a good answer? Having it summarized, and having quick access to the source, is to me much better than a giant list of results from Google. And ChatGPT remembers what you're talking about, too. In a few instances, I'd ask it a question and then need to refine it by saying, I'm on a Mac, I forgot to tell you. I didn't have to repeat the question; all I had to write was no, answer for macOS, and it would say, oh, I'm sorry, I gave you Linux instructions, here's the answer for macOS. Now, while the answers are wrong about as often as people are wrong when they answer on the native websites, I found it very quick to work my way through the answers that weren't exactly what I was looking for. I was also able to command-tab away to work on something else during the 15 to 30 seconds it took ChatGPT to craft the answers to my questions. Now, I didn't rely wholly on ChatGPT inside Bing to do my work for me, but rather I had it help me build up each piece. I enjoyed having what Microsoft likes to call Copilot by my side. Now, the bottom line is I really enjoyed figuring out how to write a shell script on my Mac to automate the process of OCRing PDFs. Perhaps it's a bit nerdy for you, but it makes me feel really powerful to be able to do this. I remember a day when I used to want to automate things because all of the cool kids were doing it, but I couldn't figure out what to automate, and even if I did think of something, I didn't have the technical chops to pull it off. I think that's what programming is all about for me. Now I have an itch I want to scratch, and I know if I try to use the tools Bart and the other NosillaCastaways have taught me, I'll be smarter when I'm done.
One more thing: as I was writing this up, I kept thinking I'm not worthy to teach this stuff, and I bet there's a better, more elegant way to solve this problem. I wrote it up anyway, because Bart constantly says in Programming By Stealth that there are often many right ways to do something; even if his solution might be more elegant than mine, that doesn't make mine less right. This lesson that he keeps teaching me is probably my favorite thing about learning from Bart. Now, believe it or not, there is a part two to this article. After I got my little shell script running, I decided I could figure out how to put it into Keyboard Maestro so I don't even need to launch the terminal to run it. The solution is super cool and it was really fun, so stay tuned for part two. Now, I know the holidays are fully upon us and you probably have a lot of financial demands on you, but if you'd like to help fill the stocking of a lovely podcaster who entertains and educates you without fail every single week, please consider becoming a patron of the Podfeet Podcast. You can do that by going to podfeet.com/patreon, where you can choose a dollar amount that works for you and your family. It truly shows me the appreciation for the content we provide here. Today I'd like to welcome to the show a gentleman you've heard from before, but this time we're gonna have a little conversation. Welcome officially, Tom Maddock, to the NosillaCast. Hello, thank you for letting me be here, great. Well, Tom and I have been engaged with a few other folks, including John Gruber, on Mastodon about the subject of adding alt text, which is also called alt tags, to images when you post them to social media. Since Tom is blind, we thought it might be helpful for us to learn from him what that alt text is, why it's important, who it helps, and hopefully he'll give us a few tips on how to create good alt text and maybe even some tools to help us do that.
So with that grand introduction, Tom, why don't you first tell us a little bit about yourself? Well, let's see, I'm 50 years old. I graduated from Perkins School for the Blind back in 1993, then went on to college and graduated. I'm now working at Walmart, been there since '02, and I married my best friend back in 2015. Oh, nice. Now, you're blind, as we said up front, but you've got a couple of other interesting things in your background from your earlier life. Yes, when I was four years old, I was going to visit my grandparents. My mother was giving birth to our third child, and I got sick down there. I got some kind of influenza and couldn't breathe for about eight or nine minutes, and I was in a coma for about a year. A year? Yes, the whole year I was in a coma. And I wasn't supposed to be here, but I recovered, and I graduated high school and I graduated college and all these other things. Holy cow. So you came by everything easily? Not really, but yeah. Now, you were telling me that you have cerebral palsy as well. Is that a result of that? Yes, cerebral palsy is technically oxygen starvation to the brain, and since I didn't have any oxygen, that's how that came about. Technically, I learned in college during a term paper that cerebral palsy is defined as happening before, during, or after birth, up to a year old. Since I was four years old, it's technically not cerebral palsy, but everything else says it is; that's how they classify it. Okay. Well, I don't suppose it's all that important at this point to name it, right? Right, right, right. So you have — I just have what I have, and I do pretty good, I think. Yeah, it sure sounds like it. So you're in a wheelchair, and you have some trouble with the dexterity of your hands, is that correct? Yes, yes. Okay. I always feel not worthy when I talk to somebody like you. It's like I'm whining because, you know, my feet hurt because I walked too far at Disneyland yesterday, you know. What a big deal. Well, I wouldn't want to be there either.
It's too many people, too crowded. There were a lot of people. Yeah. But people in wheelchairs get to go to the front of the line. That's just a little — Yes, I don't like that. That's really a pet peeve of mine. Really? I really don't, yeah. Huh, I don't know, I think you should get a few perks, but anyway, we're not here to talk about that. First, I've talked about alt text a fair amount on the show, but I've never done it from scratch. So what is alt text? Can you explain it to us? What alt text is, is when you have a picture and you tap on a certain part of the screen — I'm not sure how people who are sighted do it — I know I can tap on a picture and it lets me put in a description. I can type in a description of what the picture is, and when I post it through apps like Mona for Mastodon, it comes over into the alt text part, so anybody with a screen reader can read what I wrote and they know what the picture is. If they can't see it, what's the point? I always say a picture is worth a thousand words, but if you can't see the picture, there are no words — unless the alt text tells you what the words are. So when you receive a picture from me, when you get to see one of my posts, how is it that you hear my alt text with your screen reader? Yeah, VoiceOver on the iPhone will read the alt text automatically when the picture's there; it's like part of the picture. Okay. And then when you go to save the picture, if the person does it right, like you've been doing lately, it copies it directly to the caption field of the picture, so I don't need to do any extra work. I can just mark it as a favorite and it goes up on my Apple TV for other people to see and enjoy. Oh, wow. Oh, that's interesting. So you can save the photo and it actually saves it in. I've got to try that; I did not know it could do that. Now, we've been describing it in the context of Mona, but it's for Mastodon, which is a darling of the blind community.
I mean, I don't think I've ever seen anything quite like it from Apple folks — people just embraced it because they wrote it very much with accessibility in mind. So that's pretty cool. So it allows you to hear it when you do it, and that's important to you, because otherwise it's just a meaningless post. Yes. I mean, it depends. The person can write something around it, but you don't know what the picture is, you know? You can say, oh, here's a nice picture, we were at Disney World last night, or Disneyland last night, here's a picture from our trip. But if it was just the picture, I would have no idea that the picture you posted last night was of the Disney castle all lit up with the fireworks and the lights in the background. I'd have no idea. So if somebody does — I mean, it is possible to write a post that includes a very specific description, but I feel like it's just contextually different. So in my post, I said, you know, it was really cool to see the Disney castle all lit up with the Christmas lights at night. But in my description, I said there were white dangly lights, and I think I said the building is blue, and it's against a jet-black sky, and, you know — Yeah. That's different than the way you would write it just to say, here's a post. Right, exactly. Because everybody else seeing your post sees the picture, so you wouldn't have to give those descriptions for somebody who could see it. Right, right. Now, I feel like it's becoming — And I just learned yesterday, I'm sorry. Go ahead. I just learned yesterday that when I share a picture with the caption in it to people in my family group, they don't see the captions automatically. Right. Now — I did not know that, so that's an interesting thing. And I'm gonna turn this like 90 degrees from where we were supposed to go, just because I ended up in a conversation with somebody as part of this thread that we had. John Gruber had asked a question about alt text, and so that's how I got involved in this conversation.
But this one very angry blind person started writing back and forth with me, and it was very interesting. They said that they purposely put content into the alt text so that the sighted people who don't bother to read their descriptions will miss something. It was just such an angry perspective; it really surprised me. But it came in the context of me saying, well, I don't actually normally read the alt text — it's not revealed to us automatically, we just don't see it. And they said, well, that's what really makes me mad. This person came back and said, yeah, that's just those terrible app developers not revealing the alt text to everybody. Well, except for Mona, because if you send a picture through Mona without captions, it will say, hey, do you want to add a caption? Do you want to add alt text to it? I don't think it does that for me. Yes, there's a setting in Mona. Oh, you can set it to do that. There's a setting — I just saw it recently — that says remind to send with alt text if you're sending a picture. Oh, that's cool, I'm going to turn that on. So, this person was mad that the sighted people weren't seeing it when they see a post; they were angry with the developers for that. But to me, if I can see this picture of Disneyland with the white lights and the black sky behind it, it's redundant information, and actually it would be kind of annoying to also see the alt text. Exactly. And I don't know how you would even see that — would it be like alongside it? Well, what's nice about Mastodon is, in Mona, I think it actually puts ALT on the picture, so if you want to see it, you can see it. Oh — for you, VoiceOver reads it automatically, so I don't know that there's a separate field. Yeah. So, like, Steve posted a picture of a giant Santa, and I can see in the bottom left-hand corner in Mona —
I can see a little text box, and if I click that, I can see it says: photo of me and my dog, Tessa, in front of a blow-up Santa lawn decoration that stands about 25 feet tall. We appear small. So that — oh, cool. I'm gonna get that picture; I'll tag you in it. Well, I'll get that off of Mona — the Mastodon app, is it? Yeah. But, I mean, we have access to it, but it's not in our face, and this person was kind of annoyed that we weren't being forced to look at it all the time, which I thought was just — Well, why would you, though? I mean, if you can see a picture, why would you need the extra thousand words when you can see the picture? Exactly. Now, one thing that strikes me — But you do have the option to see it, so that's good, at least. Yeah, yeah, I do like that, and that's definitely not true in all apps. So, you started talking about some tools to allow us to add these captions more easily. What do you do, or what's available to us that I don't know about? Well, there's one called Be My Eyes. It's — That's an iPhone app? Yes, it's an iPhone app, and what it does — up until recently, you could call in to get help with a picture. You'd say, hey, what's in this picture? And either AI or a person could help you with different things. But they really bumped up the AI recently, so that it can look at a picture and analyze the stuff in the background. I did all of this recently — I should send it to you — when I put in your picture of Disneyland; I was just curious what it was gonna say. And it puts in so many more things in the background; it's amazing what it can recognize. There was a post I saw more than a few weeks ago from Doctor Who — do you know that show, Doctor Who? Yeah, yeah. Well, there was a picture of one of the creatures, and I'm like, I wonder if this is gonna recognize it.
So I put it through Be My Eyes, and it knew what the creature was called. Oh, wow. So it has — you know, it's just great. It just adds a much richer description than whatever a person could ever think to write. So I'm a little confused. I'm a little confused. So Be My Eyes is an app that runs on the iPhone, and you're using it in order to find out what's in an image. I thought you were gonna be telling me how I can write better image descriptions or find better tools. Okay, you're talking about the other way around, got it. Yeah, so what you do is, first you share the picture to Be My Eyes, and then Be My Eyes will look at the picture and go, this is the description of the picture. It will pick out, you know, the background, and what the sky looks like, and what the castle looked like, and that there are lights bursting over it. It just goes really, really, really deep. This might give sighted people the excuse of, we don't need to do captions if you have these tools — why do we need to bother? No, because what you wanna do is save us the work of having to do that. If you would do it first — like, say you took your picture, you ran it through Be My Eyes, and then you put that text in your alt text along with whatever you wrote — we wouldn't have to do that. Save us the step, in my opinion. It's just, you know, better descriptions. It seems to me that if the person describing the photo does take a little bit of effort to write a sentence or two, you can pick out what's important — why was this interesting? Like what Steve did with his photo of Santa: he and our dog, Tessa, we appear small. That might not be something that AI would pick out, but that was the point of the photo, for you to realize that this Santa was ridiculously large. Right, but that's why I like to put both.
I usually put what the person wrote, and then if I want more description, I run it through Be My Eyes. So how do you use Be My Eyes to look at a photo if the photo's on your iPhone? So I take the picture — I can open it in Mona, let's say — and then you tap on it, you go to share, and then you can send it to Be My Eyes to analyze it more. And I can see the original alt text, whatever you wrote, before I do that. Now, okay, that's good, but I want a little bit more. And the other thing is, what if you want to know, what color is Tessa, your dog Tessa, right? You're not gonna say that in the alt text, because the sighted people are gonna see that, right? Right, or is that important to the story? I know, but what if I want to know? Sure. Because you're gonna look at it and go, oh, that's a brown dog, that's Tessa. But I'm not gonna know what color the dog is. But finally, I can say to the picture, what color is the dog in the image? And Be My Eyes will come back and go, oh, it's a brown dog — with red spots or whatever — if you want more information. And then I can choose to add that to my copy of the captions or not; I can figure out what's important for me to save for myself. Okay, okay, I got you, I got you. So from the sighted person's perspective, to create captions that are good — or alt text that's good, we're gonna use these words interchangeably because they pretty much are — what are you looking for in alt text? I mean, what makes good alt text? My first thought would be, look at the picture. What's the first thing to jump out at you? Is it the background? Is it the sky? Is it the castle that's all lit up? You know, what are you seeing yourself when you first look at the picture? What's your first gut reaction? You go, oh, that's cool, the lights over the castle are cool. And maybe the location, because — I can see there's a castle, you know, but I'm like, where is this?
Is that Ireland? Is that somewhere far away? Is that Disney World? Where is that castle? So it's important to put where the place is, because I have no frame of reference. I've never been to Disneyland, I've never seen it, so I wouldn't know what it is. So it's important to put what it is and where it is, and then anything that makes it stand out, maybe. Okay, okay. So maybe just imagine you're trying to describe the photo to somebody over the phone. Yes, that's a good thought, yes. Like, why was this interesting? I saw this really cool photo last night — I was looking on Threads while I was hanging out in a line at Disneyland — and it looked like a moon shape over the water, but there was this bridge, and it was actually a reflection of an arch. Just the way I'm describing it to you now, that's what you would put into the alt text. That's perfect, yes. That's exactly what you would put in the alt text. And what do you say to people who say it's too much trouble? Well, I understand it's a little extra work. You just took the picture — it's a really cool picture, you wanna share it to the world, that's fine. But then you're blocking people out, you know. And there are a lot of people on Mastodon who say, well, I'm not gonna reboost you if you don't put alt text on it. Yeah. The way I look at it — and I did a talk on this a hundred years ago, at I think it was BlogWorld Expo at the time — I entitled my talk Increase Your Audience Size Through Accessibility. Everybody wants more followers, right? Everybody wants more people to see their posts. People say they're still on X/Twitter because they have so many followers, even though they know that 80% are bots, but they want more followers — people love that. And it's like, well, wanna know a way to reach, I don't know, a couple million extra people? Why don't you throw an alt text on that, you know?
Right, and alt text isn't just for Mastodon, either. It's on X, it's on Facebook — any place that you can post a picture, you can add alt text. Yeah, it's in our Slack, and we have a bunch of blind people in our Slack, so don't be forgetting those folks either — the podfeet.com Slack. No, that's true, I forgot about that. Slack isn't something I've taken on yet; I've just recently gotten into Discord. Oh, cool. And it's in Discord too, right? I haven't gotten that far yet — I just started it like a few weeks ago. But I mean, you can put alt text on your images in Discord, I believe, right? I would assume; I haven't tried it. I haven't tried that, I don't know, but I bet you can. I bet you could, because that's just an image. I mean, like I said, if the image has the caption already in it and you share it to Discord, I might guess it would go over. Okay, okay, so back up on that — that was something that came up that I didn't realize. If you're on the iPhone and you have a photo up, if you swipe up, you can see the date the photo was taken, what kind of camera, you can see a little map. And in that whole thing, there should be a caption field just before all that. Correct, it says add a caption. And I had never realized this. I occasionally put things in the caption, but if I put a caption on this — in my experience, testing this yesterday — it doesn't stick. When I go away from the photo and I come back, it's gone. But you have to hit Done. Once you're done typing it, you have to hit Done. Okay: Pat and Steve in front of a white Polestar. We took a picture for Bart because Bart's gonna be buying a Polestar, and we just happened to run across one. So, Polestar EV — okay, that's good enough. So I'm gonna tap Done — maybe I never hit Done before. So now you're saying if I share this to Mona, or how about in a text message to you, that's supposed to work, right? I suppose — I can't get to my texts right now, but yeah.
Well, we'll leave that for everybody to find out whether this worked. I told Tommy he wasn't allowed to have anything playing back on the recording at the same time, so he's not listening to VoiceOver or anything like that. Yeah, I just got the text. Okay, don't look at it. Don't, yeah. I won't look at it, I won't look at it, I won't. So that's interesting — it never occurred to me that I could do that there. And now, what's really cool, if this works — and I assume it does, because you said it does — it actually does look like it stuck this time. So if you put the caption on the photo in Photos, then you can take that same photo, send it to Mastodon, send it to Slack, send it to Discord, send it to Facebook, and it would actually already be there. Yes. Interesting. Huh, I gotta test that out. Because that's a big thing: I mostly do it on my Mac because I wanna use my clipboard manager to save that text and save the text of my message, because it's two separate things, so I'm going back and forth, back and forth, back and forth doing it. But if I could just put it in the photo — on the Mac, does the caption field show up there? Let's see. Oh, well, I'm syncing that photo. This is real-time excitement, everyone. Let's see what happens, let's see if it's there. I do not see a caption field. There's a title field. Let me see if I do Get Info. What have I got? I got a title, I got a location, it's 24 millimeters, keywords tell me who's there. It tells me I was in the Mickey and Friends and Pixar Pals parking structure, and nothing about that caption. Huh. Yeah, I know — I have sent pictures with captions to my family, and they say they can't see the captions either. Hmm. But maybe that's just a problem with the iPhone sending texts.
Well, my impression has been that Apple has two completely different teams working on Photos for the Mac and Photos for the iPhone, and that they occasionally have coffee together and go, oh, that people and places thing — or people and pets thing — maybe we should sync that between them. But, like, I've got a title field on the Mac, which I fill out all the time so that I can find my photos, but that doesn't pass over to the iPhone, and the iPhone's got captions. And I don't see a title field on the iPhone. Nope, there isn't one. So that's interesting. I mean, if this caption thing is gonna be somewhere, I think having it on the iPhone is probably the better place, because most people take a photo and post it, right? So I think those of us who immediately go over to a Mac are probably fewer and farther between. Right. So that's pretty cool. Is there anything else that you wanted to tell us? Cause we're kind of coming up on our time here. Anything else we should be thinking about? Like — oh, I wanna stick one in, and then I'll let you answer my question. Tom and Kevin Jones have both given me positive reinforcement publicly for good captions when I do them, and I gotta tell you, it makes me wanna do it ten times more. When I go, wow, I made a difference — somebody was able to get my content who otherwise wouldn't have been able to see what I was trying to talk about. And I'm very responsive to a pat on my little pumpkin head, so keep doing that when I do a good job. Yes. I saw a post on Mastodon last night talking about this, and someone said, think of it like this: you see a screenshot of a weather forecast. Would you rather the description your phone reads to you say, this is a screenshot of a weather forecast, or, the temperature over the next few days will be 55 and sunny? Which one would you rather have, you know?
One gives you actionable information and one does not. Now, I'm gonna declare one thing I never describe. I will say Discord logo if that's what I'm posting a picture of, cause it doesn't matter whether it's a D or whatever — I know it looks like a little controller thing — that doesn't matter, right? Please tell me I don't have to describe logos. Well, I don't know. But see, that's where, if you put that in a picture, I could run it through Be My Eyes, and then Be My Eyes could describe it if I needed it to. I don't know what that description is; I don't know what that logo is. Yeah, but it doesn't matter. But if I saw it once, and then I saw it the second, third time, I'd go, okay, I know what that is, I don't need to do that. But if it's the first time, I might want to know what it is. Yeah. But the second, third time, maybe not. But that's where, like I said, these kinds of tools can come in handy and tell you what it is. Let's talk about it more. On my blog posts, if I'm using just, like, the NosillaCast logo, it just says NosillaCast logo. Like — Is it a logo? Yeah, every episode of the NosillaCast that I post the blog post for has the NosillaCast logo on it, and I just wrote NosillaCast logo. But you don't know what it looks like, is the thing. Sorry? I don't know what the logo is; I can't tell what it is. You know what I should do, though, now that I think about it: those logos are all stored in WordPress, and I don't have to write the alt text each time. I'm going to go back and fix all of them. Like, the PBS, the Programming By Stealth logo, is very cute — I should describe that one. Security Bits is very cute. What you could do is make one of your TextExpander snippets for it, so you don't have to type it each time. Yeah, well, no, it's already embedded in the photo — I just point to it in WordPress, I don't upload it every time. So I was just being lazy the first time. So I could fix that. I'm going to fix that.
But when it's generic — like I've got a Bartender logo on a post I just did — it's just going to say Bartender logo, to be fair. Right, right. Well, no, I did say it had a little tuxedo person on it, so I guess I did do a better job. All right, I'm conflicted, you can tell. That's fine. You're the host, you can do what you want. Well, and everybody can do what they want — nobody has to do alt text. But if you want more people to enjoy your content, which is the whole point of posting things publicly, maybe you should consider doing it, right? Yeah. My wife and I were sitting here yesterday doing pictures, and I kept asking her, should I post this? And she said yes. And then I would dictate it into the field and say, that sounds good? She goes, yep. I said, okay. And then I'd run it through Be My Eyes, and it would tell me more — like, man with a beard and woman with a pink dress — and I'm like, oh, I didn't even know what she was wearing today. I didn't even have to ask what she was wearing; the picture told me when I posted the pictures yesterday. Oh, nice, nice. Well, this has been really helpful. I like your perspective. You're one of those positive people who make me more encouraged and interested in doing things to make my content more accessible, because I do want more people to enjoy my content. The angry people — You're very good at that. Well, thank you. The angry people, not so much — the ones saying all this software doesn't work for the blind, don't even try it. Yeah, it's always at the top of mind. I feel a little bit guilty: I'm going to be talking this week about a battery pack, and it's got a big, bright display, and I talk about how awesome the display is, and I never point out, yeah, you're not going to be able to read that if you're blind. But Kevin Jones just said to test Be My Eyes on it to see if it works. So maybe when we're done recording, you can help me learn how to do that.
So this has been really — And does it chime when you plug it in? Does it make any noise? No, no, nothing. No? Okay. But there are battery packs that do, and those are definitely the better way to go. Yeah, some of them beep, some vibrate — all kinds of things they do. Yeah, not this one. No. But I do need to cut us off. If people want to follow you on Mastodon, what's your Mastodon handle? iGuy7200 at dragonscave.space. Dragonscave.space, okay. I will make sure there is a link in the show notes to that, and thank you so much for coming on the show. This was really fun to actually get to talk to you. Appreciate it, nice meeting you too. Well, I wanted to add one more thing. In my conversation with Tom, you remember he was using Be My Eyes to add AI-generated captions to his photos before posting. As you should never do during a recording, I installed the app on my iPhone live, and I didn't seem to have the same user interface in Be My Eyes as Tom, so we got kind of confused in the middle there. Now, I'm going to do a tutorial on how to get this to work, but for those who are curious right now, I figured out what was wrong. When you first open Be My Eyes, you tell it whether you'll be requiring assistance or whether you're willing to give assistance to someone else over the phone. I told it I would require assistance, so it would treat me as blind, but then I logged in with the account I created back in 2015 in hopes of providing other people with assistance. Sandy signed up for this as well back then, but we've never been called, which is very sad. Anyway, logging into this existing account flipped the app away from needing assistance and back to wanting to provide assistance. So I had to delete the app and log in with a fresh account that I told did need assistance, and now I was able to see the AI tools that Tom can see. Spoiler: what Tom taught me with Be My Eyes is nothing short of amazing.
The AI descriptions are — they're scary, they're so good — but you're gonna have to wait for me to write my tutorial so you can hear more about it. Well, that's gonna wind us up for this week. Did you know you can email me at allison at podfeet.com anytime you like? If you have a question or a suggestion, just send it on over. You can follow me on Mastodon at podfeet at chaos.social. Remember, everything good starts with podfeet.com. If you wanna join in the conversation, you can join our Slack at podfeet.com/slack, where you can talk to me and all of the other lovely NosillaCastaways. You can support the show at podfeet.com/patreon, or with a one-time donation at podfeet.com/paypal. And if you wanna join in the fun of the live show, head on over to podfeet.com/live on Sunday nights at 5 p.m. Pacific Time and join the friendly and enthusiastic NosillaCastaways. Thanks for listening, and stay subscribed.