There are a lot of questions about why the astronauts in the Apollo 11 surface EVA footage look sort of ghostly. People say this is clear evidence that it was faked, that it's trying too hard to look otherworldly. Well, it's not. It just comes down to the TV technology of the day. Anyone born before about the year 2000 will remember the big TVs we all had at home growing up, the cathode ray tubes. These are not the streamlined TVs we have now that sit flush against a wall. These TVs had massive, thick backs because of the way the mechanics inside worked. It all starts with the camera. The scene being filmed, say two astronauts walking on the moon, is focused through a lens onto a photosensitive plate inside the camera. The image on that plate is scanned by a beam of electrons. The beam crosses left to right, top to bottom, covering the full plate, and encodes that image as an electromagnetic wave: every peak on the wave corresponds to the brightness of a given point on the plate. It was standard in the 1960s for that beam to scan the full plate 30 times every second. That's where we get the playback rate of 30 frames a second. So that's the first part. Next, the image seen by the video camera has to be transferred to a monitor so someone can actually see it. This is where the cathode ray tube comes in. The monitor has its own electron beam, and that beam is modulated according to the information encoded by the camera. The same way the camera left us with an electromagnetic wave of peaks and valleys, with the peaks corresponding to brighter points, the beam inside the monitor is varied to mimic the signal from the camera. That's how the image goes from the camera to the TV screen. Well, most of the way. There's still a little bit more. That beam of electrons is fired at the front of the tube, where there's a screen coated in phosphor.
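If it helps to see that scanning process written out, here's a toy sketch in Python of how a raster scan turns a 2-D image into a 1-D brightness signal. The brightness values are made up for illustration; the point is just the left-to-right, top-to-bottom ordering.

```python
# Toy raster scan: the camera's electron beam reads the photosensitive
# plate left to right, top to bottom, turning a 2-D image into a 1-D
# brightness signal. Brightness values here are hypothetical.
image = [
    [0.1, 0.8, 0.3],  # top scan line
    [0.0, 0.9, 0.5],  # middle scan line
    [0.2, 0.4, 0.7],  # bottom scan line
]

signal = []
for line in image:           # top to bottom
    for brightness in line:  # left to right
        signal.append(brightness)  # each peak = brightness of one point

print(signal)
# At the 1960s broadcast standard, the beam repeated this full scan
# 30 times a second, giving 30 frames per second.
```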
Every time an electron hits that phosphor coating, it appears as a point of light. That light varies in intensity according to the electromagnetic wave the electron beam is carrying: a higher peak translates to a brighter point of light. All together, that builds out an image in points of light. Because we aren't sitting super close to the TV, at least you were always told not to sit too close to those old ones, you don't actually see the individual points of light. What you see is the full image. Your brain takes all those pieces of light and puts them together to form an image. It's the same way digital images now are pixelated: we can't tell what the image is when we're close enough to see every pixel, but when we move back, we see the full image. It's also like looking at a pointillist painting. The same way the camera read the image on that photosensitive plate line by line, the image is displayed on the monitor in the same way, left to right, top to bottom. And there's even data encoded in that wave to blank the beam at the end of each line to make sure we end up with a clear, crisp picture. But that's not all. It gets a little more complicated by virtue of the technological limitations of the era when this technology was developed. The image we see on a cathode ray TV screen is the glow of electrons hitting the phosphor on the faceplate. It glows and then fades. So if the electron beam scanned the screen in one pass, top to bottom, the image at the top would be fading by the time the image at the bottom of the screen was drawn. The solution was to break up the frame. A standard frame is 525 lines. Divide that by two and you have two fields of 262 and a half lines each. The beam fills in the odd-numbered lines first and then returns to the top to fill in the even-numbered lines.
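The field math above can be sketched in a few lines of Python. This just splits the 525 numbered scan lines into the odd and even fields as described; the half line comes from the fact that the two fields average 262.5 lines each.

```python
# Sketch of interlacing: a 525-line frame split into two fields,
# odd-numbered lines scanned first, then even-numbered lines.
LINES_PER_FRAME = 525

odd_field = list(range(1, LINES_PER_FRAME + 1, 2))   # lines 1, 3, 5, ...
even_field = list(range(2, LINES_PER_FRAME + 1, 2))  # lines 2, 4, 6, ...

print(len(odd_field))   # 263 lines
print(len(even_field))  # 262 lines
# The two fields average 262.5 lines each, and together they
# interleave back into one complete 525-line frame.
frame = sorted(odd_field + even_field)
print(frame == list(range(1, LINES_PER_FRAME + 1)))  # True
```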
That way, before the image at the top can start fading, there's new, bright information, so you never see a dip in the image. It's a process called interlacing: two fields of video are put together to create one frame. And again, our brain can't see this happening. It just sees a clear video image. Now, the challenge with these kinds of systems when you're talking about filming live from the moon is that the whole thing relies on complicated tubes. The tube inside the monitor generates the electron beam inside a vacuum vessel. A heated cathode supplies the electrons, and coiled wires deflect the beam so it can scan the screen left to right, top to bottom. It was a hot, heavy system that drew a lot of power, all things you don't really want to deal with when you're traveling to and working on the moon. So NASA modified the system and came up with a solution that would be less cumbersome for Neil Armstrong and Buzz Aldrin to work with on the moon. The EVA camera Apollo 11 used had one imaging tube that worked at a rate considered slow-scan television: 10 frames a second with 320 lines per frame, and no interlacing. The bandwidth was also very low, 0.4 MHz versus the 5 MHz standard then used for broadcast television. The Vidicon tube used in this kind of imaging also caused a bit of a lag, adding some smeariness to the image. Because it was a different frame rate with a different number of lines and no interlacing, this data came back from the moon and could not be read by any TV on the planet. So NASA had to convert the image from the moon into something that could be used for broadcast, and this is where the image picks up that classic ghostly look we're all familiar with. Converters were installed at certain ground stations to generate the right kind of signal. First, another Vidicon camera was aimed at a TV screen displaying the lunar image.
This camera recorded the 10-frames-per-second lunar feed at the broadcast rate of 60 fields per second, but only captured a picture when there was a full image on the screen. A full image arrived every tenth of a second, so one out of every six fields contained a picture. That left five blank, which meant the next step was filling in the missing five fields. The good field was recorded onto a magnetic disk and then replayed five times to fill in the gaps. This yielded the necessary 60 fields per second of 262.5 lines, which is the same as 30 frames per second with a full frame of 525 lines. It's this repeated image, used to fill out the necessary frame rate, that gives the Apollo 11 footage that weird ghostly feel. From there, the signal was ready to be broadcast around the world via radio dishes, the same way all TV broadcasting was done at the time.

So what do you guys think? Do you still think the footage looks faked? Tell me why in the comments below. I would love to take this one on yet again. And of course, if you have other questions about anything Apollo related or old spaceflight related, leave me those questions and comments in the comment section below as well. As always, be sure to follow me on Twitter and Instagram for daily Vintage Space content, with new videos going up right here every single week, unless I have to take a week off, in which case I will post something on my community tab. Be sure to subscribe so you never miss an episode.
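One footnote for anyone who wants to check the scan-conversion arithmetic for themselves: here's a quick Python sketch using only the numbers given above, showing why each good lunar field had to be replayed five times.

```python
# Sanity-checking the ground-station scan conversion, using the
# figures from the description above (nothing here is measured data).
SSTV_FPS = 10               # lunar camera: 10 frames per second
BROADCAST_FIELDS_PER_SEC = 60
BROADCAST_FPS = 30
LINES_PER_FIELD = 262.5
LINES_PER_FRAME = 525

# At 60 fields/s, each 1/10-second lunar frame spans 6 broadcast fields.
fields_per_lunar_frame = BROADCAST_FIELDS_PER_SEC // SSTV_FPS
print(fields_per_lunar_frame)  # 6: one real field plus five blanks

# The magnetic disk replayed the one good field to fill the blanks.
replays_needed = fields_per_lunar_frame - 1
print(replays_needed)  # 5 replays per good field

# 60 fields/s of 262.5 lines carries the same line rate as
# 30 frames/s of 525 lines.
print(BROADCAST_FIELDS_PER_SEC * LINES_PER_FIELD)  # 15750.0 lines/s
print(BROADCAST_FPS * LINES_PER_FRAME)             # 15750 lines/s
```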