It’s a familiar scene in countless TV crime shows and science fiction movies: a computer whiz pulls up a photograph or video on their screen, and with a few clicks, the image is enhanced to reveal a key piece of evidence that solves the mystery. But these fictionalized narratives where software is used to sharpen a highly pixelated image are just that—fiction. No amount of improvement in satellite electronics can compensate for the low resolution of a small telescope, and similarly, no post-processing techniques or software can create information that isn’t there to begin with.
Let’s use an example to illustrate the concept. Remember when most of our music collections went from CDs to digitally compressed formats like MP3? This allowed even the first iPods to put “a thousand songs in your pocket.” This worked because the starting point was a “high resolution” digital song in CD format, to which compression was applied to eliminate redundant information. There’s a lot of pretty neat math involved, but the essence of compression is that music is notes played on instruments, and a musical score plus the specification for the instruments can be saved with a lot less information than is needed to record at full CD fidelity.
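To put rough numbers on those savings, here is a quick back-of-the-envelope calculation. The CD figures are the standard Red Book audio parameters; the MP3 bitrate is a common choice from that era, not a figure from this article:

```python
# Rough data-rate arithmetic behind "a thousand songs in your pocket".
# CD audio parameters are the standard ones; the MP3 bitrate is one
# common setting (illustrative, not from the article).

CD_SAMPLE_RATE = 44_100      # samples per second, per channel
CD_BIT_DEPTH = 16            # bits per sample
CD_CHANNELS = 2              # stereo

cd_bits_per_second = CD_SAMPLE_RATE * CD_BIT_DEPTH * CD_CHANNELS
mp3_bits_per_second = 128_000  # a common MP3 bitrate circa the first iPods

print(f"CD audio:  {cd_bits_per_second / 1e6:.2f} Mbit/s")
print(f"MP3 audio: {mp3_bits_per_second / 1e6:.2f} Mbit/s")
print(f"Compression ratio: about {cd_bits_per_second / mp3_bits_per_second:.0f}:1")
```

That's roughly an 11:1 reduction, achieved by discarding redundant and perceptually unimportant information from a high-fidelity original.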
Now, instead, imagine that the CD were recorded by sampling only a single averaged sound for every few seconds of music. Would the song be even recognizable? Could it somehow be reconstructed given that so many notes had been smashed together into a single tone? Unless you happened to already know what the song was and were able to match the blurry pattern that you heard to this prior knowledge—and, importantly, the blurry pattern wasn’t identical to a hundred or a thousand other songs’ blurry patterns—it would just be incomprehensible noise.
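This loss of information is easy to demonstrate. In the sketch below (the "songs" are made-up sample sequences for illustration), two very different waveforms collapse to the exact same averaged recording, so no algorithm could tell them apart afterward:

```python
# Two different short "songs" (sample sequences) that collapse to the
# same averaged recording. The numbers are made up for illustration.

def block_average(samples, block):
    """Replace each block of samples with its single average value."""
    return [sum(samples[i:i + block]) / block
            for i in range(0, len(samples), block)]

song_a = [0, 4, 8, 4, 0, -4, -8, -4]   # one waveform
song_b = [4, 0, 4, 8, -4, 0, -4, -8]   # a very different waveform

# Averaging every 4 samples yields identical "blurry" recordings:
print(block_average(song_a, 4))  # [4.0, -4.0]
print(block_average(song_b, 4))  # [4.0, -4.0]
# Given only [4.0, -4.0], there is no way to tell song_a from song_b.
```

Averaging is a many-to-one mapping: once it has been applied, the original is unrecoverable in principle, not just in practice.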
Now let’s translate that to the imagery realm. A telescope is another form of recording device; it samples light instead of sound. There’s a fundamental aspect of physics that says that the wider a telescope’s aperture, the finer the detail that it can sample. Details finer than this limit cannot be seen by the telescope and appear as a single large pixel, rather than many individual pixels. Since these fine details were never individually recorded, there’s no way to bring them back. For the technically inclined, the resolving power of a telescope is given by the “Rayleigh criterion,” and the smallest resolvable detail is inversely proportional to the diameter of the telescope’s aperture. So if telescope A can achieve 2-foot resolution, and telescope B has an aperture twice as big as telescope A, telescope B can achieve 1-foot resolution.
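The Rayleigh criterion can be sketched numerically. The wavelength and orbital altitude below are typical assumed values (green light, a low-Earth-orbit satellite), not figures from the article:

```python
# Back-of-the-envelope Rayleigh-criterion calculation.
# Assumed values (not from the article): green light, a 500 km orbit.

WAVELENGTH = 550e-9   # meters (visible green light)
ALTITUDE = 500e3      # meters (a typical low-Earth-orbit altitude)

def ground_resolution(aperture_diameter):
    """Smallest resolvable ground detail: theta = 1.22 * lambda / D,
    projected onto the ground from the satellite's altitude."""
    theta = 1.22 * WAVELENGTH / aperture_diameter  # radians
    return theta * ALTITUDE                        # meters on the ground

print(f"0.5 m aperture: {ground_resolution(0.5):.2f} m resolution")
print(f"1.0 m aperture: {ground_resolution(1.0):.2f} m resolution")
# Doubling the aperture diameter halves the resolvable detail.
```

With these assumptions, a half-meter aperture resolves roughly two-thirds of a meter on the ground, and doubling the aperture halves that figure, just as with telescopes A and B above.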
So is there actually such a thing as image enhancement? We see “sharpening” filters in Photoshop and other applications. If they can’t create new information, what are they doing? Fundamentally, these tools rely on an assumption: edges are sharp. So if we see something at lower resolution that looks like an edge, these tools can make it look sharper, or more distinguished from its surroundings, in the “enhanced” image. By making an educated guess, the edges of buildings or lines on a tennis court can be sharpened, but information that isn’t in the original image can’t be created.
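A minimal version of this "assume edges are sharp" trick is unsharp masking: subtract a blurred copy from the original and add the difference back, which exaggerates whatever contrast already exists. Here is a sketch on a single 1-D "scanline" (the implementation details are simplified for illustration; real tools use 2-D kernels):

```python
# A minimal unsharp-mask sketch on a 1-D "scanline": sharpening only
# exaggerates contrast that is already present; it adds no new detail.

def blur(signal):
    """Simple 3-tap moving average (edges handled by clamping)."""
    n = len(signal)
    return [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def unsharp_mask(signal, amount=1.0):
    """sharpened = original + amount * (original - blurred)"""
    blurred = blur(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

scanline = [0, 0, 0, 10, 10, 10]   # a soft "edge" in an image row
print(unsharp_mask(scanline))
# The values overshoot on both sides of the edge, which reads to the
# eye as "crisper"—but every output value is computed from inputs that
# were already there.
```

Note that the filter is purely local arithmetic on existing pixels: it makes an edge look more pronounced, but it cannot conjure a license plate out of a blob.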
For example, if words painted on a runway are legible in an image with 30 cm resolution (left), but not in an image with 1 m resolution (right), enhancement cannot reveal the words in the lower resolution image because that information was lost.
Back to the TV example – what happens when, despite knowing better, we attempt to “enhance that” by applying image enhancement to the 1 m image? Here’s what happens when we sharpen it once:
And once more:
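The same runaway effect can be sketched in code. Below, fine detail (made-up numbers) is block-averaged away, then "sharpened" twice with a crude 1-D filter; each pass just exaggerates the blocky step left by averaging, pushing values further from the lost original rather than back toward it:

```python
# What "enhance it again" actually does: each sharpening pass amplifies
# the artifacts of the previous one. Values are made up for illustration.

def sharpen(row):
    """One pass of a crude 1-D sharpen: row + (row - 3-tap blur)."""
    n = len(row)
    blurred = [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
               for i in range(n)]
    return [2 * r - b for r, b in zip(row, blurred)]

original = [1, 9, 2, 8, 11, 19, 12, 18]        # fine detail (lost forever)
lowres   = [5, 5, 5, 5, 15, 15, 15, 15]        # each block of 4 averaged

once  = sharpen(lowres)
twice = sharpen(once)
print(once)   # the step between blocks starts to over/undershoot
print(twice)  # ...and the second pass makes the ringing even worse
```

The overshoot grows with every pass, while the true values at the boundary (8 and 11 in this toy example) never reappear.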
As you can see, an image is just like a book, a painting, or a master audio recording. The information captured at its creation is all that will be available a day, a month, or a decade later. So when you hear someone say that a low-quality satellite image can show you the same thing as a high-quality image, feel free to point out that making it bigger doesn’t make it clearer.