Thursday, December 15, 2022

To see the flower with eyes we do not have

The petal to the bee is luminous

I’m not sure Hollywood will ever get over the “you only use 10% of your brain” trope. Sure, it’s not true, except in the most boring sense (different parts of the brain do different things, so often only about 10% is active at any one time), but there’s something so tantalizing about the idea. If we only use a fraction of our potential, what else is out there? What would the world look like to someone who had access to more; who could break free of our tiny slice of consciousness? What wonders wait outside the circumscribed corner of reality in which the evolved, embodied senses of humanity have left us huddled?

This is how I feel about light. 

If you’ve ever taken a physics class, you’ve probably seen a diagram of the electromagnetic spectrum at some point. All the different flavors of EM radiation, laid out in a neat line: gamma rays, X-rays, UV light, visible light, infrared, microwaves (radar lives in there too), and radio. (Really? Radio? That one always gets me. For some reason it feels really weird that a medium we experience purely as sound is encoded in light. I have this false image stuck in my head that radio is beamed through the air via special “encoded sound waves” that get translated back to the audible range by the miracle of technology. Sometimes fiction is stranger than truth.)

Anyway, one thing you may notice is that the “visible light” portion of the spectrum is TINY. It’s a teensy-weensy sliver in the diagram; so small that it has to get its own blown-up subsection just so you can see the whole rainbow laid out. Out of the whole vast expanse of electromagnetism, that’s the only part that our eyes translate into imagery.

That’s not to say that we don’t sense the rest of the spectrum at all! Not so. The longer wavelengths out in the far infrared and beyond are what we feel as radiant heat, which is why heat lamps tend to be red (check out which visible color bumps up against the infraRED region). Microwave ovens work on a related trick: water molecules soak up those long wavelengths and turn them straight into heat, which is why microwaves are so effective at cooking food. In the other direction, you could say that the human body “senses” UV light by getting sunburned, although that’s kind of like saying you can “sense” someone’s presence when they punch you in the face.

But still, when it comes to using light to its full potential (that is, to see stuff), we’re limited to that minuscule slice in the middle. Why? For a clue, take a look at this diagram.

The shape of the black curve is the “Planck function”: the spectrum of light emitted by an ideal hot object at the same temperature as the sun. The actual sun doesn’t line up perfectly with this line, for various reasons, but the overall shape of the curve is the same.
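
If you want to see where that black curve comes from, it only takes a few lines to evaluate. Here’s a minimal sketch in Python (assuming NumPy; the 5778 K figure is just a commonly quoted value for the sun’s effective surface temperature):

    import numpy as np

    # Physical constants (SI units)
    H = 6.626e-34    # Planck constant, J*s
    C = 2.998e8      # speed of light, m/s
    K_B = 1.381e-23  # Boltzmann constant, J/K

    def planck(wavelength_m, temperature_k=5778.0):
        """Spectral radiance B(lambda, T) of an ideal blackbody."""
        return (2 * H * C**2 / wavelength_m**5) / (
            np.exp(H * C / (wavelength_m * K_B * temperature_k)) - 1
        )

    # The curve peaks right around 500 nm, squarely inside the visible range.
    wavelengths = np.linspace(0.1e-6, 3.0e-6, 2000)
    peak_nm = wavelengths[np.argmax(planck(wavelengths))] * 1e9
    print(f"Peak emission at roughly {peak_nm:.0f} nm")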

For now, focus on the red region: the amount of light of each wavelength that reaches the surface of the Earth. This subtracts out all the energy that gets absorbed or reflected by gases in the atmosphere (hooray ozone!), which leaves quite a few gouges in the spectrum. Regardless, take a look at that big solid chunk that falls directly within the visible range. Coincidence? I think not!

If I had a “window of detection” that was only 320 nanometers wide, and I had to choose which part of the EM spectrum to point it at, the visible range would be my first choice. Having more energy to absorb is important when you’re taking pictures: more energy reflected off of each square inch of area means you can put together higher-resolution images. If the reflected light is too weak, on the other hand, you have to start making your “pixels” bigger in order to get enough signal strength in each one.

There are presumably other evolutionary pressures on the human eye, too. Think about how much the color of food affects our perception of it. I don’t think it’s a coincidence that basically all candies are brightly colored. The signals from the 380-700nm range must have been pretty strong indicators of what things were good to eat back in the ol’ ancestral environment.

But that’s only true for humans! Think about those viral pictures of “what flowers look like to bees.” The petals seem to glow, brighter and brighter the closer you get to the pollen at the center. The flower to the bee is luminous, far more entrancing even than it looks to us.

Bees’ visual range is shifted down from ours by roughly 50 to 80 nanometers: they can see from about 300-650nm, compared to our 380-700 range. To a bee, then, the color red means nothing, but the world lights up in the ultraviolet range. Nice deal if you’re looking for flowers, not so much if you’re on the hunt for tasty red berries.

All right, so bees can see a few dozen nanometers’ worth of the spectrum that we can’t. Whatever, we keep them locked up in giant apartment complexes and put the fruits of their labor in our chamomile tea. Life is full of tradeoffs. So what? 

In space, a dozen-pupiled eye

On July 23rd, 1972, the first satellite of the Landsat mission launched from Vandenberg Air Force Base. The Apollo Moon-bound missions had captured the public imagination with their photographs of the Earth’s surface as seen from space. The government decided it was worth spending the money for a constant stream of imagery taken from orbit: a global record of the world’s terrain, continuously updated. They were crazy back then.

The main sensor on the Landsat-1 satellite was the “return-beam vidicon (RBV),” basically a TV camera pointed down at the surface. This seems to me like an eminently sensible way to do things; if I had been on the design team, I would probably have suggested the same thing. Whenever spy satellites show up in big-budget movies, this is still how they’re portrayed: taking live video of the Earth from above. How else would you do it, anyway?

The RBV wasn’t the only sensor on Landsat-1, though. There was also a weird little device called the “multi-spectral scanner (MSS)” that used an array of tiny sensors combined with an oscillating mirror to sweep its gaze back and forth across whatever patch of ground happened to be right below the satellite at that instant. Combined with the orbital movement of the satellite itself, the result was an overlapping zig-zag of scans running across the surface of the Earth under Landsat-1’s orbit path.

The scientists were quick to thank their lucky stars that they had the MSS, because the return-beam vidicon broke pretty much as soon as the satellite reached orbit. (Well, it didn’t break, exactly. It was just that every time they turned it on, the satellite started to spin out of control.) So now the government had a $2 million satellite that could only see the ground through this quirky little experimental sensor.

And as it happened, the MSS was not limited to the visible spectrum.

The tree radiant

Put yourself in the shoes of a leaf. Your job is to absorb light from the sun and turn it into sugary chemicals, which the rest of the plant uses to grow, fight off parasites, etc. Not all wavelengths of light are equally good for this purpose. Red light is the best at it, blue light is ok, green light is pretty bad, and infrared light has a tendency to COOK YOU ALIVE.

Because of this, leaves tend to be fairly hostile to IR light. So hostile, in fact, that they reflect almost all of it. In the sense that the “color” of an object is determined by which wavelengths it reflects, active plant matter is overwhelmingly “infrared-colored.” The graph of vegetation reflectance is dominated by the huge mountain in the near-IR range.

We puny humans, of course, see none of this. Our vision is limited to the slice waaaaay off on the left, below the “0.7” mark on the x-axis. That tiny bump around 0.55 (we’re measuring in micrometers here, by the way, which is off by a factor of 1000 from the nanometers we were using before) is what makes us see leaves as “green.”

Historically, it hasn’t mattered much that we’re trapped outside while the wild leaf-reflectance party rages in infrared-land. Leaves aren’t terribly hard to find; there’s no great reason the ancestral human would have needed keen leaf-detection skills. Looking for greenish stuff worked fine; it didn’t really matter whether the stuff was “photosynthetically active,” either.

Ah, but from space…

There are plenty of reasons to care about distinguishing active vs. inactive vegetation. If you’re looking at the wheat crop in Russia, you might be interested to know whether it’s flourishing or failing. (This was the motivation for the US’ first multi-spectral spy satellite.) If you’re monitoring deforestation, you need to tell the difference between old-growth trees and the fast-growing shrubs that appear when they’re chopped down. Our eyes may not have evolved to see that great IR mountain, but it sure would come in handy these days.

The Landsat-1 MSS had one sensor whose detection range fell right in the middle of the near-infrared leaf-reflectance hump. There was a problem, though: plants are far from the only things that reflect lots of infrared light. Clouds, for example, reflect pretty much everything, as do built-up urban areas. You can’t just use IR reflectance as a proxy for vegetation.

What if we use the fact that plants are green? Could we look for things that reflect both IR and green light? No, but we’re getting closer. Remember that our problem is telling plants apart from features that reflect EVERYTHING. Clouds and buildings reflect green light as well as near-IR.

Instead, the key is looking at what plants don’t reflect. I mentioned before that red light is chlorophyll’s favorite food. Plants love this stuff; they love it so much that they absorb nearly every scrap of it that hits them, reflecting hardly any. You’ll notice that about the only time you see leaves turn red is in autumn, when photosynthesis has clocked out and gone home for the season. Under normal operating conditions, every red photon that gets reflected away is a missed opportunity to make sweet, sweet Plant Chow™.

So what if we look at the difference between infrared-spectrum reflectance and red-spectrum reflectance? Bingo! Not only does active vegetation light up, but we’re no longer getting totally swamped by clouds!

You don’t actually want to use just the difference, because the whole scale of values across a satellite image can be different depending on what angle you’re looking from. If the sun is bouncing directly into your detector, everything is going to look a lot brighter than if it’s reflecting at a steep angle. This isn’t so much of a problem these days (we have a ton of corrective software that digests each image before it lands on the desk of flunkies such as me) but in the early days this was a serious consideration.

The best minds of the field figured out that you can correct for the different scales by “normalizing” the two values; that is, dividing their difference by their sum. This gets us a number that always lies between -1 and 1: the “Normalized Difference Vegetation Index,” or NDVI.

NDVI = (IR - Red) / (IR + Red)

In practice, you hardly ever see the low end of NDVI (-1 to 0); mostly we’re concerned with the upper half of the scale. NDVI is a great way to capture the “plant-ness” of a given pixel with a single number. There have been many attempts to supplant it (some with rather pompous names, see the “Enhanced Vegetation Index” for example) but NDVI remains the king.
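
To make that concrete, here’s a minimal sketch of the computation in Python (assuming NumPy; red and nir here are hypothetical arrays of per-pixel reflectance for the red and near-IR bands of a scene):

    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index, always between -1 and 1."""
        nir = nir.astype(np.float64)
        red = red.astype(np.float64)
        denom = nir + red
        out = np.zeros_like(denom)
        # Guard against dividing by zero where both bands read zero (shadow, nodata).
        np.divide(nir - red, denom, out=out, where=denom != 0)
        return out

Dense, healthy vegetation typically lands somewhere around 0.6-0.9 on this scale, bare soil closer to 0.1-0.2, and open water usually comes out negative.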

But this is hardly the best we can do. Humans can see a mixture of three colors at once; what a waste to only use one! What if we create a whole RGB image, but assign the colors to something other than the standard red, green, and blue? 

False is true

This is what it means when you see a “false-color composite.” Take a look at the two images below: the one on the left is “true-color;” that is, the reds/greens/blues you see on your screen correspond to the red/green/blue reflectance picked up by the satellite (in this case, the European Space Agency’s Sentinel-2). On the right, the colors are coming from different parts of the EM spectrum. The amount of red in each pixel corresponds to the reflectance in the shortwave infrared (SWIR); the green, to near-infrared (NIR); the blue, to red. It’s a strange mapping to wrap your head around, but the result is actually quite familiar: plants show up as a vibrant green, far easier to pick out than they are in the “true-color” image.
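
Building one of these composites is not fancy at all. Here’s a rough sketch in Python (assuming NumPy and Matplotlib; swir, nir, and red are hypothetical reflectance arrays already resampled onto the same grid), with a simple percentile stretch so the result isn’t too murky:

    import numpy as np
    import matplotlib.pyplot as plt

    def stretch(band, lo_pct=2, hi_pct=98):
        """Rescale a band to 0-1 with a percentile stretch, clipping outliers."""
        lo, hi = np.percentile(band, [lo_pct, hi_pct])
        return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

    # Channel mapping: red <- shortwave infrared, green <- near infrared, blue <- red.
    composite = np.dstack([stretch(swir), stretch(nir), stretch(red)])

    plt.imshow(composite)
    plt.axis("off")
    plt.show()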

There’s a whole suite of these combinations, leveraging the many different parts of the spectrum to help us humans pick out features of interest with our limited, limited eyes. There’s a combination for detecting burns, for analyzing geology, for studying coastal waters, and more. Landsat-1 had four detection bands; modern satellites have a dozen at least. Although we mortals can only examine them three at a time, the same is not true for our silicon servants. Computer algorithms are capable of using every single band at once for land cover classification or feature detection. Take that, bees!
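
As a toy illustration of that last point (a sketch only, not any particular operational pipeline): stack every band into one array and an off-the-shelf classifier, say scikit-learn’s random forest, will happily chew on all of them at once, given some hand-labeled training pixels. The bands and labels arrays below are hypothetical.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # bands: hypothetical array of shape (n_bands, height, width)
    # labels: (height, width) array of training classes; -1 means "unlabeled"
    n_bands, height, width = bands.shape
    pixels = bands.reshape(n_bands, -1).T   # one row per pixel, one column per band
    flat_labels = labels.ravel()
    train = flat_labels >= 0

    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(pixels[train], flat_labels[train])

    # Predict a class for every pixel, then fold the result back into image shape.
    classified = clf.predict(pixels).reshape(height, width)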

A digression: When I was first exposed to astronomy, back in elementary school, the textbooks and documentaries would show beautiful pictures of galaxies and star-clusters, vibrant in purples and yellows and blues. Later on I was shocked to learn that these were “false-color” images—space doesn’t really look like that at all! If you were an astronaut, floating in the void and looking out through the visor of your helmet, the stars would just be bland white, the same as they look to us dirtside.

This felt like a betrayal. The teachers had lied to me; shown me glimpses of a rainbow universe that did not exist. They might as well have given us “artist’s renderings” instead, I thought, either that or just show us the real stars—blank white against a dark canvas.

But now—having seen the flower with eyes I do not have—I am more sympathetic. It is not deception or mere whimsy to display the galaxies as they appear in the x-ray or gamma spectrum, in whirls of magenta around a glowing core. This is not mere wishful thinking; it is what the universe would look like, to one whose perception was unchained from the tyranny of rods and cones, who could look across it like a many-eyed seraph and see its hyperspectral glory full and undivided.

Inferential distance

When people ask what I do for a living, I usually tell them that I “look at pictures of plants, taken from space.” That’s true. But in doing so, I feel like I’ve caught a glimpse through a crack in the doors of perception, a brief flash of all the mysteries and wonders of the cosmos that are hidden from the embodied creature. Maybe my mind is addled from staring too long at the pixels on a computer screen. But when I think about the tiny hypercube of reality that we can perceive, and all that lies outside it, I feel the same sense of awe that strikes me when I look up at the full moon on a clear night and think, “We have been there!”

So in future, when someone asks me what I do, I will point them here. The name of this blog borders on the mystical: telesthesia is the ancient word for “remote sensing,” not in the modern sense but with dowsing rods and crystal balls. To those ancient seers, Landsat would seem as mystical as the eyes of Argus Panoptes—and we have magic beyond their farthest imaginings.

Open your eyes, and look.

