Thursday, July 2, 2020

Science: About Infrared Photography -- How to develop Next Eyes



Would you like to capture light unseen and make photos like these?

A basic principle is that the color of any object depends on the slice of light from the sun's full spectrum that is reflected off the object and reaches our eyes.

Another principle is that our eyes and brains trick us.


A healthy leaf, or a Granny Smith apple, reflects green light, which is what we see.  However, it also reflects light we can't see, in the near infrared.  The sun radiates a much larger range of energies, colors, light--roughly ten times the range of wavelengths our eyes can see.  Our eyes evolved to see near the peak of solar radiance.  As shown in the plot below, most of the remaining energy is in the infrared, at wavelengths longer than 700 nanometers (nm).  The plot's vertical axis shows the relative amount of solar energy at each color.  The peak equals one where we perceive yellow/green, and the curve drops quickly through the infrared, moving right along the horizontal axis.
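If you'd like to recreate the shape of that curve yourself, here's a minimal Python sketch.  It treats the sun as a roughly 5800 K blackbody (a standard physics approximation, and my own illustration rather than the plot above):

```python
# A minimal sketch: approximate the sun as a ~5800 K blackbody and print its
# relative spectral radiance, normalized so the peak (near the yellow/green
# we perceive, around 500 nm) equals one.
import numpy as np

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength_nm, temp_k=5800.0):
    """Spectral radiance of a blackbody, in arbitrary units."""
    lam = wavelength_nm * 1e-9
    return (2 * H * C**2 / lam**5) / (np.exp(H * C / (lam * KB * temp_k)) - 1)

wavelengths = np.arange(300, 2501, 100)   # UV-A through the near infrared, nm
radiance = planck(wavelengths)
radiance /= radiance.max()                # peak normalized to 1

for wl, r in zip(wavelengths, radiance):
    print(f"{wl:5d} nm  {'#' * int(40 * r)}  {r:.2f}")
```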



Our eyes evolved to sense a narrow portion of the solar light.  One hypothesis is that the range is narrow so as not to overwhelm the smaller processing (brain) capabilities of the early species that first evolved sight organs.  Human eyes, much like those of other primates, evolved a sensitivity that splits light into three colors (trichromatic vision).  Mixing the three light spectra (colors) lets us perceive a myriad of combined colors.  The cells in our eyes that perceive color are called cone cells.  Science denotes them as S, L and M.  These are "Short wavelength" cone cells that sense bluish light, "Long wavelength" cone cells that sense reddish light, and "Mid wavelength" cone cells that sense greenish light.  The "ish" is because the pigments have significant color overlap with each other.
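To make that mixing concrete, here's a little Python sketch.  The Gaussian cone curves and peak wavelengths below are rough stand-ins I chose for illustration (real cone sensitivities are lopsided, measured curves, not Gaussians), but they show how a single light spectrum becomes the three numbers the brain mixes into color:

```python
# A hedged sketch of trichromatic sensing: approximate the S, M and L cone
# sensitivities as Gaussians (peak wavelengths are ballpark values, not
# measured data) and integrate a light spectrum against each one.
import numpy as np

wavelengths = np.arange(380, 701)  # visible range, nm

def cone(peak_nm, width_nm=45.0):
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

S, M, L = cone(445), cone(540), cone(565)   # note how much M and L overlap

# A "green leaf" spectrum: reflects strongly around 550 nm.
leaf = np.exp(-0.5 * ((wavelengths - 550) / 30.0) ** 2)

for name, sens in [("S", S), ("M", M), ("L", L)]:
    response = np.sum(sens * leaf)   # crude integral over 1 nm steps
    print(f"{name}-cone response: {response:8.1f}")
```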



If we take a normal color photo, say of the Golden Gate Bridge, and decompose it into the separate L, M, S cone cell images, we would see that the red, green and blue images emphasize different aspects of the full color scene.  For example, in the L (red) image, the red paint on the bridge would be bright, while the blue water and sky would be darker.  The M (green) image would show green brush, grass and trees as bright, and the sky or bridge as a little darker.  The S (blue) image would show the water and sky as bright, and other elements as darker.  Our brain mixes these together into our individual perception of the full color image.
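You can try this decomposition yourself in a few lines of Python.  The filename is a hypothetical placeholder; use any photo you have handy:

```python
# Split a color photo into its three channel images, each one roughly playing
# the role of one cone-cell image.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("golden_gate.jpg").convert("RGB"))  # placeholder file

for i, name in enumerate(["L_red", "M_green", "S_blue"]):
    channel = img[:, :, i]                        # one channel as grayscale brightness
    Image.fromarray(channel).save(f"{name}.png")  # bright where that color dominates
    print(name, "mean brightness:", channel.mean().round(1))
```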

I say perception because, even today, scientists have scant data on how much the numbers of each cone cell type vary between individuals.  For example, in one small survey of studies that used various methods to count cone cells and determine the ratio of cone cell types, the mean estimates of L- to M-cone ratios were highly variable between observers, ranging from 0.33:1 to 10:1 in one case.  

That is to say, your green may be more red than mine.  Color, it may be, is a matter of individual eyes and brains. Your brain tricks you into thinking that your individual balance of red cone cells means the same thing as what others perceive as red. This is just the start of the tricks.

One thing is becoming clear:  humans have very little sensitivity to light beyond the visible, including the infrared.



The overlap of the cone cell colors is shown in the plot above, with the M (green) and L (red) curves overlapping the most--and when that overlap is extreme, it is implicated in color blindness.  Some eyes have cone cell pigment differences due to genetics.  That variation stops, however, at the infrared.  No one on record has sight much past 800 nm. In my own informal tests I have shown some sensitivity at about 860 nm, using a bright diode laser pointer (not unlike the lights in night-security cameras), and zero sensitivity at 940 nm using a bright laser.  (Yes, I know, never ever stare into a laser beam!)

But what if we evolved, or enhanced, human vision to sense the infrared?  How would the world look?


A fourth color in our perception, one sensitive to the infrared, would alter our view of the world significantly.  Bumble bees have eyes sensitive in the ultraviolet, and they see UV-induced fluorescence from the bright dye molecules that produce the colors of flowers.  Some have tried to mimic this effect through UV photography of flowers.  Here is an example (not mine):




UV photo of a succulent cluster (credit: Craig Burrows).

Can you imagine what flowers and the world would look like if we had a fourth color of cone cell sensitive to the UV?  Again, what if we had a fourth or fifth color added from the near infrared?  X-ray?  Cosmic particle color? Thermal?


Not only would the world appear bizarrely different, we would evolve incredible capabilities to do all kinds of functions with our Next Eyes.

For today, we can at least appreciate a part of this.  We can use our digital cameras to mimic what it would be like to have a fourth color.  First, let's see how a digital camera senses and creates color photos.



The detectors are pixels. And while on screen it looks like each pixel has all the colors, it doesn't.  The screen and the camera use alternating colors in an array (like a checkerboard) to mix the colors we capture and display. The individual pixels are so small that we can't see them individually.  For example, your smartphone camera probably has individual pixel detectors as small as 1.5 microns across.  The smallest thing an unaided eye with perfect vision can resolve is about a hundred times bigger than that, roughly 100 microns. That is about the width of the finest hair on your body, unless you are a hairy guy.  (It's still larger than what you see on a bald guy, though.)
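If you're curious how that checkerboard works, here's a toy sketch.  It assumes the common RGGB Bayer layout (individual cameras vary), and it's my illustration rather than any specific manufacturer's design:

```python
# Build an RGGB Bayer mosaic from a tiny full-color image, the way a sensor
# actually samples only one color at each pixel site.
import numpy as np

rgb = np.random.rand(4, 4, 3)          # stand-in for a tiny full-color scene
mosaic = np.zeros((4, 4))
pattern = np.empty((4, 4), dtype="<U1")

for y in range(4):
    for x in range(4):
        if y % 2 == 0 and x % 2 == 0:
            c, ch = 0, "R"             # red pixel
        elif y % 2 == 1 and x % 2 == 1:
            c, ch = 2, "B"             # blue pixel
        else:
            c, ch = 1, "G"             # twice as many greens, mimicking the eye's green bias
        mosaic[y, x] = rgb[y, x, c]
        pattern[y, x] = ch

print(pattern)   # the checkerboard: R G / G B, repeated across the array
```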

What's more interesting: even as camera manufacturers try to mimic the eye's color response in the visible, they must handicap the camera's full abilities to match our limited eyes.

The above plot has a bit of detail. Bear with me.  The blue line shows the response of the little blue filter over a given camera pixel detector; the red and green lines likewise. What you may notice is that, unlike the eye plot, this plot shows the colors extending beyond the visible.  The blue filter line, for example, goes as short (remember the S(hort) cone?) as 300 nm, which is well into what we call the UV-A region of light.  The red filter line extends well beyond visibly red light, past 700 nm, and continues all the way out into the near infrared.  Even more interestingly, the blue and the green, which seem to cut off in the visible (at ~520 nm for the blue, ~670 nm for the green), each rise again in the infrared (around 720 nm for both).

Camera manufacturers place another, larger filter over the entire array of pixel detectors.  This filter, in purple above, reduces the overall sensitivity of the red, green and blue filters and pixel detectors.  It narrows the colors so they only see "visible" light, like our eyes do.  

The camera is purposely handicapped. This filter is often called a hot mirror or an infrared cut filter.  Its job is to block the UV-A and near infrared from reaching the camera sensor.
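The arithmetic of the handicap is simple: each channel's effective response is the product of its color filter and the hot mirror.  The numbers in this sketch are made-up placeholders, not measured curves, but they show the idea:

```python
# Effective channel response = (color-filter transmission) x (hot-mirror
# transmission), sampled at a few wavelengths.  All values are invented
# placeholders for illustration.
import numpy as np

wavelengths = np.array([400, 550, 700, 850, 1000])        # nm
red_filter  = np.array([0.05, 0.10, 0.90, 0.85, 0.80])    # red pixels stay open into the NIR
hot_mirror  = np.array([0.90, 0.95, 0.20, 0.01, 0.00])    # the cut filter slams shut past ~700 nm

print("with hot mirror:   ", red_filter * hot_mirror)     # NIR response crushed to ~zero
print("hot mirror removed:", red_filter)                  # NIR sensitivity restored
```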



But for some cameras it is actually not terribly difficult to remove this filter.  I found out about this in late 1999, when I had a happy accident at work after dropping a camera.  If you read that little story, you will see that it does take a little technical skill to do this properly.  

Ever since breaking my first digital camera, I've "broken" well over a hundred of them (between personal cameras and work cameras).  There are shops that will do this for you in clean-room environments (just search for "infrared camera conversion"). I won't endorse any particular shop here; do your own research if you're interested.

Once your camera is free to see its full potential, you will unlock more of your own Next Eyes potential.  Here's the Golden Gate Bridge photographed with a variety of other filters that pass the full spectrum, or selective portions of it, from UV-A through visible to near infrared (shortened to UVNIR from here out).




One thing you noticed, I'm sure, is that the leaves of the bushes are bright compared with how we see them.  Here, look at what I mean:

If you want to understand why this is, please visit my post on the science of leaves and infrared. It geeks out on leaf biology with cross sections of leaf anatomy and more.  Very cool stuff.

I have personally invested money and time into developing new optical filters that selectively ratio (like our eye colors) the infrared light that leaves reflect.  With one filter I can photograph leaves that are purely white (devoid of color preference, all channels reflecting in equal ratio); with others, leaves that are pinkish, bluish, greenish, orangish, or purplish.  Yes, these colors are generated by selectively allowing and disallowing portions of the light spectrum reflected from the leaves, filtering only the desired colors through to the camera, which captures a purposeful ratio of the light spectrum.
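As a toy model of that "purposeful ratio": the signal each channel records is the leaf's reflectance times the filter's transmission times the sensor response, summed over wavelength.  Every curve below is an invented placeholder (my actual filter designs aren't published here), but the arithmetic is the point:

```python
# Toy model: channel signal = sum over wavelength of
# reflectance x filter transmission x sensor response.
# All curves are invented placeholders for illustration.
import numpy as np

wavelengths = np.arange(400, 1001, 50)                     # nm
leaf_reflectance = np.where(wavelengths >= 700, 0.9, 0.2)  # leaves reflect NIR strongly
sensor = np.ones_like(wavelengths, dtype=float)            # idealized flat sensor response

def channel_signal(filter_curve):
    return float(np.sum(leaf_reflectance * filter_curve * sensor))

broad_nir  = np.where(wavelengths >= 700, 1.0, 0.0)        # one hypothetical filter
narrow_nir = np.where(wavelengths >= 850, 1.0, 0.0)        # a more selective one

print("broad NIR filter signal: ", channel_signal(broad_nir))
print("narrow NIR filter signal:", channel_signal(narrow_nir))
# Different filters, different ratios between channels: different leaf colors.
```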

I also do this kind of thing for the sky, and it makes beautiful sunsets and sunrises beyond the limits of what could be seen by the unaided eye.



I have written about this twilight filtering in several blogs (see here, and here, and here). The last one was written while waiting in line for Covid antibody testing. 

Just so we're clear, this is not some Instagram-like software filter that takes your phone camera photos and turns them into pieces of art.  This is physics, not Photoshop!


Different properties of materials give vastly different outcomes in a visible photo and in an infrared photo.  There are a lot of details in the photo comparison above. You'll have to spend some time comparing an item in each top/bottom photo to see the physical light differences.  

I set it up as a little home experiment to show how different materials, glass, cloth, lights even--all have different transmission / reflection / emission in the visible and in the infrared.  I do this geeky stuff for work, so play along with me and pretend you really find it fascinating. I'll smile and feel validated.
Grapes are one of my favorite examples.  I stumbled into this effect at Le Vignoble in Napa Valley back in 2007, long ago.  (Yes, I'm probably older than you, and probably not wiser.)



The butterfly is another favorite for the infrared.  


If you find this Next Eye art fascinating, then let me persuade you of something:  I think it is a new art form for the 21st century.  I call it hyperceptivism.  I wrote about it here and here, among several other posts (such as this, and this).  The last one ties the fascinating species of trees to this new art form and the sociology it presents.

The form has also made me contemplative about death (see here and here) and about life (see here, and here, and here, and here).  Some of these posts are melancholy, some are happy-go-lucky.  We can be complex too.