Multispectral System Modeling

In this example we model a system that uses a multispectral scene to generate an RGB output image. Multispectral scenes are useful if your system uses both visible and NIR illumination, if you are modeling the impact of shifting the cutoff wavelength of an IR cut filter, or if you want to model color very accurately.

In the following example we use a data set of 31 images, one per spectral channel, covering 400nm to 700nm in 10nm increments. Each of the 31 spectral-channel images is 3072×2048 pixels with 16 bits of grey-scale intensity levels. The input data set was generated by Bruce Lindbloom and can be downloaded from his website at http://www.brucelindbloom.com/.

Some of the key system parameters are shown here. The full set of parameters can be found in the Imager system file.

Source:

  • For all of the examples below we are using a blackbody source and varying its temperature to model either Illuminant A (Incandescent / Tungsten) or Illuminant D65 (Noon Daylight).

Object:

  • Described above
  • Object spectrum is changed to match the spectral band of each of the 31 input images. The example Imager system file shows the object spectrum for the image spectral band that covers the 400nm to 410nm band.
  • For subsequent images the object spectrum is changed to match the spectral band.

Lens:

  • F/2
  • 12mm focal length
  • Diffraction limited

Sensor:

  • 1920 x 1440
  • 3 micron pixels
  • Exposure time set so that the full well is filled when capturing the full spectrum
  • Full well: 8k e-
  • Bit depth: 8
  • Read noise: 5 RMS e-
  • Dark current: 5 e-/second
  • Quantum efficiency and color filter array transmission are shown below

Processing:

  • Demosaicing
  • Color correction/auto white balance are turned off

The overall process involves running each of the 31 spectral channels through Imager, calculating the number of electrons generated, summing the 31 spectrally distinct images, and quantizing the result to generate the final image.

For example, simulating three of the channels results in the following images:

Simulated images from three of the spectral channels

To accurately model multi-spectral systems, we want to count the electrons generated for each of the 31 channels, sum them, and then quantize down to our final bit depth. In order to count electrons, we are going to temporarily set the sensor bit depth so that the number of digital counts matches the full well; in this case, we set the bit depth to 13 bits (8191 counts).
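As a quick check, the temporary bit depth follows directly from the full well capacity given in the sensor parameters above:

```python
import math

full_well = 8000  # e-, full well from the sensor parameters above

# Smallest bit depth whose digital count range covers the full well
bit_depth = math.ceil(math.log2(full_well + 1))
max_count = 2**bit_depth - 1

print(bit_depth, max_count)  # 13 bits, 8191 counts
```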

We also need to consider how the noise is modeled. To represent the shot noise, read noise, and dark current correctly, we use the full-spectrum exposure time, divide the dark current applied to each spectral image by the number of input channels (31 in this case), and divide the read noise by the square root of the number of input channels.
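The reasoning behind the two different scale factors is that dark current electrons add linearly across the summed channels, while the independent per-channel read-noise contributions add in quadrature. A minimal sanity check of the scaling:

```python
import numpy as np

n_channels = 31
read_noise = 5.0    # RMS e-, full-system value
dark_current = 5.0  # e-/s, full-system value

# Per-channel values applied to each spectral simulation
rn_per_channel = read_noise / np.sqrt(n_channels)
dc_per_channel = dark_current / n_channels

# Summing the 31 channels recovers the full-system values:
# dark electrons add linearly, read-noise variances add in quadrature
assert np.isclose(n_channels * dc_per_channel, dark_current)
assert np.isclose(np.sqrt(n_channels * rn_per_channel**2), read_noise)
```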

The spectral transfer functions for this example are shown below. For this example the source spectrum is from direct sunlight or a tungsten source, the lens transmission is set to the common plastic PMMA, the filter transmission is set to cut out the NIR light at wavelengths beyond 670nm, the color filter arrays are set to the common dye developed by Fuji, and the sensor QE is set to a typical value for CMOS image sensors.

Imager Spectrum

For each channel we now save high dynamic range data. To create the final image, we sum all of the individual channel simulations, divide by the maximum sensor count, which is (2^13-1) in this case, multiply by the maximum number of counts we want in the output image, which is (2^8-1) in this example, and save the result as an 8-bit image. These final steps ensure that we have the correct quantization noise. The output is then the RGB image that results from the multispectral scene without any color correction.
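The summation and requantization step can be sketched as follows, assuming each per-channel simulation is a floating-point array in 13-bit counts (the random arrays here are hypothetical stand-ins for the simulated channels):

```python
import numpy as np

n_channels = 31
# Hypothetical stand-ins for the 31 per-channel HDR simulations (13-bit counts)
channels = [np.random.uniform(0, (2**13 - 1) / n_channels, (4, 4, 3))
            for _ in range(n_channels)]

summed = np.sum(channels, axis=0)           # total electrons as 13-bit counts
scaled = summed / (2**13 - 1) * (2**8 - 1)  # rescale to the 8-bit output range
final = np.clip(np.round(scaled), 0, 255).astype(np.uint8)
```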

As an example, here is what the image looks like when using illuminant A with our full camera system model:

Simulation using Illuminant A – Incandescent / Tungsten

Here is an example with illuminant D65, again with our full camera system model:

Simulation using Illuminant D65 – Noon Daylight

The following two images show the results if we set our IR cut filter to wavelengths of 650nm and 700nm. With the IR cut filter set to the longer wavelength, the image has much improved SNR in low light, but less color saturation. Overall you see more of a reddish hue in the 700nm IR cut filter image, since the red color filter array transmits much more in the near infrared than the blue or green color filter arrays.

Simulation with a 650nm IR cut filter

 

Simulation with a 700nm IR cut filter

You can find all of the other system simulation settings in the Imager system file used in this example here, and the Python script used with the Imager 2016.05 release is shown below.

import cv2
import numpy as np
import json
import os
import subprocess

# Point to ImagerAgent and to the Input Imager System File
ImagerAgent = 'C:\\Program Files\\FiveFocal\\ImagerAgent'
InputJsonFile = 'multispectral.json'

# Set spectral range and step size
spectrum = np.linspace(400,700,31)

# Scale the noise so the final simulation noise is accurate
root = json.load(open(InputJsonFile, 'rt'))
read_noise = root["system"]["sensor"]["read_noise"]["value"]
dark_current = root["system"]["sensor"]["dark_current"]["value"]
root["system"]["sensor"]["read_noise"]["value"] = read_noise / np.sqrt(len(spectrum))
root["system"]["sensor"]["dark_current"]["value"] = dark_current / len(spectrum)

for i in range(len(spectrum)):
    InputImage = ('SpectralDeltaE%.0fnm.tif' % (spectrum[i]))

    # convert the 16-bit tif images into hdr format for importing into Imager
    im = cv2.imread(InputImage, cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH).astype('float32')
    im /= 65535.0  # normalize the 16-bit input to [0, 1]
    cv2.imwrite('temp.hdr', im)
    
    # Set the spectral image and its spectral reflectance
    path = os.path.realpath('temp.hdr') 
    path = path.replace('\\','/')
    root["system"]["object"]["scene"]["value"] = path
    root["system"]["object"]["spectrum"]["value"]["blue"]["value"]["x"] = [spectrum[i]/1000, spectrum[i]/1000 + 0.001, spectrum[i]/1000 + 0.009, spectrum[i]/1000 + 0.01] # increment the spectrum

    # Simulate the image
    # output file name must match the HDR file read back in below
    process = subprocess.Popen([ImagerAgent, "--extended-image", '-', 'out.hdr'], stdin=subprocess.PIPE)
    process.communicate(json.dumps(root).encode('utf-8'))
    return_code = process.wait()
  
    # read in simulated image
    im = cv2.imread('out.hdr', cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH).astype('float32')
    
    if i == 0:
        Final_Image = np.zeros(im.shape)    
    
    # Add the last spectral simulation to the previous simulations
    Final_Image = Final_Image + im

# Quantize and save: divide by the 13-bit sensor count range, rescale to 8 bits
Final_Image = Final_Image / (2**13 - 1) * (2**8 - 1)
Final_Image = np.clip(np.round(Final_Image), 0, 255).astype('uint8')
cv2.imwrite('final_image.png', Final_Image)