FAQ

What is Imager?

Imager is software that allows you to virtually prototype camera systems. Imager models a camera system, allowing you to see the influence of imaging conditions (low light, camera motion, etc.) and component specifications (pixel size, sensor type, lens quality) on the images that a camera makes.

Why would I use Imager?

Imager helps you understand the impact of varying component specifications on the image quality of a camera system. See how Imager can help you.

Who can use Imager?

Imager is for anyone working with cameras and optics.

Camera Engineers: Simulate and understand the interaction between the components of an imaging system.

Robotics and Automation: Test how your system will perceive the world before incurring the expense of building a camera system. Validate your requirements for field of view, depth of field, resolution, etc., so that you purchase the right camera components the first time.

Computer Vision and Algorithm Developers: Understand how your image processing algorithms perform on images with real-world imperfections due to changing light conditions, lens aberrations, and object motion.

Educators and Students: Develop an intuition for the mathematics describing camera performance by visualizing the influence of system component properties on camera performance, or examine in detail the specific effects of one component (e.g. lens focal length, pixel size, sensor sensitivity).

Product Marketing and Management: Translate your visual performance requirements for a camera system into numbers and component configurations that an engineer can use to develop a system.

How is Imager different from Zemax and other lens design software?

The purposes of Imager and Zemax are different. Imager is used to model systems; Zemax is used to design lenses. In many ways you can consider them to be complementary products, since Imager can be used to model how a lens will perform within a system.

Imager has several unique features when compared to Zemax:

  1. Illumination Source and Object: Imager’s radiometric model calculates the number of photons that reach the sensor and the number of electrons generated. This determines how an image appears under varying lighting conditions and with different sensor dynamic ranges. Imager also models motion, so you can see how an image will be affected by hand shake or by capturing moving objects. In Zemax, there is no way to set the source spectrum, the illuminance, or the object motion, for example.
  2. Parametric Lens Model: Although we generate wavefronts as if they came from a real lens, our model lets you vary lens parameters independently. This allows you to determine the appropriate effective focal length and F/#, and to find acceptable limits for MTF, distortion, and relative illumination, for example. In Zemax, to change lens parameters such as lateral color, relative illumination, or distortion, you have to design a new lens, which makes it difficult to find the right performance thresholds and specifications for visual image quality or algorithm performance.
  3. Sensor: Imager includes a detailed sensor model with not only the typical pixel geometry (pixel size, pitch, number of pixels), but also parameters that control the noise, the on- and off-axis pixel coupling (how well light is transferred to the photodiode), and spectrally dependent quantities like quantum efficiency.
  4. Algorithms: We include basic image processing algorithms, and allow the user to save raw data. The basic algorithms include demosaicing, auto white balance, and lens shading correction.
  5. Camera Metrics: Because we have a system model, we can calculate camera metrics like field of view, resolvable features, depth of field, dynamic range, and SNR.
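
As a rough illustration of the kinds of metrics listed in item 5, the sketch below applies standard first-order formulas for field of view, dynamic range, and SNR. These are textbook relations with hypothetical example values, not Imager’s actual computations.

```python
import math

# Illustrative first-order camera-metric formulas. These are standard
# textbook relations, not Imager's internal model; all numeric values
# below are hypothetical examples.

def field_of_view_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angular field of view of a rectilinear lens."""
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

def dynamic_range_db(full_well_electrons, read_noise_electrons):
    """Sensor dynamic range: full-well capacity over the read-noise floor."""
    return 20 * math.log10(full_well_electrons / read_noise_electrons)

def snr_db(signal_electrons, read_noise_electrons, dark_electrons=0.0):
    """SNR combining photon shot noise, read noise, and dark noise (electrons)."""
    noise = math.sqrt(signal_electrons + read_noise_electrons**2 + dark_electrons)
    return 20 * math.log10(signal_electrons / noise)

# Example: a 5.76 mm wide sensor behind a 4 mm lens, with a pixel that
# holds 10,000 electrons at full well and has 2.5 electrons of read noise.
print(round(field_of_view_deg(5.76, 4.0), 1))  # horizontal FOV in degrees
print(round(dynamic_range_db(10000, 2.5), 1))  # dynamic range in dB
print(round(snr_db(10000, 2.5), 1))            # near-full-well SNR in dB
```

Note how the SNR at full well is well below the dynamic range: shot noise grows with the signal, so the shot-noise floor, not the read noise, limits SNR in bright conditions.
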

How do I get help and learn to use Imager?

Three ways: take advantage of Imager’s detailed built-in help and explanations, view the tutorial videos on our website, or contact us.

Do you offer educational discounts?

Yes. Please contact us for more information. We also offer workforce education: let us know if you would like short courses at your company on using Imager to improve your products and services.

Can I use my own image for an object scene?

Yes. You can load your own RGB or monochrome image into Imager to use as the scene.

I want to use a certain sensor. How do I find its characteristics to enter into Imager?

The basic sensor parameters (pixel count, size, CFA spectra) can be found in a manufacturer’s product brief by searching for that particular sensor model. More advanced parameters, e.g. full well capacity, read noise, dark noise, and microlens shift, can be found in the manufacturer’s detailed data sheets or in third-party reviews of that sensor. The search terms “color camera sensor review” or “mono camera sensor review” lead to a wealth of detailed performance information on a wide range of sensors.
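
Once you have the datasheet numbers, a few derived quantities follow from standard first-order relations. The sketch below is a hedged illustration: the parameter values are made-up examples, not from any real sensor, and the formulas are generic, not Imager’s internals.

```python
# Sketch: turning common datasheet numbers into derived sensor quantities.
# All values are hypothetical examples, and the relations are standard
# first-order formulas, not Imager's actual model.

datasheet = {
    "pixel_pitch_um": 3.45,       # from the product brief
    "full_well_e": 10500,         # full well capacity, electrons
    "read_noise_e": 2.2,          # read noise, electrons RMS
    "dark_current_e_per_s": 1.5,  # dark current, electrons per second
    "qe_at_530nm": 0.65,          # quantum efficiency at 530 nm
}

exposure_s = 0.01  # a 10 ms exposure, chosen for illustration

# Dark signal accumulated over the exposure
dark_e = datasheet["dark_current_e_per_s"] * exposure_s

# Photons arriving at one pixel become photoelectrons via quantum efficiency
photons_per_pixel = 5000
signal_e = datasheet["qe_at_530nm"] * photons_per_pixel

print(f"dark electrons per exposure: {dark_e:.3f}")
print(f"signal electrons: {signal_e:.0f}")
```

At short exposures like this, the dark signal is negligible next to the photon signal, which is why dark-current specs matter mostly for long-exposure or high-temperature use.
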