
Macbeth ColorChecker Finder!

Camera parameter tuning and the development of color processing algorithms require verification over hundreds of images. Locating the coordinates of calibration targets such as the Macbeth ColorChecker can be labor intensive. How many hours of graduate research assistants’ and interns’ time do you think are spent locating ColorCheckers?

Joking aside, we recently developed a software tool called “CCFind” to automatically detect the Macbeth ColorChecker inside an image. This is our attempt to help the camera manufacturers, researchers, and photographers we come in contact with daily. It is available for free from this link.


Is Demosaicking Dead?

At the time I developed the adaptive homogeneity-directed (AHD) demosaicking algorithm, demosaicking was the next exciting problem in the imaging world. With a timely publication and the help of Dave Coffin (author of DCRaw) and Paul J. Lee (contributor to DCRaw), AHD became one of the most widely adopted demosaicking methods. Nearly ten years later, sensor resolution has exceeded the resolution that the optics can deliver. Once a hot research topic, demosaicking receives far less attention today.

“So, is demosaicking dead?”

Somehow people take me for a spokesperson for demosaicking, and I am asked this question often. Very often. My answer has been “No, demosaicking is not dead.” In fact, there are many treasures yet to be uncovered.

First of all, it is true that demosaicking *research* has limited impact on camera design today. Poor handling of demosaicking will certainly degrade image quality, so demosaicking certainly qualifies as an “important,” or at least “relevant,” problem. But the newest demosaicking algorithms do not yield significantly better results than AHD in most scenarios. In other words, the existing methods are “good enough” for practical purposes.

But what many overlook is that other problems in the camera pipeline are intimately connected with demosaicking. For example, most camera manufacturers treat image enhancement as either a pre-demosaicking or a post-demosaicking step. This is a consequence of the fact that various imaging algorithms (such as denoising, deblurring, and white balance) are developed separately from demosaicking. Because of this, you should be suspicious of image processing or computer vision papers that attribute unintended outcomes to demosaicking. (This happens quite frequently.)

Let’s be practical and admit that sensors based on color filter arrays (CFA) will be around for a long time. The reality is that image enhancement and computer vision techniques can squeeze out a lot of extra mileage by coupling their methods to demosaicking. Paying attention to the CFA does pay dividends. Here are some examples (a minimal demosaicking sketch follows the list).

  • Joint demosaicking and denoising outperforms demosaicking and denoising applied separately.  In fact, you can couple your favorite denoising algorithm with our joint demosaicking and denoising framework.
  • Our universal demosaicking algorithm can interpolate any CFA pattern.
  • Single-shot high dynamic range imaging with conventional camera hardware is now a possibility, thanks to the properties of the CFA.
  • Binning artifacts can be explained using the properties of demosaicking.
  • Color cross-talk artifacts can be explained using the properties of demosaicking.
  • A new CFA pattern requires only 10 add operations per *full* pixel reconstruction, with unmatched image quality. This is at least a 10x speedup over any comparable demosaicking method.
  • Most imaging experts are unaware that demosaicking influences white balance, that white balance influences demosaicking, and why this is the case.
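
To make the mechanics concrete, here is a minimal sketch of plain bilinear demosaicking of a Bayer mosaic, written in Python. This is none of the methods listed above, and the RGGB layout is an assumption; the point is only to show how the two missing color samples at each pixel are filled in from same-colored neighbors.

    import numpy as np
    from scipy.signal import convolve2d

    def bilinear_demosaic(raw):
        """Bilinear demosaicking of an RGGB Bayer mosaic (illustrative sketch)."""
        h, w = raw.shape
        # Masks marking which sensor site holds which color (RGGB layout assumed).
        r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
        b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
        g_mask = ~(r_mask | b_mask)
        # Green sits on a quincunx grid; red/blue on a twice-coarser square grid.
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
        out = np.zeros((h, w, 3))
        for c, (mask, k) in enumerate([(r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)]):
            plane = np.where(mask, raw, 0.0)    # zero out the other colors
            out[..., c] = convolve2d(plane, k, mode="same", boundary="symm")
        return out

Methods like AHD improve on this by interpolating along edges rather than across them; the sketch merely shows the baseline that everything above builds upon.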

In conclusion, demosaicking is not dead.  Demosaicking is not high on the priority list of the manufacturers (nor of the ISSL).  But demosaicking is the key to enabling new capabilities in cameras that manufacturers care about.  By parsing through our publication list, you will realize that research at ISSL thrives on our *understanding* of demosaicking, even when the development of new demosaicking method is not part of the ISSL’s goal.


Towards A “Self-Tuning” Camera

Truth be told, camera manufacturers spend just as much time tuning parameters as they do designing the camera itself. Even then, professional photographers still manage to take more pleasing images than amateurs because they understand what controls they have over the camera. The role of parameters in a camera cannot be overstated. So why can’t parameter optimization be automated?

It turns out that there is a practical problem in trying to answer this question. Leaving aside the aesthetics of photography, we turn our attention to image quality assessment.

Subjective quality assessment of digital images has many tangible benefits for the design of camera systems. “Noise” and “artifacts” are best described as the aspects of an image that appear most unnatural to the human eye. An objective visual quality assessment (QA) metric aimed at unsupervised prediction of perceived quality expedites the advancement of camera systems by replacing subjective analysis with an automatic one. A “self-tuning” camera must pick the set of parameters that maximizes the QA score.
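
The tuning loop itself is trivial. Here is a minimal sketch (a toy, not our method): sweep a single denoising parameter and keep the setting with the highest QA score. It uses SSIM, a full-reference metric, so it presupposes access to a reference image; the catch with that assumption is exactly the subject of the rest of this post.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.metrics import structural_similarity as ssim

    def self_tune(noisy, reference, sigmas=np.linspace(0.1, 3.0, 30)):
        """Return the denoising strength that maximizes the QA score (toy)."""
        best_sigma, best_score = None, -np.inf
        for sigma in sigmas:
            candidate = gaussian_filter(noisy, sigma)   # candidate "camera output"
            score = ssim(candidate, reference,
                         data_range=reference.max() - reference.min())
            if score > best_score:
                best_sigma, best_score = sigma, score
        return best_sigma, best_score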

But think of the challenges. Today, there are two categories of QA. A full-reference assessment (FR-QA) compares the perceived similarity of a given image to an ideal reference image, which must therefore be made available. A no-reference assessment (NR-QA) does away with the ideal reference image; like human vision, it aims to quantify image quality simply by looking at the image itself.

But both the FR-QA and NR-QA setups are problematic for self-tuning cameras. In photography, the notion of an “ideal reference image” is unambiguously defined: it is what the photographer saw. Owing to various imperfections of the image acquisition hardware (noise, blur, inaccurate color, etc.), however, sensor data is far from this ideal. Clearly, the ideal reference exists only in theory, not in practice. Since our camera hardware cannot produce the ideal reference image, FR-QA is out of the question. NR-QA can be done with cameras, but it does not do what we want it to do, which is to describe the perceptual similarity between the camera output image and what the photographer saw.

So is there no hope for a self-tuning camera? We will be presenting our solution in the following paper:

Cheng, W., Hirakawa, K. (2012): Corrupted Reference Image Quality Assessment. In: Image Processing (ICIP), 2012 19th IEEE International Conference on, 2012 (in review).

Color Cross-Talk

In a previous blog entry, I discussed the relationship between sensor resolution and color. In this blog entry, we focus on the analysis of cross-talk.

Unfortunately, there are inaccurate claims made by imaging experts about the relationship between the CFA and cross-talk, based on speculative reasoning. Since cross-talk affects adjacent pixels, conventional wisdom says that the CFA should be designed so that each color in the CFA is spread out evenly (e.g., on lattices); the idea is that every pixel should then have a similar neighborhood structure. This turns out to be wrong! There is little to be gained from such analysis.

Here is the correct way to understand color cross-talk. Photon/electron leakage is essentially a sharing of a pixel’s value with its neighbors. This is nothing more than the familiar “low-pass filter” (a spatial blurring). When a low-pass filter is applied to CFA sensor data, it will surely introduce ambiguity between the red/green/blue color components. But there is more to this story. The low-pass filter is best understood in the Fourier domain: it attenuates the spatially high-pass components of the CFA sensor data.
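
A small numerical sketch of that claim (the leakage fraction and patch color below are arbitrary assumptions): apply a tiny blur to the *mosaicked* data of a uniform reddish patch and watch the red/blue separation shrink.

    import numpy as np

    h = w = 64
    r, g, b = 0.9, 0.2, 0.1                    # a saturated reddish patch

    # RGGB Bayer sampling: one color component survives at each pixel.
    mosaic = np.empty((h, w))
    mosaic[0::2, 0::2] = r
    mosaic[0::2, 1::2] = g
    mosaic[1::2, 0::2] = g
    mosaic[1::2, 1::2] = b

    # Cross-talk model: each pixel leaks a fraction alpha of its value to
    # its four neighbors (a small low-pass filter on the CFA data).
    alpha = 0.15
    nbrs = (np.roll(mosaic, 1, 0) + np.roll(mosaic, -1, 0) +
            np.roll(mosaic, 1, 1) + np.roll(mosaic, -1, 1)) / 4.0
    xtalk = (1 - alpha) * mosaic + alpha * nbrs

    print("R sites:", mosaic[0::2, 0::2].mean(), "->", xtalk[0::2, 0::2].mean())
    print("B sites:", mosaic[1::2, 1::2].mean(), "->", xtalk[1::2, 1::2].mean())
    # R (0.9) falls toward its all-green neighbors (0.2) and B (0.1) rises
    # toward them: the red/blue separation shrinks, i.e., color desaturates.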

A rigorous Fourier analysis of CFA sensor data will be covered in another blog entry. For now, let’s just say that CFA sampling “modulates” chrominance (color) components to spatially high-pass regions. The degree to which the chrominance is attenuated is determined by the frequency response of the low-pass filter (the point spread function of the spatial blurring). The higher the modulation frequency, the stronger the attenuation by cross-talk. So cross-talk is tolerable for chrominance components with low modulation frequencies, but less so for chrominance components with high modulation frequencies. In other words, cross-talk depends on the chrominance color.
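
To make the “modulation” concrete without the full analysis, here is a numerical check (a sketch; the color values are arbitrary). For a uniform RGGB mosaic, the 2-D DFT concentrates the luma at DC and the chrominance on carriers at the highest spatial frequencies, which is precisely where a cross-talk blur attenuates the most.

    import numpy as np

    h = w = 64
    r, g, b = 0.8, 0.4, 0.2

    mosaic = np.empty((h, w))
    mosaic[0::2, 0::2] = r                 # RGGB layout
    mosaic[0::2, 1::2] = g
    mosaic[1::2, 0::2] = g
    mosaic[1::2, 1::2] = b

    F = np.fft.fft2(mosaic) / (h * w)      # normalized 2-D DFT
    print("DC (luma)       :", F[0, 0].real, "=", (r + 2*g + b) / 4)
    print("(pi, 0) carrier :", F[h//2, 0].real, "=", (r - b) / 4)
    print("(0, pi) carrier :", F[0, w//2].real, "=", (r - b) / 4)
    print("(pi, pi) carrier:", F[h//2, w//2].real, "=", (r - 2*g + b) / 4)
    # A cross-talk blur multiplies each carrier by its frequency response,
    # which is smallest at these high frequencies, so it is exactly the
    # chrominance terms (r-b and r-2g+b) that get attenuated.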

We’ve done an extensive analysis of cross-talk in the paper referenced below. We’ve also shown how to correct the desaturated colors. To do this right, one must also pay close attention to how the demosaicking algorithm processes the high-pass components of the CFA data (because demosaicking implicitly reconstructs color from the high-pass CFA sensor data).

Hirakawa, K. (2008): Cross-talk explained. In: Image Processing (ICIP), 2008 15th IEEE International Conference on, pp. 677-680, 2008.

Megapixels and Color

Consumers love pixels… lots and lots of pixels. To squeeze more pixels onto the same sensor surface, the pixel geometry is made smaller. Imagine subdividing a 1 inch by 1 inch area into 10 million parts instead of 5 million. There are many unintended consequences of this miniaturization of pixels. In this blog entry, I’d like to discuss color and pixel geometry.

When two adjacent pixels are close together, there is a problem of pixel leakage. We call this problem “cross-talk.” There are two predominant types of cross-talk: optical diffraction and minority carrier diffusion. Optical diffraction refers to light leakage: a photon headed for one pixel strays and is captured by the photodetector of another pixel. Minority carrier diffusion refers to electron leakage: a photon strikes the photodetector and generates electrons, and these electrons stray and get captured by a neighboring pixel. In either case, each pixel value is “shared” with its neighbors. In terms of image processing, the main difference between the two is that optical diffraction yields independent Poisson random variables at each pixel (because each photon is a single particle), whereas minority carrier diffusion causes neighboring pixels to have joint Poisson statistics because they share the same photon.
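
One toy way to render that statistical distinction in code (the flux, leak fraction, and the charge-splitting model are all illustrative assumptions, not device physics):

    import numpy as np

    rng = np.random.default_rng(0)
    trials, flux, p = 100_000, 50.0, 0.2   # illustrative numbers only

    # Optical diffraction: a photon strays *before* detection, so pixels A
    # and B collect two separate photon streams -> independent Poisson counts.
    a_opt = rng.poisson(flux * (1 - p), trials)
    b_opt = rng.poisson(flux * p, trials)

    # Carrier diffusion (toy model): every photon is absorbed near A, and
    # its charge is split between A and B, so both counts ride on the same
    # underlying Poisson draw -> jointly distributed, positively correlated.
    photons = rng.poisson(flux, trials)
    a_dif, b_dif = (1 - p) * photons, p * photons

    print("optical   corr(A,B): %.3f" % np.corrcoef(a_opt, b_opt)[0, 1])  # ~0
    print("diffusion corr(A,B): %.3f" % np.corrcoef(a_dif, b_dif)[0, 1])  # 1.0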

We can avoid cross-talk by using larger pixel geometries: neighboring pixels are farther apart, so they don’t talk to each other. But this means you lose sensor resolution, since fewer pixels fit on the sensor surface. Microlenses also help by refocusing the photons towards the middle of the photodetectors. We can also introduce barriers between pixels to prevent photons and electrons from straying. Backside-illuminated sensors have the advantage that the photodetectors are closer to the sensor surface; since photons do not have to travel as far, this lowers the risk of optical diffraction.

What does cross-talk have to do with color? Color image sensors use color filter arrays (CFA), where each pixel is coated with a different color filter. For example, one pixel might record a red value while its adjacent pixel records a green value. When there is cross-talk, a portion of the red value is recorded as green, and a portion of the green value is recorded as red. In other words, there is an ambiguity between the different color components. The end result is that the color is desaturated.

In another blog entry, I discuss how to analyze color cross-talk, and how to correct it.

Hirakawa, K. (2008): Cross-talk explained. In: Image Processing (ICIP), 2008 15th IEEE International Conference on, pp. 677-680, 2008.

Welcome to ISSL Blog

Welcome to the Intelligent Signal Systems Laboratory (ISSL) Blog. ISSL has unique capabilities and expertise in image processing and is recognized as a leader in camera processing pipeline design. This blog is a collection of technical commentaries on signal and image processing, statistics, and color science. Our publications cover the tedious details, but this blog is intended for tech-savvy readers who want a high-level understanding of image and camera processing. There is a lot of information, myth, and confusion among photographers, camera manufacturers’ advertisements, and even imaging experts about how the camera pipeline works. This blog will hopefully shed light on some of these questions.

I will also add that I am working towards a book on the camera processing pipeline. More details will be made available in the near future.

In any case, I hope you enjoy the ISSL blog! Please feel free to comment on the topics and provide additional insights, feedback, or counterarguments. I welcome suggestions for other blog topics to cover. Please browse around the ISSL website and explore!

Keigo Hirakawa, PI