SpringerOpen Newsletter

Receive periodic news and updates relating to SpringerOpen.

Open Access Research

Outdoor shadow detection by combining tricolor attenuation and intensity

Jiandong Tian*, Linlin Zhu and Yandong Tang

Author Affiliations

State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Nanta Road, Shenyang, China

For all author emails, please log on.

EURASIP Journal on Advances in Signal Processing 2012, 2012:116  doi:10.1186/1687-6180-2012-116


The electronic version of this article is the complete one and can be found online at: http://asp.eurasipjournals.com/content/2012/1/116


Received:17 January 2012
Accepted:28 May 2012
Published:28 May 2012

© 2012 Tian et al; licensee Springer.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Shadow detection is of broad interest in computer vision. In this article, a new shadow detection method for single color images in outdoor scenes is proposed. Shadows attenuate pixel intensity, and the degrees of attenuation are different in the three RGB color channels. Previously, we proposed the Tricolor Attenuation Model (TAM) that describes the attenuation relationship between shadows and their non-shadow backgrounds in the three color channels. TAM can provide strong information on shadow detection; however, our previous study needs a rough segmentation as the pre-processing step and requires four thresholds. These shortcomings can be overcome by adding intensity information. This article addresses the problem of how to combine TAM and intensity and meanwhile to obtain a threshold for shadow segmentation. Simple and complicated shadow images are used to test the proposed method. The experimental results and comparisons validate its effectiveness.

Keywords:
shadow detection; tricolor attenuation model (TAM); intensity image

1 Introduction

Shadow detection is highly desirable for a wide range of applications in computer vision, pattern recognition, and image processing. As shown in Figures 1 and 2, shadows can be divided into two types: cast shadow and attached shadow (also called self-shadow). The attached shadow is the part of an object that is not illuminated by direct light; the cast shadow is the dark area projected by an object on the background. Cast shadow can be further divided into umbra and penumbra regions. Umbra is the part of a cast shadow where the direct light is completely blocked by its object; penumbra is the part of a cast shadow where direct light is partially blocked.

thumbnailFigure 1. One result of the method proposed in this article. Left: original image. Middle: the red ellipse denotes the attached shadow; the green one denotes the cast shadow; the blue one denotes the umbra; the yellow one denotes the penumbra. Right: the result of our method, in which all kinds of shadows are detected.

thumbnailFigure 2. Shadow will occur when direct sunlight is occluded.

As shown in Figure 2, the illumination on non-shadow region is daylight (direct sunlight and diffused skylight); that on penumbra is skylight and part of sunlight, while on umbra is only skylight. Since skylight is a part of daylight, the pixel intensity in shadow is lower than that in non-shadow background, i.e., there exists intensity attenuation. The light source and the intensity of shadow region and non-shadow region are listed in Table 1.

Table 1. Light sources and intensity of shadow and non-shadow region

Denoting <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M1','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M1">View MathML</a> as a shadow pixel value vector and <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M2','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M2">View MathML</a> as the pixel value vector of the corresponding non-shadow background, the relationship between <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M3','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M3">View MathML</a> and <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M4','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M4">View MathML</a> is

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M5','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M5">View MathML</a>

(1)

where<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M6','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M6">View MathML</a> denotes the tricolor attenuation vector. The relationship among <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M7','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M7">View MathML</a> is called Tricolor Attenuation Model (TAM) [1] which can be represented by:

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M8','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M8">View MathML</a>

(2)

where m=1.31 and n=1.19.

TAM describes the attenuation relationship between shadows and their non-shadow backgrounds in the three color channels, and this relationship can be used for shadow detection. TAM-based subtraction image (hereafter TAM image) is obtained by subtracting the minimum attenuation channel from the maximum attenuation one. Based on the TAM image, the multi-step shadow detection algorithm is previously proposed [1]. Its main steps includes

1. Segmenting the original image and calculating TAM in each segmented sub-region.

2. Simply using the mean value over each sub-region to binarizate the TAM images and to obtain initial shadows.

3. Simply using the mean values in three color channels, in each sub-region, as the thresholds to verify and refine the initial shadows (to obtain detailed and more accurate results).

Generally, the method [1] is an automatic one and can work on single still images, even with complex scenes. However, there are two unsolved problems in the method.

1. It needs segmentation. Although the method is not sensitive to little segmentation error, it is not an easy work to get a satisfying segmentation result (shadows and their non-shadow backgrounds are segmented into same regions). For some images, serious segmentation errors may lead to bad shadow detection results.

2. It uses four simple mean values as thresholds in the two key steps (steps 2 and 3). One threshold is used for initial shadow segmentation and three thresholds are used to obtain accuracy boundaries and details. The thresholds sometimes have noticeable influence on the final results, i.e., simple thresholds are insufficient for some images.

In this article, we try to solve the above-mentioned two problems; we combine TAM and intensity information to avoid the segmentation step and derive only one threshold to substitute previous four simple ones. The new proposed method in this article is simpler and meanwhile can achieve similar or better results.

2 Previous studies

Shadows, a common phenomenon in most outdoor scenes, take extensive effects in computer vision and pattern recognition. It brings many difficulties to computer vision applications such as segmentation, tracking, retrieval, recognition. On the other hand, shadows in an image also provide useful information about the scene: they provide cues about the location of the sun as well as the shape and the geometry of the occluder. Overall, dealing with shadows is an important and challenging task in computer vision and pattern recognition.

The most straightforward feature of shadow is that it darkens the surface it casts on, and this feature is adopted by some methods directly [2,3] or indirectly [4,5]. Many methods assume that shadow pixels mainly change luminance but less chrominance. For example, in [6], the authors assume hue and saturation components change within a certain limit in HSV space. In [7], multiple cues including color, luminance, and texture are applied to detect moving shadows. Another commonly used feature for shadow detection is intrinsic feature. Intrinsic features locate shadows by comparing the intrinsic image and the original one. Salvador et al. [8] employed c1c2c3 feature to derive intrinsic images. Finlayson et al. [9] developed a method to generate a 1D illumination invariant image by finding a special direction in a 2D chromaticity feature space. Tian and Tang [10] proposed a method to generate illumination invariant image by using the linearity between shadow and non-shadow paired regions. The intrinsic image is useful for shadow detection. However, it cannot totally eliminate the illumination effect and thus is often used in the simple scenes.

Most shadow detection methods focus on detecting moving shadows. Moving shadow detection methods can employ the frame difference technique to locate moving objects and their moving shadows. Then, the problem of shadow detection becomes differentiating the moving objects and the moving shadows. Prati et al. [11] provided a good review for shadow detection methods in video sequences. To adapt to background changes, learning approaches have proven useful. Huang and Chen [12] employed Gaussian mixture model to learn the color features and to model the background appearance variations under cast shadows. Brisson and Zaccarin [13] presented an unsupervised kernel-based approach to estimate the cast shadow direction. Siala et al. [14] described a moving shadow detection algorithm by training the manually segmented shadow regions. Joshi and Papanikolopoulos [15] used SVM and co-training technique to detect shadows. Compared with static shadow detection methods, moving shadow detection methods can employ the powerful background subtraction techniques. Therefore, the majority of moving shadow detection methods cannot be directly used to detect static shadows in single images.

As detecting moving shadows has made great progress, detecting it from a single image remains a difficult problem. Wu and Tang [16] used the Bayesian approach to extract shadows from a single image, but it requires user's intervention as the input. Panagopoulos et al. [17] used the Fisher distribution to model shadows, but this approach needs 3D geometry information. As a special application of shadow detection in single image, literatures [3,18,19] focus on detecting shadows in the remote sensing images. Lalonde et al. [20] proposed a learning approach to train a decision tree classifier on a set of shadow sensitive features to detect ground shadows in consumer-grade photographs. Guo et al. [21] proposed a learning-based shadow detection method by using paired regions (shadow and non-shadow) for a single image. Learning methods can achieve good performance if the parameters are trained well. However, they will fail when the test image is vastly different from the images in the training set [20]. In the previous study [1], we proposed the TAM-based shadow detection algorithm. The algorithm is automatic and simple but it depends more or less upon priori segmentation and the four simply chosen thresholds. The improved algorithm described in Section 3 can address these two problems.

3 Method description

To obtain TAM image, we first calculate the mean values in three color channels <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M9','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M9">View MathML</a> of original image F by

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M10','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M10">View MathML</a>

(3)

where <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M11','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M11">View MathML</a> denotes the kth pixel of image F in R channel, and M is the number of pixels.

In Figure 3, tricolor attenuation order for the first original image is <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M12','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M12">View MathML</a> and for the second one is <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M13','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M13">View MathML</a>, therefore the corresponding TAM images are formed by <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M14','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M14">View MathML</a> and <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M15','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M15">View MathML</a>, respectively. Shadows are dark in TAM images, which provide strong information for shadow detection. However, sometimes the TAM-based channel subtraction procedure may cause not only shadows, but also some other objects become dark. Just take the second TAM image of Figure 3 as an example, the TAM image is formed by subtracting the blue channel from the red channel, not only the shadows but also some blue objects (e.g., the flowerpot) become dark. The flowerpot may be falsely classified as a shadow after binarization. TAM assumes a shadow and its non-shadow background share an identical reflectance property, that's why our previous study [1] requires a priori segmentation to ensure shadows are detected on uniform reflectance regions. Additionally, the subtraction will smooth pixel values because of the high correlation among R, G, and B components [22]. The smoothing may cause details missing in detection results. The first image of Figure 4 demonstrates that there are false detections and details missing if we only employ TAM (without segmentation) to detect shadows.

thumbnailFigure 3. Shadows are dark in TAM images. Left: the original images. Right: TAM images.

thumbnailFigure 4. Comparison of shadow detection results between only using TAM image and after combining intensity image. Left: Shadow detection result only using TAM image. Right: shadow detection result after combining intensity information.

As mentioned above, though TAM can provide information for shadow detection, it may suffer from false detection and details missing problems. These problems caused by luminance information are lost during the channel-subtraction procedure. Fortunately, the lost information in the TAM image can be compensated by intensity (grayscale) image. The problem then becomes how to combine intensity image with TAM image. In the following, we will give a method to address it meanwhile to derive a threshold for shadow segmentation.

Combined image Z is obtained by combing TAM image X with intensity image Y as follows:

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M16','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M16">View MathML</a>

(4)

where α is the weight coefficient. We define the objective function as:

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M17','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M17">View MathML</a>

(5)

where S(T) denotes the shadow determined by a threshold T.

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M18','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M18">View MathML</a>

(6)

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M19','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M19">View MathML</a> denotes the mean value of shadow regions in Z; <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M20','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M20">View MathML</a> denotes the mean value of the non-shadow regions in Z. The subtraction of them can measure the difference between the shadow regions and the non-shadow regions (the subtraction is always positive, which will be proved in Appendix). The difference between them is weighted by a quadratic function G(T), defined as follows, to avoid too high or too low T.

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M21','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M21">View MathML</a>

(7)

in which u is the mean value of image Z. The best T should make the mean value of shadow regions and that of non-shadow regions have the biggest weighted difference.

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M22','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M22">View MathML</a>

(8)

Given T, S can be determined by using Equation (6).

Denoting <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M23','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M23">View MathML</a> and <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M24','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M24">View MathML</a>, the weight α is defined as:

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M25','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M25">View MathML</a>

(9)

κ and η measure the contributions of X and Y on getting the threshold. The exponent of <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M26','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M26">View MathML</a> heightens the difference of the contributions and make sure α > 1 for the following two reasons.

(1) The range of variation of X is lower than that of Y (as stated above, the TAM-based subtraction will smooth pixel values).

(2) Shadow detection relies mainly upon X; Y is mainly used to obtain precise result (see Figure 4 and refer to [1]).

α is initialized with <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M27','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M27">View MathML</a>. Repeating (4)-(9) to update T and α until <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M28','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M28">View MathML</a>.

4 Experimental results

Figure 5 shows the result comparisons between the algorithms proposed in this article and those state-of-the-art in [1,20,21], respectively. The original image of the first row is a simple image of a person's shadow, half on the grass and half on the road. By using the method presented in this article we achieve quite similar result as that by [1] and better than both by [20,21]. The original image of the second row in Figure 5 is an aerial image with complex content. Most attached and cast shadows can be detected by the proposed method. The weakness is that some trees and some solar panels in the bottom left of the image are incorrectly classified as shadows compared with the result given in [1]. The result by the study [21] misses some shadows of the house and the tree in the left part of the image while the result by the study [20] misses most shadows. The original image of the third row in Figure 5 is a forest image with complicate texture that was taken from 100 m high, with some small sparse cast shadows. They can be detected by the algorithm proposed in this article. Especially, the black words marking date and time on the top of the image are not falsely classified as shadows, which may be inevitable by intensity-based shadow detection methods. The result by the study [1] is over detected and has false alarms; the result by the study [21] misses many shadows and that of [20] misses some in the bottom of the image. The original image of the fourth row in Figure 5 contains two cast shadows on the ground and one attached shadow on the leg. All of them can be detected by the algorithm proposed in this article. The result by the study [1] misses some details; the result by the study [21] misclassifies the brighter region at the up-right corner as a non-shadow one; the result by the study [20] misses most shadow of the tree. Compared with method [1], the method proposed here does not need segmentation and requires only one threshold. Compared with [20,21], the proposed method does not need training. These advantages may make the proposed method easier to use.

thumbnailFigure 5. Comparisons with state-of-the-art methods [1,20,21]. First column: original images; second column: the shadows detected by the algorithm proposed in this article; third column: the shadows detected by the algorithm proposed in [1]; fourth column: the shadows detected by the algorithm proposed in [21]; fifth column: the shadows detected by the algorithm proposed in [20].

More results of the method are listed in Figure 6. These images contain various shadows: attached shadows and cast shadows on ground, road, grass, etc. The results show that shadows can be detected correctly.

thumbnailFigure 6. More results of the method. Left: original images; right: shadow results.

Because shadow detection usually is a preprocessing step of practical applications, fast computing is important. Time consuming of the four methods is tabulated in Table 2. From the comparisons, we can find that our method is faster than the other there methods. The experiment was conducted on a computer with Intel (R) Core™ 2 Q8400 2.66 GHz CPU, 2 GB RAM memory. The programs were compiled with Matlab R2010b.

Table 2. Comparisons of time consuming of the four methods

5 Conclusion

In this article, we propose a shadow detection method based on combining TAM image and intensity image. In previous study [1], TAM information and intensity information are used separately. Shadow detection only relies on TAM information, and it needs a rough segmentation preprocessing step; intensity information is simply used to improve the boundary accuracy and details of the detected shadows. The effective combination of them in this article allows that the new method is free from segmentation. Furthermore, the new method only requires one threshold to detect shadows and handle the details simultaneously. These advantages make the proposed method easier to use and more robust in applications.

Competing interests

The authors declare that they have no competing interests.

Appendix

Given an image g R2. Denoting <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M29','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M29">View MathML</a> as the mean value of pixels whose values smaller than T, <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M30','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M30">View MathML</a> as the mean value of pixels whose values lager than T, and <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M31','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M31">View MathML</a> as the mean value of the whole image, we have

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M32','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M32">View MathML</a>

(1a)

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M33','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M33">View MathML</a>

(2a)

Proof:

Denoting ni as the number of pixels at level i, we have

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M34','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M34">View MathML</a>

(10)

For ∀ T ∈ R, we have

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M35','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M35">View MathML</a>

(11)

Further we have

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M36','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M36">View MathML</a>

(12)

Multiplying <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M37','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M37">View MathML</a> on both sides of it, we have

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M38','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M38">View MathML</a>

(13)

After adding the same item on both sides, we have

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M39','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M39">View MathML</a>

(14)

Further we have

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M40','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M40">View MathML</a>

(15)

The form can be rewritten as

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M41','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M41">View MathML</a>

(16)

That is

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M42','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M42">View MathML</a>

(17)

When <a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M43','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M43">View MathML</a>, we have

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M44','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M44">View MathML</a>

(18)

Similarly, we can obtain

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M45','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M45">View MathML</a>

(19)

and

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M46','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M46">View MathML</a>

(20)

From Equations (18) and (20) we obtain

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M47','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M47">View MathML</a>

(21)

According to Equation (21), we have

<a onClick="popup('http://asp.eurasipjournals.com/content/2012/1/116/mathml/M48','MathML',630,470);return false;" target="_blank" href="http://asp.eurasipjournals.com/content/2012/1/116/mathml/M48">View MathML</a>

(22)

   □

Acknowledgements

This study was supported by the National Natural Science Foundation of China (Grant No. 61102116).

References

  1. J Tian, J Sun, Y Tang, Tricolor attenuation model for shadow detection. IEEE Trans Image Process 18(10), 2355–2363 (2009). PubMed Abstract | Publisher Full Text OpenURL

  2. K Barnard, G Finlayson, Shadow identification using colour ratios. Proceedings of the IS&T/SID Eighth Color Imaging Conference: Color Science, Systems and Applications (Scottsdale, Arizona, USA, 2000) 8, pp. 97–101 PubMed Abstract OpenURL

  3. K Chung, Y Lin, Y Huang, Efficient shadow detection of color aerial images based on successive thresholding scheme. IEEE Trans Geosci Remote Sensing 47(2), 671–682 (2009)

  4. W Zhang, X Fang, X Yang, Q Wu, Moving cast shadows detection using ratio edge. IEEE Trans Multim 9(6), 1202–1214 (2007)

  5. A Leone, C Distante, Shadow detection for moving objects based on texture analysis. Pattern Recognit 40(4), 1222–1233 (2007). Publisher Full Text OpenURL

  6. R Cucchiara, C Grana, M Piccardi, A Prati, Detecting moving objects, ghosts, and shadows in video streams. IEEE Trans PAMI 25(10), 1337–1342 (2003). Publisher Full Text OpenURL

  7. M Yang, K Lo, C Chiang, W Tai, Moving cast shadow detection by exploiting multiple cues. IET Image Process 2(2), 95–104 (2007)

  8. E Salvador, A Cavallaro, T Ebrahimi, Cast shadow segmentation using invariant color features. Comput Vis Image Understand 95(2), 238–259 (2004). Publisher Full Text OpenURL

  9. G Finlayson, MS Drew, C Lu, Entropy minimization for shadow removal. Int J Comput Vis 85(1), 35–57 (2009). Publisher Full Text OpenURL

  10. J Tian, Y Tang, Linearity of each channel pixel values from a surface in and out of shadows and its applications. IEEE Conference on Computer Vision and Pattern Recognition (Springs, Colorado, USA, 2011), pp. 985–992

  11. A Prati, R Cucchiara, I Mikic, MM Trivedi, Analysis and detection of shadows in video streams: a comparative evaluation. IEEE Conference on Computer Vision and Pattern Recognition (Kauai, Hawaii, USA, 2001) 2, pp. 571–576

  12. J Huang, C Chen, Moving cast shadow detection using physics-based features. IEEE Conference on Computer Vision and Pattern Recognition (Miami, Florida, USA, 2009), pp. 2310–2317

  13. N Brisson, A Zaccarin, Learning and removing cast shadows through a multi-distribution approach. IEEE Trans PAMI 29(7), 1133–1146 (2007)

  14. K Siala, M Chakchouk, O Besbes, F Chaieb, Moving shadow detection with support vector domain description in the color ratios space. International Conference on Pattern Recognition (Cambridge, UK, 2004) 4, pp. 384–387

  15. A Joshi, N Papanikolopoulos, Learning to detect moving shadows in dynamic environments. IEEE Trans PAMI 30(11), 2055–2063 (2008)

  16. T Wu, C Tang, A Bayesian approach for shadow extraction from a single image. IEEE International Conference on Computer Vision 1, Beijing, China 480–487 (2005)

  17. A Panagopoulos, D Samaras, N Paragios, Robust shadow and illumination estimation using a mixture model. IEEE Conference on Computer Vision and Pattern Recognition (Miami, Florida, USA, 2009), pp. 651–658

  18. A Makarau, R Richter, R Müller, P Reinartz, Adaptive shadow detection using a blackbody radiator model. IEEE Trans Geosci Remote Sensing 49(6), 2049–2059 (2011)

  19. J Yao, ZM Zhang, Hierarchical shadow detection for color aerial images. Comput Vis Image Understand 102(1), 60–69 (2006). Publisher Full Text OpenURL

  20. J Lalonde, A Efros, S Narasimhan, Detecting ground shadows in outdoor consumer photographs. European Conference on Computer Vision (Crete, Greece, 2010) 2, pp. 322–335

  21. R Guo, Q Dai, D Hoiem, Single-image shadow detection and removal using paired regions. IEEE Conference on Computer Vision and Pattern Recognition (Springs, Colorado, USA, 2011), pp. 2033–2040

  22. E Littmann, H Ritter, Adaptive color segmentation-a comparison of neural and statistical methods. IEEE Trans Neural Netw 8(1), 175–185 (1997). PubMed Abstract | Publisher Full Text OpenURL