
Shadow Removal Method for Real-Time Extraction of Moving Objects

Shinji Fukui1, Nozomi Yamamoto2, Yuji Iwahori2, and Robert J. Woodham3

1 Faculty of Education, Aichi University of Education, Hirosawa 1, Igaya-cho, Kariya 448-8542, Japan
2 Faculty of Engineering, Chubu University, Matsumoto-cho 1200, Kasugai 487-8501, Japan
3 Department of Computer Science, University of British Columbia, Vancouver, B.C., Canada V6T 1Z4

Abstract. This paper proposes a new method for extracting moving objects in high definition video sequences. The proposed method treats shadows as background and is robust to noise and intensity changes, because it uses the a* and b* components of CIELab color space and extracts moving objects by background subtraction between the estimated background image and the observed image. The computational cost is greatly increased by converting RGB color space to CIELab color space and by processing high definition video. The method addresses this problem by using the GPU and by decreasing the data transferred from GPU to CPU. Results are demonstrated with experiments on real data.

Keywords: Shadow Removal, Robustness to Illumination Changes, Estimating Background Image, Background Subtraction.

1 Introduction

Real-time extraction of moving objects from video sequences is an important topic for various applications of computer vision [1]-[3]. Applications include counting cars in traffic, observing traffic patterns, automatic detection of trespassers, video data compression, and analysis of non-rigid motion.

When extracting moving objects, one problem is that shadows cast by objects are themselves detected as moving objects. This is one of the major problems to be solved. Methods for shadow detection [4][5] and for object extraction using chromatic information [6] have been proposed. When a shadow detection step is added to a moving-object extraction method, it becomes difficult to run the whole process in real time. In contrast, object extraction methods based on chromatic information, to which RGB color space is easily converted, can run the whole process in real time. However, simple background subtraction using chromatic information has only a limited ability to cope with intensity changes.


This paper proposes a new method for extracting moving objects. To handle shadows, the method uses CIELab color space, to which RGB color space is converted. Only the a* and b* components are used, because both carry chromatic information and remain approximately unchanged even when a shadow is cast. We improve the method of [7] to extract moving objects using the a* and b* channels. The proposed method is robust to noise and to nonlinear intensity changes, such as illumination changes or the automatic gain control of the camera. Although converting RGB color space to CIELab color space incurs a heavy processing cost, the proposed method solves this problem by using the GPU and runs in real time. Results are demonstrated with experiments on real video sequences.

2 Robust Background Subtraction for Illumination Changes [7]

Let the RGB color values at (x, y) in the original background image BG be denoted by BG(x, y) = (BG^R(x, y), BG^G(x, y), BG^B(x, y)). Let those in the observed image at time t = 1, 2, . . . be denoted by I_t(x, y) = (I_t^R(x, y), I_t^G(x, y), I_t^B(x, y)). The region with no moving objects at time t is defined as the background region A_t.

When the intensity of the whole image changes, for example owing to illumination changes or the automatic gain control of the camera, I_t(x, y) in A_t differs from BG(x, y). In this case, moving regions cannot be extracted by simple background subtraction. In the method [7], the background image BG_t is estimated, and moving regions are then extracted from the subtraction of BG_t and I_t.

2.1 Estimation of Present Background Image

Let (x, y) and (x', y') be different points in the image. Suppose that, when the overall intensity of the whole image changes, the value of I_t^R(x, y) is equal to that of I_t^R(x', y') if the value of BG^R(x, y) is equal to that of BG^R(x', y'). Under this assumption, a conversion table from BG^R(x, y) to I_t^R(x, y) can be obtained from the relation between BG^R(x, y) and I_t^R(x, y) in A_t. Conversion tables for the G and B color channels are obtained in the same way.

Let the candidate for the background region be A'_t. The procedure to obtain A'_t is as follows. First, the absolute values of the differences between I_t and I_{t-1} are computed. Second, the region with large differences is selected by thresholding. Third, A'_t is calculated as the logical AND of the complement of the selected region and the last estimated background region A_{t-1}. Here, A_0 is the entire image, which is necessary to estimate BG_1.
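To make this step concrete, the following is a minimal NumPy sketch of the candidate-region computation as read above; the function name, the mask representation, and the frame-difference threshold are illustrative assumptions, not from the paper.

```python
import numpy as np

def candidate_background(I_t, I_prev, A_prev, diff_thresh=10):
    """Sketch of the candidate background region A'_t (Sec. 2.1).

    I_t, I_prev : (H, W, 3) uint8 frames at times t and t-1.
    A_prev      : (H, W) bool mask, last estimated background region A_{t-1}.
    diff_thresh : hypothetical threshold on the inter-frame difference.
    """
    # Absolute inter-frame difference, summed over the color channels.
    diff = np.abs(I_t.astype(np.int16) - I_prev.astype(np.int16)).sum(axis=2)
    # The high-difference region is the selected (moving) region; its
    # complement contains the pixels that stayed still between frames.
    still = diff < diff_thresh
    # Logical AND with the previously estimated background region.
    return still & A_prev
```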

The conversion table of each channel from BG(x, y) to I_t(x, y) is built from the RGB values in the obtained region A'_t. These tables convert the RGB color values of BG into the estimated background image BG_t. To make the conversion table for the R channel, a histogram such as the one shown in Figure 1 is produced from each pair (I_t^R(x, y), BG^R(x, y)) in the region A'_t.


Fig. 1. How to Make Histogram

The conversion table for the R channel is obtained from the histogram as shown in Figure 2. As the histogram shows at the value BG^R = 26 in Figure 2, pixels with the same value of BG^R may not all have the same value of I_t^R because of noise. Therefore, the conversion table uses the median of the I_t^R values of the set of pixels, where the set consists of all pixels in A'_t that have the same value of BG(x, y). Since the median value of I_t^R is 11 in this example, the pair (26, 11) is added to the conversion table. Linear interpolation is then applied to the BG^R values that do not appear in the conversion table. The conversion tables convert the RGB color values of BG into the RGB color values the background would take in I_t, even within the moving region. As a result, the background image BG_t, free of moving objects, is estimated.
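The table construction just described can be sketched in a few lines of NumPy. This is a minimal sketch assuming 8-bit channel values; the paper gives no code, and the names are illustrative.

```python
import numpy as np

def build_conversion_table(bg_channel, obs_channel, region_mask):
    """Sketch of one channel's conversion table (Sec. 2.1).

    bg_channel, obs_channel : (H, W) uint8 planes of BG and I_t.
    region_mask             : (H, W) bool mask of the region A'_t.
    """
    table = np.full(256, -1, dtype=np.int32)      # -1 marks missing entries
    bg_vals = bg_channel[region_mask]
    obs_vals = obs_channel[region_mask]
    for v in np.unique(bg_vals):
        # The median over all pixels sharing BG value v suppresses noise,
        # as in the paper's example where BG^R = 26 maps to the median 11.
        table[v] = int(np.median(obs_vals[bg_vals == v]))
    # Linear interpolation fills the BG values absent from the region.
    known = np.flatnonzero(table >= 0)
    table = np.interp(np.arange(256), known, table[known]).astype(np.uint8)
    return table   # the estimated BG_t plane is then table[bg_channel]
```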

2.2 Object Extraction by Background Subtraction

After BG_t is estimated, regions corresponding to moving objects are extracted by background subtraction.

To get a more robust result, I_t is divided into blocks, and each block is classified as background or not. The total of the absolute values of the differences in each block is calculated for each color channel. A block whose totals exceed the thresholds in all channels is regarded as a block of an object.

The method changes the threshold value dynamically as the intensity of the whole image changes. The histograms described in Section 2.1 are used to determine the thresholds. The procedure for determining the threshold for an R value is as follows: the range around the mean value that captures 80% or more of the frequency is considered, and its extent from the mean is adopted as the threshold for that R value.

The threshold for an R value that does not appear in the conversion table cannot be determined by the above procedure. In that case, the threshold is copied from the next smaller R value. Thresholds for the G and B channels are determined in the same way.
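Since the 80% rule is stated only briefly, the following sketch fixes one plausible reading: grow a symmetric range around the mean of the observed values until it covers 80% of them, and take its half-width as the threshold. The function name and the step size are assumptions.

```python
import numpy as np

def threshold_for_value(obs_vals, coverage=0.80):
    """One plausible reading of the per-value threshold rule (Sec. 2.2).

    obs_vals : 1-D array of I_t^R values of pixels whose BG^R equals a
               fixed value v (one column of the Sec. 2.1 histogram).
    """
    center = obs_vals.mean()
    radius = 0
    # Widen the range around the mean until it holds >= 80% of the pixels.
    while np.mean(np.abs(obs_vals - center) <= radius) < coverage:
        radius += 1
    return radius   # adopted as the threshold for this R value
```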


Fig. 2. Generating Conversion Table from Histogram

The threshold for each pixel is determined for each color channel. Next, the sum of the absolute values of the differences between BG_t(x, y) and I_t(x, y) and the sum of the threshold values are calculated per channel in each block. Finally, blocks where the difference sums exceed the threshold sums in all channels are regarded as blocks in the moving object; otherwise, they are regarded as blocks in the background.

Each block is analyzed by the above process. Subsequently, a more detailed outline of the moving object is extracted: for blocks that overlap the boundary of a moving object, each pixel is classified into either the moving region or the background region, in the same way as the blocks themselves.
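A compact sketch of the block test follows, under the reading above that a block is a moving-object block when its difference sums exceed its threshold sums in every channel; array shapes and names are illustrative assumptions.

```python
import numpy as np

def classify_blocks(bg_est, I_t, thresh, block=10):
    """Sketch of the block-wise test of Sec. 2.2.

    bg_est, I_t : (H, W, 3) estimated background BG_t and observed frame.
    thresh      : (H, W, 3) per-pixel, per-channel thresholds.
    block       : block size (10 x 10 in the experiments of Sec. 5).
    """
    H, W, _ = I_t.shape
    diff = np.abs(bg_est.astype(np.int32) - I_t.astype(np.int32))
    moving = np.zeros((H // block, W // block), dtype=bool)
    for by in range(H // block):
        for bx in range(W // block):
            ys = slice(by * block, (by + 1) * block)
            xs = slice(bx * block, (bx + 1) * block)
            d = diff[ys, xs].sum(axis=(0, 1))     # per-channel difference sums
            s = thresh[ys, xs].sum(axis=(0, 1))   # per-channel threshold sums
            moving[by, bx] = bool(np.all(d > s))  # object only if all channels exceed
    return moving
```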

3 Handling of Shadow by Using CIELab Color Space

One of the major problems in extracting moving objects is how to handle shadows. When RGB color space is used, many object extraction methods treat shadows as moving objects, because the RGB values of a pixel in a shadow area differ from those of the same pixel in the original background image.

Extracting shadow areas as moving objects causes two problems: two or more target objects may be misrecognized as one object, and a shadow may be detected as a moving object even though no moving object exists in the image.

To handle shadows, methods that use chromatic information, such as the H and S components of HSV color space or the U and V components of YUV color space, have been proposed [6]. The proposed method uses the a* and b* components of CIELab color space because CIELab gives more appropriate results [8].


However, simple background subtraction has only a limited ability to cope with nonlinear intensity changes, even when chromatic information is used. The proposed method therefore adapts the method of Section 2 to extract moving objects using the a* and b* components. The L* component is not used, because the L* value of a pixel in a shadow differs from that of the same pixel in BG.

Let the image obtained by converting the RGB values of each pixel in BG to CIELab values be denoted by BG^Lab, and let the image obtained by converting the RGB values of each pixel in I_t to CIELab values be denoted by I_t^Lab.

The process of the proposed method is as follows. BG^Lab is computed in advance. First, conversion tables for the a* and b* components are generated, in the same way as in Section 2.1, after obtaining I_t^Lab. Next, the background image BG_t^Lab is estimated from BG^Lab and the conversion tables. Finally, moving objects are extracted by the subtraction of BG_t^Lab and I_t^Lab.
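For reference, the standard D65 sRGB-to-CIELab pipeline that yields the a* and b* planes can be sketched as follows. This is a plain NumPy illustration of the textbook conversion, not the paper's GPU implementation.

```python
import numpy as np

def rgb_to_ab(rgb):
    """Compute the CIELab a* and b* planes of an sRGB (uint8) image
    via the standard sRGB -> XYZ -> Lab pipeline (D65 white point)."""
    c = rgb.astype(np.float64) / 255.0
    # Undo the sRGB gamma to get linear RGB.
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ.
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T
    # Normalize by the D65 white point, then apply the Lab nonlinearity.
    xyz /= np.array([0.9505, 1.0, 1.0890])
    delta3 = (6.0 / 29.0) ** 3
    f = np.where(xyz > delta3, np.cbrt(xyz),
                 xyz / (3 * (6.0 / 29.0) ** 2) + 4.0 / 29.0)
    a_star = 500.0 * (f[..., 0] - f[..., 1])   # a*: green-red axis
    b_star = 200.0 * (f[..., 1] - f[..., 2])   # b*: blue-yellow axis
    return a_star, b_star
```

The a* and b* planes of BG and I_t then play the role that the R, G, and B channels played in Section 2.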

4 Speed Up Using the GPU

The proposed method uses CIELab color space, and converting RGB color space to CIELab color space is a heavy computational load. The method uses the GPU to improve processing speed.

Recently, the performance of the Graphics Processing Unit (GPU) has improved so that more polygons can be displayed in less time. The geometry and rendering engines in a GPU are highly parallelized. Moreover, the shading process has become programmable via what is called the programmable shader. Current GPUs thus make geometry and rendering highly programmable.

GPUs are powerful, specialized processors that can be used to speed up not only standard graphics operations but also other processes [9]. A GPU is useful for image processing and can be programmed with a high-level language like C.

However, to use the GPU, a texture containing the image data must be created. The result needed by the CPU must be rendered to a texture, and the CPU then reads that texture data. Also, the processing result of one pixel cannot be used in the processing of other pixels. Therefore, generating the conversion tables and determining the thresholds are performed on the CPU in the proposed method.
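This CPU/GPU split can be illustrated with a NumPy-compatible GPU array library such as CuPy standing in for the programmable shader. The paper predates CUDA-style libraries, so this is only an analogy: rgb_to_ab_gpu is a hypothetical GPU port of the rgb_to_ab sketch above, and build_table is the table builder from Section 2.1; cp.asarray and cp.asnumpy are the only real CuPy calls used.

```python
import cupy as cp   # GPU arrays with a NumPy-like API; assumes a CUDA device

def process_frame(frame_rgb, bg_ab, region_mask, build_table, rgb_to_ab_gpu):
    """Illustrative CPU/GPU split in the spirit of Sec. 4.

    frame_rgb   : (H, W, 3) uint8 observed frame I_t.
    bg_ab       : pair of BG's a* and b* planes (quantized to 8-bit
                  table indices; quantization is omitted here).
    """
    gpu_frame = cp.asarray(frame_rgb)            # upload the frame once
    a_star, b_star = rgb_to_ab_gpu(gpu_frame)    # per-pixel work stays on GPU
    # Transfer back only the two chromatic planes the CPU stage needs,
    # keeping the GPU-to-CPU traffic small.
    a_cpu, b_cpu = cp.asnumpy(a_star), cp.asnumpy(b_star)
    # Table generation and thresholding are data-dependent (histograms,
    # medians), so, as in the paper, they run on the CPU.
    table_a = build_table(bg_ab[0], a_cpu, region_mask)
    table_b = build_table(bg_ab[1], b_cpu, region_mask)
    return table_a, table_b
```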

5 Experimental Results

In the experiments, a general-purpose digital video camera was used as the input device. The white balance of the camera was set to its indoor setting. A PC with a Core Duo T2500 and a GeForce Go 7900 GS was used to extract the moving objects. Each target image frame was 720 × 480 pixels, and the block size was set to 10 × 10.

First, an outdoor scene is used as the original background.

Figure 3 shows the observed images and experimental results. Figure 3-(a)

shows the observed images and Figure 3-(b) shows the experimental results.


Fig. 3. Observed Images and Results

The white regions show the detected moving object, and the black regions correspond to the detected background. Figure 3-(b) demonstrates that this approach extracts the moving object accurately while the shadows are classified as background. Figure 3-(c) shows the results produced by the previous approach [7]; the approach [7] clearly does not handle shadows. Figures 3-(d) and 3-(e) show the results produced by variants of the method of Section 2: the method producing Figure 3-(d) uses the H and S components of HSV, and the method producing Figure 3-(e) uses the U and V components of YUV. These results show that CIELab color space is more appropriate than HSV or YUV for handling shadows.

In this experiment, processing with the GPU required about 7.8 msec/frame. In contrast, processing without the GPU takes about 278.6 msec/frame. These results show that using the GPU increases processing speed dramatically, so that we can extract the moving object in real time. Incidentally, the method [7] takes about 10.8 msec/frame, so the proposed method handles the whole process faster than the method [7]. Even though converting RGB color space to CIELab color space is a heavy process, it runs on the GPU and takes very little time in the proposed method. At the same time, the load on the CPU is decreased: the proposed approach needs two conversion tables, whereas the method [7] needs three. These factors result in an increase in processing speed.

Fig. 4. Observed Images and Results

Next, another experiment was done with an indoor scene. The experiment was done in a room illuminated by fluorescent lamps. One fluorescent lamp is turned off in the middle of the image sequence, and another lamp is turned off after a while.

Figure 4 shows the observed images and experimental results. Figure 4-(a) shows the observed images, whose overall brightness differs from that of the original background image, and Figure 4-(b) shows the experimental results. These results show that this approach extracts the moving object accurately even under rapid illumination changes. Figure 4-(c) shows the results produced by the method [7]; the proposed approach works better than the previous approach [7]. Figure 4-(d) shows the results of directly subtracting the a* and b* components of the background image from those of the observed images; simple background subtraction cannot extract moving objects under illumination changes even if chromatic information is used.

6 Conclusion

A new approach has been presented to extract the moving regions in a video sequence. The approach uses CIELab color space to handle the shadow areas produced by objects. It estimates the a* and b* components of the present background image, which makes it possible to extract a moving object without its shadow areas. Moreover, the proposed method is robust to rapid intensity changes.


Using CIELab color space increases the processing cost, because converting RGB color space to CIELab color space is a heavy process. This problem is solved, and a real-time implementation achieved, by using the GPU. We expect the GPU to speed up other image processing operations as well.

Some cases of incorrect detection remain. Future work includes automatic updating of the background image and handling small movements, such as swaying trees or the shaking of a curtain.

Acknowledgment

This work is supported by THE HORI INFORMATION SCIENCE PROMOTION FOUNDATION, Grant-in-Aid for Scientific Research No. 16500108 from the Japanese Ministry of Education, Science and Culture, and a Chubu University grant. Woodham's research is supported by the Canadian Natural Sciences and Engineering Research Council (NSERC).

References

1. Stauffer, C., Grimson, E.: Learning patterns of activity using real-time tracking. IEEE Trans. on PAMI 22(8), 747-757 (2000)
2. KaewTraKulPong, P., Bowden, R.: An improved adaptive background mixture model for real-time tracking with shadow detection. In: Proc. of the 2nd European Workshop on Advanced Video-Based Surveillance Systems (2001)
3. Sato, Y., Kaneko, S., Igarashi, S.: Robust Object Detection and Segmentation by Peripheral Increment Sign Correlation Image. Trans. of the IEICE (in Japanese) J84-D-II(12), 2585-2594 (2001)
4. Prati, A., Mikic, I., Trivedi, M.M., Cucchiara, R.: Detecting Moving Shadows: Algorithms and Evaluation. IEEE Trans. on PAMI 25(7), 918-923 (2003)
5. Salvador, E., Cavallaro, A., Ebrahimi, T.: Cast shadow segmentation using invariant colour features. Computer Vision and Image Understanding 95(2), 238-259 (2004)
6. Gonzalez, R., Woods, R.: Digital Image Processing. Addison-Wesley Longman Publishing Co. Inc., Redwood City, CA, USA (2001)
7. Fukui, S., Iwahori, Y., Itoh, H.: Robust Method for Extracting Moving Object to Intensity Change Using Color Image Sequence (in Japanese). In: Proc. of Meeting on Image Recognition and Understanding 2005, pp. 814-821 (2005)
8. Khan, E.A., Reinhard, E.: A Survey of Color Spaces for Shadow Identification. In: ACM Symposium on Applied Perception in Computer Graphics and Visualization, p. 160. ACM Press, New York (2004) (abstract)
9. http://www.gpgpu.org/


