BUILDING EXTRACTION FROM MULTIPLE DATA SOURCES: A DATA FUSION
FRAMEWORK FOR RECONSTRUCTION OF GENERIC MODELS
K. Khoshelham
Dept. of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong
*******.*@*****.***.**
Commission III, WG III/4
KEY WORDS: Building Extraction, Fusion, Automation, Modelling, Aerial image, Laser scanning, GIS
ABSTRACT:
Automated building extraction from multi-source data has attracted great attention in recent years. This paper presents an approach to
automated building extraction by fusion of image data, height data and 2D ground plans. In this approach buildings are detected
using ground plans and height data. A split-and-merge process is applied to fuse image and height data and derive the parametric
forms of roof planes. Vegetation regions are identified and discarded using the image information in red and infrared channels. Walls
are reconstructed as vertical planes upon the ground plan. The model planar faces are finally intersected and resulting plane patches
are assembled together to form a generic polyhedral model. Results of the experimental testing indicate the promising performance of
the proposed approach in automatic detection and reconstruction of buildings.
1. INTRODUCTION

Automated building extraction has been a challenging problem in the past two decades. Automated approaches that work solely on a single source of data suffer from a lack of robustness due to complexities in the data as well as in the buildings. Therefore, in recent years research efforts have focused on automated approaches that make use of data from multiple sources. This paper presents a framework for fusion of available data sources that can be used in an automated system for extraction of building objects.

Various types of data from different sources have been used for automated extraction of buildings. Aerial images are the most widely used data. Single aerial images have been used for automated detection and reconstruction of buildings with simple models (Huertas et al., 1993; Lin et al., 1995; Lin and Nevatia, 1996; Nevatia et al., 1997; Shufelt and McKeown, 1993). For reconstruction of more complex buildings, stereo and multiple-overlap aerial images have attracted greater attention (Baillard et al., 1999; Bignone et al., 1996; Dang et al., 1994; Fischer et al., 1998; Fua and Hanson, 1991; Henricsson, 1998; Henricsson and Baltsavias, 1997; Herman and Kanade, 1986; Jaynes et al., 2003; Kolbe, 1999; Moons et al., 1998). Image data from other sources has been less suitable for building reconstruction: remotely sensed images from satellites are of relatively low ground resolution and can therefore only be used for detection of buildings, while close-range images are, on the other hand, too detailed and are mainly used to map textures onto the final reconstructed models.

Height data is another widely used type of data. Digital surface models (DSMs) from aerial laser scanning systems have been used in a number of approaches (Brunn and Weidner, 1997; Maas, 1999; Vosselman, 1999; Weidner and Forstner, 1995). Range data from terrestrial laser scanners, however, has not proved useful for automated building extraction.

Automated building extraction from both image and height data encounters a number of complexities. Image data often suffers from noise, low contrast, shadow and occlusion; hence, features extracted from images are incomplete and uncertain. Height data is of relatively low resolution, which makes the extraction of building boundaries difficult. These complexities have led research efforts toward methods that combine data from multiple sources. In recent years a number of methods have been developed for fusion of image and height data (Ameri, 2000; Cord et al., 2001; Jaynes et al., 2003; Rottensteiner and Jansa, 2002), although in some of these methods the height data has been generated from image data using matching techniques. Fusion of image data and 2D ground plans has also appeared in a number of works (Haala and Anders, 1996; Jibrini et al., 2000; Pasko and Gruber, 1996; Suveg and Vosselman, 2004). In another fusion strategy, Haala and Brenner (1998) used a DSM and 2D ground plans in an approach to automated building extraction.

Despite the great deal of research that has been carried out on automated building extraction, the role of multi-source data and fusion strategies has not been completely explored. The objective of this paper is to develop a framework for fusion of available data sources that can be used for automated extraction of buildings. The proposed fusion framework combines aerial images in colour and infrared channels, a DSM and DTM from an aerial laser scanner, and 2D ground plans from a GIS database in an approach to automated extraction of buildings. In this approach, ground plans are used to detect buildings in the scene and reconstruct the walls. Image data in red and infrared channels is used to identify and remove vegetation regions. Roof planes are reconstructed by exploiting information from the image, DSM and DTM. Generic polyhedral models are finally formed by assembling the reconstructed walls and roof planes.

The paper is structured in 6 sections. In section 2 an overview of the proposed fusion strategy is presented. Reconstruction of walls and roof planes is described in section 3. Section 4 discusses the reconstruction of generic models using a plane patch reconstruction technique. Experiments and results are shown in section 5. Conclusions are drawn in section 6.
2. A FRAMEWORK FOR FUSION OF IMAGE DATA, HEIGHT DATA AND 2D GROUND PLANS

The first step in a building extraction system is the detection of buildings in the scene. In this work, buildings are detected by projecting ground plans to the image data. For this purpose, the third dimension (height) of the ground plans is interpolated in the height data.

Having detected the buildings in the scene, the reconstruction part is based on finding parametric forms of the model planar faces. These planar faces are then intersected and the resulting plane patches are assembled to form a generic polyhedral model. Roof planes are reconstructed by fusion of image and height data through a split-and-merge process. This process starts with segmenting the image data within the localized areas. Height points from the DSM are projected to the extracted image regions and a robust regression method is employed to fit planar faces to the height points belonging to each image region. Regions in which more than a single plane is detected are split, and neighbouring regions whose planes are coplanar are merged. Vegetation regions are identified and discarded by computing an NDVI measure derived from the red and infrared channels. Every planar face is attributed based on its slope and height over the DTM. A planar face is attributed as non-roof if its height over the DTM is smaller than a minimum tolerance; otherwise it is attributed as flat roof if its largest slope is smaller than a slope threshold, or as slanted roof if the largest slope is larger than the slope threshold.

Wall faces are obtained by reconstructing a vertical wall over every line segment of the 2D ground plan. The average terrain height, derived from the DTM, defines the planar surface that lies beneath the building. After the parametric forms of all planar faces of the building are computed, every three planar faces are intersected and the resulting vertex is verified to make sure it is a correct model vertex. The verified vertices of each planar face are sorted in order to form a planar patch. The planar patches form the final generic polyhedral model, which can be visualized using a graphical engine.

The basic assumption in this approach is that buildings are formed by planar faces and that walls are vertical. In addition, building roofs are assumed to be one of the following three types: flat roof, gable roof and hipped roof. In other words, this approach aims at reconstructing simple building types using a boundary-representation (B-Rep) modelling scheme. Nevertheless, more complex buildings such as buildings with cross-gabled roofs can still be reconstructed by adopting a Constructive Solid Geometry (CSG) modelling scheme. In this way, similar to the method developed by Suveg and Vosselman (2004), the 2D ground plan is first partitioned into rectangular parts, each part is reconstructed using the plane patch reconstruction method described above, and the building parts are then combined to form the final generic model.

3. RECONSTRUCTION OF PARAMETRIC FORMS OF THE MODEL PLANAR FACES

Buildings are localized in the image using ground plans and height data. A split-and-merge process is applied to fuse image and height data in the localized areas and derive the parametric forms of roof planes. Walls are reconstructed by finding the parametric forms of vertical planes built on the ground plan. The following sections describe these processes in more detail.

3.1 Localization of buildings using ground plans and height data

A 2D ground plan is usually stored as a polygon with an array of corner points with X and Y coordinates in the world coordinate system. The footprint of each building is localized in the image by interpolating the height of every corner point of the ground plan in the DTM and projecting the resulting 3D corner points to the image. Interpolation of heights in the DSM with the same procedure helps to find the roof boundaries of the building in the image, assuming that walls are vertical and there is no eave overshooting. Concatenation of these two polygons (footprint and roof boundary) defines the actual area where the building appears in the image.

3.2 Reconstruction of roof planes using image and height data

Reconstruction of roof planes is based on image regions extracted in areas where building candidates are detected. Extraction of image regions is carried out using the watershed segmentation algorithm (Vincent and Soille, 1991). The extended minima transform (Soille, 1999) is employed to control excessive oversegmentation.

While a desirable segmentation is a partitioning of the image into regions where each region corresponds to a single planar face in object space, segmentation algorithms often result in undergrown and/or overgrown regions. The purpose of the split-and-merge process is to refine the result of the initial segmentation by making use of clues derived from the DSM. For this purpose, height points are projected from the DSM to the extracted image regions and a robust regression method is used to fit planar faces to the height points belonging to each image region. This method is based on random selection of a finite set of samples from the data (trial estimates) (Fischler and Bolles, 1981). The least median of squared residuals (Rousseeuw and Leroy, 1987) is used to find the best sample and also the outlier points. Each sample contains three data points randomly selected from the DSM. These points define a plane. For all other points a residual value is calculated expressing how well they fit this plane. The sample with the least median of squared residuals is selected for outlier detection. Outliers are detected as points with residuals larger than a predefined tolerance and are treated as a new dataset to determine whether they fit a new plane. The plane fitting process is iterated until no more planes can be fitted to the data points.

After planar faces are detected in each image region, the segmented image is searched for regions in which more than one plane is detected. Those regions are overgrown regions; hence, they are split into two or more regions depending on the number of detected planes. To detect and merge undergrown regions, first a region adjacency graph is constructed by tracking region boundaries in the segmented image. The plane parameters of every two neighbouring regions enter a coplanarity check, and the two neighbouring regions are merged if their associated planes are coplanar.

An example of the performance of the split-and-merge process is demonstrated in figure 1. As can be seen in figure 1(B), the initial segmentation results in an overgrown and an undergrown region in the right part of the roof. The result of the split-and-merge process is shown in figure 1(C), where the overgrown region is split and two undergrown regions are merged to form a correct roof region.
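The robust plane fitting used in the split-and-merge process can be sketched as follows. This is a minimal illustration of least-median-of-squares plane fitting as described above, not the paper's implementation; the function names, number of trials, residual tolerance and minimum region size are illustrative assumptions:

```python
import numpy as np

def fit_plane(sample):
    """Unit normal n and constant k of the plane n . x = k through three points."""
    p0, p1, p2 = sample
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm < 1e-12:                      # collinear sample defines no plane
        return None
    n = n / norm
    return n, float(n @ p0)

def lmeds_planes(points, n_trials=300, tol=0.3, min_points=10, seed=0):
    """Peel planes off a height-point set by least median of squared residuals:
    sample triples of points, keep the trial plane whose squared residuals have
    the smallest median, then treat the outliers as a new dataset."""
    rng = np.random.default_rng(seed)
    remaining = np.asarray(points, dtype=float)
    planes = []
    while len(remaining) >= min_points:
        best = None
        for _ in range(n_trials):
            sample = remaining[rng.choice(len(remaining), 3, replace=False)]
            plane = fit_plane(sample)
            if plane is None:
                continue
            n, k = plane
            residuals = np.abs(remaining @ n - k)
            med = np.median(residuals ** 2)
            if best is None or med < best[0]:
                best = (med, n, k, residuals)
        if best is None:
            break
        _, n, k, residuals = best
        inliers = residuals <= tol
        if inliers.sum() < min_points:    # no further plane supported by the data
            break
        planes.append((n, k))
        remaining = remaining[~inliers]   # outliers may form another plane
    return planes
```

Each returned plane is a pair (n, k) of unit normal and constant; an image region whose height points yield more than one such plane would be flagged for splitting.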
Figure 1: The split-and-merge process. A. The original image; B. The initial segmentation; C. The result of the split-and-merge process.

The result of the split-and-merge process is more likely to be a correct partitioning of the image, where each region associates with a single surface in object space. However, planar faces might also have been detected in vegetation regions. To avoid the influence of vegetation, image data in the red and near infrared channels is used to identify and discard vegetation regions. For this purpose a normalized difference vegetation index (NDVI) is computed for each pixel from the following equation:

NDVI = (NIR - RED) / (NIR + RED)    (1)

where NIR and RED denote pixel values in the near infrared and red channels respectively. NDVI is related to the proportion of photosynthetically absorbed radiation and its value varies from -1 to +1. Vegetation is characterized by high NDVI values, and a region is identified as a vegetated region if at least 70% of its pixels are vegetation pixels.

To determine the final roof planes among the remaining regions, the difference between DSM height and DTM height for the height points belonging to each region is used. A planar surface is attributed as non-roof if the difference between its average DSM height and DTM height is smaller than a minimum threshold; otherwise it is attributed as flat-roof if its largest slope is smaller than a slope threshold, or as slanted-roof if the largest slope is larger than the slope threshold.

3.3 Reconstruction of vertical walls upon ground plans

The parametric form of a plane passing through a point p = (xp, yp, zp)T and perpendicular to a normal vector n = (a1, a2, a3)T can be written as:

a1 x + a2 y + a3 z = k    (2)

where:

k = a1 xp + a2 yp + a3 zp    (3)

For a vertical plane a3 = 0 and the plane equation becomes:

a1 x + a2 y = k    (4)

where:

k = a1 xp + a2 yp

Therefore, each line segment of the ground plan with endpoints p1 = (x1, y1)T and p2 = (x2, y2)T can solve for a1 and a2 and define a vertical plane as follows:

a1 x + a2 y + a3 z + a4 = 0

where:

a1 = y1 - y2
a2 = x2 - x1    (5)
a3 = 0
a4 = x1 y2 - x2 y1

Parameters of the reconstructed wall faces are stored along with the two endpoints used in the calculations. The wall faces and the planar roof faces computed in the split-and-merge process enter the plane patch reconstruction procedure described below.

4. RECONSTRUCTION OF PLANE PATCHES

A plane patch is defined as a planar polygon in 3D space. So far, the computed model faces are represented in parametric form. For graphical visualization, however, one requires plane patches, which are represented by their vertices. Reconstruction of plane patches is carried out in three steps: plane intersection, verification of vertices and sorting of vertices. In the following these steps are described in more detail.

4.1 Plane intersection

Two planes in 3D space intersect in a line if they are not parallel or coplanar. This line intersects a third plane in one point if it is not parallel to it and does not lie in it. Therefore, in the regular case three planes in 3D space intersect in a point if they are not in a special relation to each other. In algebraic form, a system of three equations of the form denoted in eq. 2 has exactly one solution if the equations are linearly independent. More precisely, if the normal vectors of the three planes are linearly independent then the equations of the planes form a regular system of three equations in three unknowns as denoted in eq. 6:

a11 x + a12 y + a13 z = k1
a21 x + a22 y + a23 z = k2    (6)
a31 x + a32 y + a33 z = k3

where aij are plane parameters and ki are constants. In order to verify the linear independence of the normal vectors, let:

Δ = | a11  a12  a13 |
    | a21  a22  a23 |    (7)
    | a31  a32  a33 |

The normal vectors of the three planes are linearly independent if Δ ≠ 0, in which case the set of equations 6 has a unique solution that can be calculated using Cramer's rule (Pedoe, 1963):
x = Δx / Δ,   y = Δy / Δ,   z = Δz / Δ

where:

Δx = | k1  a12  a13 |
     | k2  a22  a23 |
     | k3  a32  a33 |

Δy = | a11  k1  a13 |
     | a21  k2  a23 |    (8)
     | a31  k3  a33 |

Δz = | a11  a12  k1 |
     | a21  a22  k2 |
     | a31  a32  k3 |

Every three planar faces of the model are intersected and the intersection point, if there is one, is stored as a vertex for each of the three faces.

4.2 Verification of vertices

Intersection of model faces may generate incorrect vertices. Figure 2(B) shows an example of an incorrect vertex generated from the intersection of three model faces. All generated vertices, therefore, have to be verified in order to identify and remove the incorrect ones. To identify incorrect vertices, two constraints are used as follows:

Constraint 1: A valid vertex lies either in or under any roof plane.

Recall that each building part is assumed to have one of the three presumed roof types: flat, gable and hipped. For these building types, all model vertices lie either in or under any roof plane. This property allows us to verify model vertices and identify invalid ones. The verification is carried out by evaluating the function of the roof plane with the coordinates of the vertex of interest. This results in zero if the vertex lies in the roof plane; otherwise the sign of the resulting value and the direction of the plane normal vector determine whether the vertex point is above or under the roof plane. Every vertex is verified against all roof planes and is removed if it is higher than any of the roof planes.

Figure 2: Intersection of model faces may generate incorrect vertices. A. A correct vertex; B. An incorrect vertex.

Constraint 2: A valid vertex projects on or inside the polygon of the ground plan.

This constraint is based on the assumption that roof eaves do not overshoot walls. Figure 3 illustrates the procedure to verify whether a test point is on or inside a polygon. First, vectors are formed from the test point to every polygon vertex, and then the cross product of every two adjacent vectors is computed. The test point is determined to be inside the polygon if all computed cross products are non-zero and have the same sign. Otherwise, if the cross products are non-zero with different signs, the test point is outside the polygon. If any of the cross products is zero, then the test point falls either on a polygon side or on the extension of a polygon side. The distinction between the two cases is made by checking the dot product of the corresponding two vectors (which have a zero cross product). For a point on a side of the polygon, one of the cross products is zero and the corresponding dot product is negative, while for a point on the extension of a polygon side the dot product is positive (figure 3).

Figure 3: Four possible positions of a point with respect to a polygon. A. Point inside the polygon: all cross products are non-zero and have the same sign; B. Point outside the polygon: cross products are non-zero but have different signs; C. Point on the extension of a polygon side: a zero cross product with a positive dot product of the corresponding vectors; D. Point on the polygon side: a zero cross product but a negative dot product of the corresponding vectors.

A correct vertex is one that satisfies both constraints. An incorrect vertex will fail to satisfy one or both constraints and will be removed from the corresponding planar faces.

4.3 Sorting of vertices

For graphical visualization of the reconstructed model, the vertices of each planar face must be given in order. To sort the vertices a simple algorithm is used, which is based on forming vectors from the centre of gravity of the vertices to each vertex and finding the angle between each vector and a starting vector (figure 4). The angle between vectors u and v is given by the dot product:

cos θ = (u · v) / (|u| |v|)    (9)

Since the cosine function returns the same value for θ and -θ, the sign of the cross product between the two vectors is used to determine the direction of the angle. The algorithm starts with an arbitrary vertex and sorts the other vertices with respect to the angle of their vectors with the starting vector. The details of the algorithm are as follows:
Algorithm: Sort vertices
Select an arbitrary vertex as the starting vertex;
Find the centre of gravity of the remaining vertices;
Form vectors from the centre of gravity to each vertex;
Compute the angle between each vector and the starting vector (eq. 9);
Find the sign of the cross product of each vector and the starting vector;
For any vector with a negative sign of the cross product, change the angle θ to 2π - θ;
Sort the vertices with respect to their angles.

Figure 5: Colour aerial image of the selected scene.
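The sorting algorithm can be sketched in Python. This is a minimal 2D illustration, under the assumption that the vertices of a 3D planar face have already been projected onto the plane of the face; the function name is illustrative:

```python
import math

def sort_face_vertices(vertices):
    """Order the 2D vertices of a planar face around their centre of gravity,
    starting from an arbitrary first vertex (the algorithm of section 4.3)."""
    cx = sum(v[0] for v in vertices) / len(vertices)
    cy = sum(v[1] for v in vertices) / len(vertices)
    sx, sy = vertices[0][0] - cx, vertices[0][1] - cy   # starting vector

    def angle(v):
        wx, wy = v[0] - cx, v[1] - cy
        dot = sx * wx + sy * wy
        cross = sx * wy - sy * wx          # z-component of the cross product
        cos_t = dot / (math.hypot(sx, sy) * math.hypot(wx, wy))
        theta = math.acos(max(-1.0, min(1.0, cos_t)))
        # the cosine alone is ambiguous in sign: a negative cross product means
        # the vertex lies on the other side, so its angle becomes 2*pi - theta
        return 2 * math.pi - theta if cross < 0 else theta

    return sorted(vertices, key=angle)
```

For a convex face the centre of gravity lies inside the polygon, so every vector from it to a vertex is well defined.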
Sorted vertices of model faces form the planar patches. The
generic polyhedral model is reconstructed once all its planar
patches are formed.
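As a compact illustration of the intersection and verification steps of sections 4.1 and 4.2, the sketch below intersects three planes by solving the linear system of eq. 6 directly (equivalent to applying Cramer's rule as in eq. 8) and applies the cross-product point-in-polygon test of Constraint 2; the function names and tolerances are illustrative assumptions:

```python
import numpy as np

def intersect_three_planes(planes, eps=1e-9):
    """Intersect three planes given as (normal, k) with n . x = k (eq. 6).
    Returns the vertex, or None when the determinant (eq. 7) vanishes."""
    A = np.array([p[0] for p in planes], dtype=float)   # rows: normal vectors
    k = np.array([p[1] for p in planes], dtype=float)
    if abs(np.linalg.det(A)) < eps:      # normals not linearly independent
        return None
    return np.linalg.solve(A, k)         # unique solution of the regular system

def point_in_polygon(pt, polygon):
    """Cross-product test of Constraint 2: True if pt is inside the polygon
    or on one of its sides, False if outside or on a side's extension."""
    signs = []
    n = len(polygon)
    for i in range(n):
        u = np.subtract(polygon[i], pt)
        w = np.subtract(polygon[(i + 1) % n], pt)
        cross = u[0] * w[1] - u[1] * w[0]
        if abs(cross) < 1e-12:
            # zero cross product: on the side itself if the dot product
            # is negative, on the extension if it is positive
            return np.dot(u, w) <= 0
        signs.append(cross > 0)
    return all(signs) or not any(signs)  # all cross products share one sign
```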
Figure 6: The DSM of the selected scene.
Figure 4: Sorting the vertices of a planar face.
5. EXPERIMENTAL RESULTS
Figure 7: The DTM of the selected scene.
The image data used in the experiment is an orthorectified aerial
image of 0.5m ground resolution acquired in RGB and NIR
channels. Figure 5 shows the RGB image of the selected scene.
Height data consists of a last echo DSM and a DTM in regular
grid format with 1.0m resolution acquired using a LIDAR (light
detection and ranging) system. Figure 6 and figure 7 depict the
DSM and the DTM respectively in grey levels where a brighter
level stands for a higher altitude. Ground plans of the buildings
in the scene were manually digitised from the image data and are shown in figure 8. Using the information in the red and near infrared channels, the NDVI measure was computed for each image pixel. The resulting NDVI map is shown in figure 9, where high NDVI values (vegetation pixels) are depicted in yellow and red colours.

Figure 8: The ground plans manually digitised from the image.
Buildings were localized in the image and the split-and-merge
process was applied to localized areas to reconstruct the roof
planes. Image regions with NDVI values higher than 0.35 were
identified as vegetation and were removed from the process.
Figure 10 shows the results of the split-and-merge process as
applied on the images of the three buildings in the scene. As can
be seen in figure 10(B), the initial segmentation results in
undergrown and overgrown regions in all cases. Figure 10(C),
however, shows that these undergrown and overgrown regions are successfully split and merged respectively.

Figure 9: The NDVI map computed from image data in the red and infrared channels.
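The vegetation removal used in the experiment can be sketched as follows, combining the NDVI of eq. 1 with the thresholds reported above (0.35 for vegetation pixels, at least 70% vegetation pixels for a vegetated region); the function names are illustrative:

```python
import numpy as np

def vegetation_mask(nir, red, ndvi_threshold=0.35):
    """Per-pixel NDVI (eq. 1) and a boolean vegetation mask."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    ndvi = np.zeros_like(nir)
    valid = (nir + red) > 0               # avoid division by zero
    ndvi[valid] = (nir[valid] - red[valid]) / (nir[valid] + red[valid])
    return ndvi, ndvi > ndvi_threshold

def is_vegetated_region(region_mask, veg_mask, min_fraction=0.7):
    """A region is discarded if at least 70% of its pixels are vegetation pixels."""
    pixels = veg_mask[region_mask]
    return pixels.size > 0 and pixels.mean() >= min_fraction
```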
The plane patch reconstruction technique was applied to the parametric forms of the roof planes computed in the split-and-merge process and the wall planes reconstructed upon the ground plans. Figure 11 shows the generic polyhedral models reconstructed by assembling the plane patches.

Figure 10: Results of the split-and-merge process for reconstruction of roof planes. A. Original images; B. Initial segmentations; C. Reconstructed roofs shown as image regions.

Figure 11: The reconstructed models.

6. CONCLUSION

A framework for fusion of image data, height data and 2D ground plans was presented in this paper. This fusion framework was used in an approach to automated detection and reconstruction of buildings. The relative simplicity of this approach suggests that fusing data from multiple sources can greatly reduce the complexity of the problem. Using generic models is advantageous since a larger number of buildings can be modelled. Nevertheless, successful reconstruction of generic models heavily depends on the results of detection. This is because these models are not sufficiently specific, and other objects might as well be reconstructed as buildings. In this approach, however, detection of buildings using ground plans and height data is straightforward and guarantees that other objects will not be reconstructed as buildings.

The focus in the present work was on the reconstruction of simple roof types. Future research will target more complex roof types by adopting a CSG modelling scheme. This will concern finding all possible ways to partition the ground plan and selecting the correct one based on the number, parameters and attributes of the reconstructed roof planes. Building parts will then be reconstructed using the approach described in this paper.

ACKNOWLEDGEMENT

The work described in this paper was supported by a grant from the Hong Kong Polytechnic University (Project No. G-W122). The author would also like to thank TopoSys Topographische Systemdaten GmbH for providing the dataset used in the experiment.

REFERENCES

Ameri, B., 2000. Automatic recognition and 3D reconstruction of buildings from digital imagery. PhD Thesis, Stuttgart University, Stuttgart, 110 pp.

Baillard, C., Schmid, C., Zisserman, A. and Fitzgibbon, A., 1999. Automatic line matching and 3D reconstruction of buildings from multiple views, ISPRS Conference on Automatic Extraction of GIS Objects from Digital Imagery, IAPRS Vol. 32, Part 3-2W5, pp. 69-80.

Bignone, F., Henricsson, O., Fua, P. and Stricker, M., 1996. Automatic extraction of generic house roofs from high resolution aerial imagery, ECCV '96: 4th European Conference on Computer Vision, Cambridge, UK, April 15-18, pp. 85-96.

Brunn, A. and Weidner, U., 1997. Extracting buildings from digital surface models, ISPRS Workshop on 3D Reconstruction and Modelling of Topographic Objects, Stuttgart, pp. 27-34.

Cord, M., Jordan, M. and Cocquerez, J.-P., 2001. Accurate building structure recovery from high resolution aerial imagery. Computer Vision and Image Understanding, 82: 138-173.

Dang, T., Jamet, O. and Maitre, H., 1994. Applying perceptual grouping and surface models to the detection and stereo reconstruction of buildings in aerial imagery. In: H. Ebner, C. Heipke and K. Eder (Editors), ISPRS Symposium on Spatial Information from Digital Photogrammetry and Computer Vision, Munich, pp. 165-172.

Fischer, A. et al., 1998. Extracting buildings from aerial images using hierarchical aggregation in 2D and 3D. Computer Vision and Image Understanding, 72(2): 185-203.

Fischler, M.A. and Bolles, R.C., 1981. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6): 381-395.

Fua, P. and Hanson, A.J., 1991. An optimization framework for feature extraction. Machine Vision and Applications, (4): 59-87.

Haala, N. and Anders, K., 1996. Fusion of 2D GIS and image data for 3D building reconstruction, International Archives of Photogrammetry and Remote Sensing, Vol. XXXI, Part B3, pp. 285-290.
Haala, N. and Brenner, C., 1998. Interpretation of urban surface models using 2D building information. Computer Vision and Image Understanding, 72(2): 204-214.

Henricsson, O., 1998. The role of color attributes and similarity grouping in 3-D building reconstruction. Computer Vision and Image Understanding, 72(2): 163-184.

Henricsson, O. and Baltsavias, E., 1997. 3-D building reconstruction with ARUBA: a qualitative and quantitative evaluation. In: A. Gruen, E. Baltsavias and O. Henricsson (Editors), Automatic Extraction of Man-Made Objects from Aerial Images (II). Birkhauser Verlag, Basel, pp. 65-76.

Herman, M. and Kanade, T., 1986. Incremental reconstruction of 3D scenes from multiple complex images. Artificial Intelligence, (30): 289-341.

Huertas, A., Lin, C. and Nevatia, R., 1993. Detection of buildings from monocular views of aerial scenes using perceptual organization and shadows, ARPA Image Understanding Workshop, Washington, DC, pp. 253-260.

Jaynes, C., Riseman, E. and Hanson, A., 2003. Recognition and reconstruction of buildings from multiple aerial images. Computer Vision and Image Understanding, 90(1): 68-98.

Jibrini, H., Paparoditis, N., Pierrot-Deseilligny, M. and Maître, H., 2000. Automatic building reconstruction from very high resolution aerial stereopairs using cadastral ground plans, 19th ISPRS Congress, Amsterdam, The Netherlands.

Kolbe, T.H., 1999. Identifikation und Rekonstruktion von Gebäuden in Luftbildern mittels unscharfer Constraints [Identification and reconstruction of buildings in aerial images using fuzzy constraints]. PhD Thesis, University of Vechta, Vechta, 149 pp.

Lin, C., Huertas, A. and Nevatia, R., 1995. Detection of buildings from monocular images. In: A. Gruen, O. Kuebler and P. Agouris (Editors), Automatic Extraction of Man-Made Objects from Aerial and Space Images. Birkhauser Verlag, Basel, pp. 125-134.

Lin, C. and Nevatia, R., 1996. Buildings detection and description from monocular aerial images, ARPA Image Understanding Workshop, Palm Springs, CA.

Maas, H.G., 1999. Closed solutions for the determination of parametric building models from invariant moments of airborne laserscanner data, ISPRS Conference on Automatic Extraction of GIS Objects from Digital Imagery, Munich, pp. 193-199.

Moons, T., Frere, D., Vandekerckhove, J. and Van Gool, L., 1998. Automatic modeling and 3D reconstruction of urban house roofs from high resolution aerial imagery, European Conference on Computer Vision, Freiburg, Germany, pp. 410-425.

Nevatia, R., Lin, C. and Huertas, A., 1997. A system for building detection from aerial images. In: A. Gruen, E. Baltsavias and O. Henricsson (Editors), Automatic Extraction of Man-Made Objects from Aerial Images (II). Birkhauser Verlag, Basel, pp. 77-86.

Pasko, M. and Gruber, M., 1996. Fusion of 2D GIS data and aerial images for 3D building reconstruction, International Archives of Photogrammetry and Remote Sensing, Vol. XXXI, Part B3, pp. 461-468.

Pedoe, D., 1963. A Geometric Introduction to Linear Algebra. Wiley, New York, 224 pp.

Rottensteiner, F. and Jansa, J., 2002. Automatic extraction of buildings from LIDAR data and aerial images, International Archives of Photogrammetry and Remote Sensing, Vol. XXXIV, Part 4, pp. 569-574.

Rousseeuw, P.J. and Leroy, A.M., 1987. Robust Regression and Outlier Detection. John Wiley & Sons, New York, 329 pp.

Shufelt, J. and McKeown, D., 1993. Fusion of monocular cues to detect man-made structures in aerial imagery. CVGIP: Image Understanding, 57(3): 307-330.

Soille, P., 1999. Morphological Image Analysis: Principles and Applications. Springer, Berlin, 316 pp.

Suveg, I. and Vosselman, G., 2004. Reconstruction of 3D building models from aerial images and maps. ISPRS Journal of Photogrammetry and Remote Sensing, 58: 202-224.

Vincent, L. and Soille, P., 1991. Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(6): 583-598.

Vosselman, G., 1999. Building reconstruction using planar faces in very high density height data, ISPRS Conference on Automatic Extraction of GIS Objects from Digital Imagery, Munich, pp. 87-92.

Weidner, U. and Forstner, W., 1995. Towards automatic building extraction from high resolution digital elevation models. ISPRS Journal of Photogrammetry and Remote Sensing, 50(4): 38-49.