Abstract

In this work, we describe multi-layered analyses of a high-resolution broad-area LADAR data set in support of expeditionary activities. High-level features are extracted from the LADAR data, such as the presence and location of buildings and cars, and then these features are used to populate a GIS (geographic information system) tool. We also apply line-of-sight (LOS) analysis to develop a path-planning module. Finally, visualization is addressed and enhanced with a gesture-based control system that allows the user to navigate through the enhanced data set in a virtual immersive experience. This work has operational applications including military, security, disaster relief, and task-based robotic path planning.

© 2013 Optical Society of America

1. Introduction

Recently, LADAR systems [1] have been developed that are capable of acquiring high-resolution elevation data (transverse linear resolution: 30 cm) over a broad area (thousands of km2). Figure 1 (left) displays the remarkable detail of a LADAR data set generated by the ALIRT (Airborne Ladar Imaging Research Testbed) system [1], with the Eyeglass software [2]. The 3D profile of individual buildings can be clearly seen. Figure 1 (right) shows the corresponding satellite imagery.

 

Fig. 1 (left) High-resolution LADAR data. Image displayed with Eyeglass software package [2]. (right) Satellite imagery of the same region as on the left. Satellite imagery credit: DigitalGlobe. Approved for public release 13-387.


In this work, we explore computational analysis of the high-resolution ALIRT LADAR (hereafter referred to as the LADAR data) data set for feature extraction and building classification. Results of the analysis are inserted into layers on a GIS (geographic information system). This approach allows us to display, or fuse, LADAR data and LADAR-derived data with visible data. The LADAR data and the visible data set that we used were both aligned to absolute latitude and longitude coordinates, so that such fusion was possible. We should note that we use the term “alignment” in a loose sense; to the eye, the LADAR and visual data sets line up well. Precise geo-registration of such data sets is an active area of research [3], including algorithms to account for variations in scale and lens distortion. An advantage of using GIS is that other types of data, such as road data, can be integrated into the system.

In the first section, we describe processing of the LADAR data to extract features, including buildings, cars, and trees. Once a feature is identified, parameters such as its height and area can be computed. Analysis of the distribution of buildings and their heights can reveal population distribution and demographic information. Because we extract only the significant features from the LADAR data set, the resultant data set is substantially compressed; for example, instead of storing the vertical profile of every feature, the user might only need the building locations. Based on building locations, we can identify neighborhoods and densely populated areas. We developed Matlab tools to query the GIS system [4].

Next, we investigate path-planning algorithms to find the best path through a region. The LADAR data allow us to find the local slope, from which we can estimate the speed at which a human or vehicle would travel across the region. We can also identify impassable regions, such as steep mountains or buildings. This analysis also uses the road data, which we loaded as a layer in the GIS. The fastest path can then be found and displayed in the GIS.

For military users, the fastest path may not necessarily be the best path. A soldier may want to operate covertly, and walk through areas where the terrain obscures him from view. We did a line-of-sight (LOS) analysis to determine which regions were obscured, and which regions were more visible. Then, we modified the path-planning method to find a route that was fast, but also covert.

We also investigated how the user can interact with the GIS. Using a gesture-control system, the user can move around the data set, rotate the data set, and turn layers on and off. This gesture-control system presents a very natural interface and allows the user to explore the data set without taking their attention away from the data.

Related work with LADAR data can be found in the literature. In [5], the authors integrate LADAR data with optical imagery and investigate fusion algorithms. They also show fusion with a GIS system, although in a somewhat different way than shown in this work. The work of [6] demonstrates detection of buildings from a ground-based LADAR system, and the work of [7] demonstrates methods for target recognition based on LADAR data. In [8], the authors use GIS data for UAV and UGV navigation. In [9], the author shows the use of a GIS tool (GloVis) that integrates data from multiple sensors, to illustrate geographic concepts. This work differs from these previous papers in that we analyze the LADAR data over a broad spatial region, autonomously classify and analyze the data, and show path-planning analysis.

The path-planning tools and building-identification algorithms shown in this paper have military applications (identifying strategic locations), civilian applications (disaster relief, identifying population distributions), and robotic (UAV, UGV) path-planning applications. In the next section, we describe GeoFetch, the GIS tool used in this work. In Section 3, we focus on the classification algorithms and data products that we inserted into the GeoFetch tool. Section 4 describes path-planning and LOS tools, and Section 5 shows the use of gesture control to manipulate the virtual globe.

2. GeoFetch

The data shown in this paper were taken over Haiti, shortly after the 2010 earthquake. The data are displayed in GeoFetch, a GIS virtual globe developed by Lincoln Laboratory. GeoFetch is built on the NASA Worldwind [10] platform, an open source GIS tool. We added features to GeoFetch that allow us to easily switch layers, to import visual imagery, and to make external calls via a UDP server. We also added the capability for GeoFetch to use the LADAR data as its digital elevation data, so that the GIS tool has much higher-resolution elevation data than typically found in GIS tools. Having the digital elevation data allows the user to precisely measure the height at every position, and to inspect the 3D model from different angles.
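The external-call mechanism mentioned above can be illustrated with a short sketch. The paper states only that GeoFetch accepts external calls via a UDP server; the port number and the command syntax below are hypothetical, chosen purely for illustration.

```python
import socket

def send_geofetch_command(message, host="127.0.0.1", port=9999):
    """Send a UDP datagram to a (hypothetical) GeoFetch UDP server.
    The port and the command grammar are assumptions for this sketch."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode("utf-8"), (host, port))

# Example (hypothetical command syntax):
# send_geofetch_command("SET_LAYER buildings ON")
```

Because UDP is connectionless, the caller need not coordinate startup order with the GIS process, which simplifies cross-language integration.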

In Fig. 2(a), we show an example of GeoFetch displaying satellite imagery mapped onto the LADAR data. The imagery shows the damage to the National Palace, where the President of Haiti lives. The damage to the front of the palace as well as the middle section can be seen as discontinuities in the LADAR imagery. Figures 2(b) and 2(c) show photographs of the palace that were taken at the time, for comparison.

 

Fig. 2 The National Palace in Port-Au-Prince, Haiti was destroyed during the earthquake. (a) LADAR imagery with satellite imagery wrapped over it. Satellite imagery credit: DigitalGlobe (b) Aerial imagery. Image credit: Logan Abassi/UNDP, licensed under Creative Commons License. (c) Photograph from the ground. Image credit: Logan Abassi/UNDP, licensed under Creative Commons License. Approved for public release 13-387.


3. Building classification analyses

The LADAR data are initially represented by a point cloud, and then the point cloud is further processed into a DEM (digital elevation model) with 30 cm transverse resolution [1] and less than 7.5 cm range resolution [13]. The LADAR data have much higher resolution than the DEMs that are typically used in GIS programs. These LADAR data sets are notable not only for their high resolution, but also because the data sets span regions of thousands of km2. We developed a suite of algorithms to support the identification and classification of objects of interest in the LADAR data, consisting of the main functions: segmentation and ground plane estimation, feature calculation, and object classification. We show results from applying these algorithms, including inserting the classified features into a GIS tool, and we quantify the algorithm performance. Finally, buildings are classified into neighborhoods, which can themselves be characterized.

3.1 Segmentation and ground plane estimation

The first step in the processing was the segmentation of a region into distinct objects. Edges between objects of interest are identified by height changes across neighboring pixels (elevation derivatives) that exceed a fixed threshold. Separate regions isolated by edge boundaries are culled by a set of morphological rules and then preliminarily labeled as separate objects, with the object of largest area labeled as the ground. Mean filtering is applied to produce a smoothed estimate of the ground plane over the entire region, enabling the calculation of the height above ground for each pixel. In Fig. 3, we show a region of Haiti of size approximately 1.5 km2. Figure 3 (left) is colored according to the height of each pixel. In Fig. 3 (right), we have segmented the region, and the colors indicate different objects in the scene.
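The segmentation step above can be sketched in a few lines: neighboring pixels whose height difference stays below an edge threshold are grouped into one object, and the largest resulting region is taken as the ground. This is a simplified stdlib-only sketch; the threshold value and the 4-neighbor connectivity are assumptions, and the morphological culling rules of the actual pipeline are omitted.

```python
from collections import deque

def segment_dem(dem, edge_thresh=0.5):
    """Label connected regions of a DEM (2D list of heights, metres).
    Two 4-neighbour pixels join the same object when their height
    difference is below edge_thresh; the largest region is the ground."""
    rows, cols = len(dem), len(dem[0])
    labels = [[0] * cols for _ in range(rows)]
    sizes = {}
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c]:
                continue                       # already assigned to an object
            next_label += 1
            labels[r][c] = next_label
            queue, size = deque([(r, c)]), 1
            while queue:                       # flood fill within edge bounds
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not labels[ny][nx]
                            and abs(dem[ny][nx] - dem[y][x]) < edge_thresh):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
                        size += 1
            sizes[next_label] = size
    ground = max(sizes, key=sizes.get)         # largest area labeled as ground
    return labels, ground
```

Subtracting a smoothed ground estimate from each pixel then gives height above ground, as described in the text.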

 

Fig. 3 (left) The region is colored according to height. (right) The region has been segmented, and the colors indicate different objects. Analyses based on LADAR data. Approved for public release 13-387.


As a preconditioning step to aid the subsequent object-classifying routine, a refining set of connectivity rules is reapplied to the ground-debiased data in order to cluster, or further segment, preliminary shapes into whole, classifiable physical objects. For example, a small, relatively regular shape abutting a larger shape of similar characteristics along a significant common edge is likely part of the same physical object. Conversely, a low depression with appropriate characteristics engulfed in a larger shape would be dismissed as a courtyard and excised accordingly from the final object database. An object-classifying routine is then executed on these refined objects.

3.2 Feature extractions and object classification

Through much experimentation across data sets with diverse terrain, we developed a sufficient collection of 16 features to characterize each object, with the primary objective of separating man-made objects such as buildings from natural objects such as vegetation, as well as eliminating false natural object detections commonly occurring in rugged mountainous regions. Feature categories included (i) summary statistics on object dimensions, such as mean and variance, (ii) two-dimensional shape descriptors, such as area, perimeter, and eccentricity, and (iii) surface smoothness estimators, such as least squares plane fit error. An initial attempt to intuitively specify static, fixed, heuristic classification rules using these features proved unsatisfactory, so a supervised learning approach was adopted.

To develop a training set of labeled truth data, the LADAR-derived features were displayed on top of GIS data, in the GeoFetch virtual globe. Human expertise was employed to mark data sets with putative truth categories including clutter, trees, and man-made structures, primarily by consulting the corresponding visible data. The marked data were then used to train a supervised classifier. We selected the random forest classifier [11, 12] for supervised classification. The random forest algorithm begins with binary decision trees, comprising a set of decisions. At each branch point, the evaluated decision governs whether the tree is traversed to the left or right, and ultimately which categorical leaf node is reported. For example, a branch point could have the test and response: “is the area larger than 5 m2? If yes, then go to the left branch; if no go to the right branch”. If the decision tree is well designed, then each path in the decision tree leads to a distinct class of object, with dependencies implicit in the structure and order of the tree. Provided that the algorithm is supplied with the classified groups and their associated features, such a decision tree can be constructed automatically through entropy minimization techniques.

In the random forest approach, numerous parallel decision trees are constructed, each with a different subset of features. For example, one decision tree may consider just the area of a region and the height variance, while another may encompass only surface smoothness and perimeter. The features associated with each decision tree are allocated randomly, and the collection of all the trees is referred to as the random forest. After all decision trees are optimized using the supervised data, the trees can be used to classify data on which the system has not been trained. New data are evaluated according to each decision tree, with a majority vote exercised to elect the final object class designation. The random forest has proven, elsewhere and in our study, to be one of the more effective classifier algorithms.
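The voting scheme above can be made concrete with a toy forest. Each hand-written "tree" below sees only a subset of the features, and the class with the most votes wins; the feature names, thresholds, and class labels are illustrative assumptions, not the trained values from the paper (a real forest would learn its trees by entropy minimization, as described above).

```python
from collections import Counter

# Each "tree" sees only a subset of the 16 features; the thresholds
# here are illustrative, not trained values.
def tree_area_height(f):
    if f["area_m2"] > 5.0:
        return "building" if f["height_var"] < 1.0 else "tree"
    return "clutter"

def tree_smoothness(f):
    # Buildings tend to have small least-squares plane-fit error.
    return "building" if f["plane_fit_err"] < 0.2 else "tree"

def tree_shape(f):
    if f["area_m2"] <= 5.0:
        return "clutter"
    return "building" if f["eccentricity"] < 0.9 else "tree"

FOREST = [tree_area_height, tree_smoothness, tree_shape]

def classify(features):
    """Majority vote across the trees of the (toy) forest."""
    votes = Counter(tree(features) for tree in FOREST)
    return votes.most_common(1)[0][0]
```

A flat, large, regular object collects "building" votes from every tree, while a rough, eccentric canopy is outvoted toward "tree".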

To further enhance accuracy, an interactive interface was included to permit a human analyst to correct any perceived misclassifications made by the algorithm; accordingly, the classifier would then be automatically retrained, with updated predictions available within seconds. After a sizable database of human-labeled fiduciary data sets had been produced, the algorithm was finally run over the entire LADAR data set, comprising thousands of km2.

To demonstrate the algorithms, we show examples of the classified data (where classification refers to the algorithm and not a security indicator) overlaid onto the GIS data (data from Port-au-Prince, Haiti). Figure 4 shows a northwest region of the city. Some of the navigation tools from the GeoFetch GIS are visible at the bottom left of this screenshot. We analyzed a large geographical region, over which the user could rapidly zoom or scan to locate areas of interest. The LADAR-derived features and visible data are misaligned by approximately 3 m. The misalignment is small enough that overlaying the two data sets is still informative, yet large enough that it is clear the alignment is imperfect. In other data sets, we have seen alignment between the LADAR and visible imagery to within 1 m, so there is variation in the alignment quality. Buildings are colored blue, and the saturation of the building color indicates the building height. The cyan lines in Fig. 4 are roads, imported from the OpenStreetMaps [14] database. This use of OpenStreetMaps illustrates how we can meld various and distinct sources of data in the GIS tool.

 

Fig. 4 Snapshot from GeoFetch tool. Roads (cyan), buildings (blue), tall buildings (red) are shown in this image. Feature analysis based on LADAR data. Satellite imagery credit: DigitalGlobe. Approved for public release 13-387.


Tall buildings are often of particular interest; the building height may indicate increased population density, or that the buildings have religious, business, or military significance. Since height is a feature of each building in our database, we can filter by selecting buildings that are higher than some threshold; in this paper, we defined tall buildings as taller than 7 m, and these tall buildings are colored red. As an example of tall buildings, a complex of liquid storage tanks appears at the top of Fig. 4. Even at broad zoom levels, these storage tanks stand out due to the red coloring. The classifier also recognizes walls as a separate class from buildings. Vegetation, mountainous features, cars, and the catch-all category of “other” round out the main categories for which the classifier is responsible. Roads, water, and other previously satellite-surveyed categories are handled external to the classifier.
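The height-threshold query above amounts to a one-line filter over the building database. The record structure (a list of dicts with a `height_m` key) is an assumption for this sketch; only the 7 m threshold comes from the text.

```python
TALL_THRESHOLD_M = 7.0  # from the text: "tall" means taller than 7 m

def tall_buildings(buildings, threshold=TALL_THRESHOLD_M):
    """Filter building records (dicts with a hypothetical 'height_m' key)
    to those exceeding the height threshold, e.g. for red highlighting."""
    return [b for b in buildings if b["height_m"] > threshold]
```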

In Fig. 5 (left), we show the Haiti National Penitentiary, which is located in the heart of Port-au-Prince (the capital of Haiti). The threshold-exceeding structures within the Penitentiary grounds, automatically highlighted in red, may indicate guard towers. In Fig. 5 (right), we show another snapshot, in a dense urban region of Port-Au-Prince.

 

Fig. 5 (left) The lower right of this image shows the Haiti National Penitentiary. (right) Another area in Port-au-Prince. Feature analysis based on LADAR data. Satellite imagery credit: DigitalGlobe. Approved for public release 13-387.


3.3 Classifier performance analysis

In this section, we study the performance of the building classifier. Since we did not have the absolute truth data as to whether objects were buildings or not, we compared the automated classifier with the results from a human observer, taking the human observer as “truth”. The classifier clearly does well in areas where objects are well defined, large, and separated, as in the upper part of Fig. 4. Therefore, we chose a more challenging urban area to consider. Figure 6 (left) shows the satellite imagery for a region in Haiti, and in Fig. 6 (right) we have overlaid the buildings that were identified by the automated classifier from the LADAR data onto the satellite imagery.

 

Fig. 6 (left) Satellite imagery of a block in Haiti. (right) The automated classifier has identified buildings and overlaid them upon the satellite imagery from Fig. 6 (left). The buildings are randomly colored so that close buildings can be distinguished. Feature analysis based on LADAR data. Satellite imagery credit: DigitalGlobe. Approved for public release 13-387.


  • In a complex urban region, we compared the performance of a human versus the automated classifier, when both human and classifier were given only the LADAR data. The classifier correctly identified 100% of the buildings identified by the human. The classifier identified an additional 6 buildings that the human identified as trees or ground clutter, giving a comparative false alarm rate of 12%.
  • In the same complex urban region, we compared the performance of a human versus the automated classifier, where the human was presented with the satellite imagery, while the classifier still considered only the LADAR data. The classifier identified 88% of the buildings identified by the human, with a 20% false alarm rate.

The automated classification algorithm could be further improved, for example through more sensitive detection of trees and irregular roof profiles, to achieve a lower false alarm rate. Human performance suggests that fusing the LADAR data with the satellite data would also improve the classifier performance.

For many LADAR applications, the analysis would not stop at classification alone, but would build upon it with higher-level filtering queries: for example, finding a region with tall buildings, or finding regions with buildings that may be prone to flooding due to the local geography. For these analyses, the performance reported above may be sufficient.

3.4 Building clusters and Matlab tools

Using the results of our building analysis, we can computationally associate clusters of buildings, or neighborhoods. We define a neighborhood as a group of buildings such that each building is within a distance D of the other surrounding buildings in the neighborhood. The value D is defined as a fraction of the mean spacing between buildings in the region. Statistical values for the average spacing between buildings within regions were calculated. The average spacing was then used to determine a dilation factor applied to detected structures, to compute perimeters around the structures. Regional islands are formed by merging the structure perimeters, thereby generating neighborhoods, as shown in Fig. 7.
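The merge step above can be sketched with a union-find over building positions: two buildings join the same neighborhood when a chain of buildings links them with successive spacings below D. This stdlib sketch clusters centroids directly rather than dilating and merging footprint perimeters, which is a simplifying assumption.

```python
from math import hypot

def cluster_neighborhoods(centroids, d_max):
    """Group building centroids (x, y) into neighbourhoods: buildings
    share a neighbourhood when chained by spacings <= d_max (a simplified
    stand-in for the footprint dilation-and-merge described above)."""
    parent = list(range(len(centroids)))

    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(centroids):
        for j in range(i + 1, len(centroids)):
            xj, yj = centroids[j]
            if hypot(xi - xj, yi - yj) <= d_max:
                parent[find(i)] = find(j)      # merge the two clusters

    groups = {}
    for i in range(len(centroids)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

In practice D would be set from the region's mean building spacing, as the text describes.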

 

Fig. 7 The green regions represent clusters of buildings, or neighborhoods. Feature analysis based on LADAR data. Approved for public release 13-387.


 

Fig. 8 When the user clicks on a building or region in the GIS tool, a window pops up (lower right) with detailed information about the object that the user clicked on. Feature analysis based on LADAR data. Satellite imagery credit: DigitalGlobe. Approved for public release 13-387.


4. Path planning and line-of-sight analyses

4.1 Discussion of A* path-finding algorithm

In this section, we describe path planning and LOS analysis, with a focus on military applications. To find the fastest path, we apply a modified A* algorithm, based on the A* algorithm [15], that is extensively used in path planning and graph traversal.

To compute the distance from the start-point to the end-point, the A* algorithm progressively builds an approximation to a global cost estimate (termed the f-value) based on a local, incremental cost criterion. The cost associated with the start-point is assigned based on the Manhattan distance from the start-point to the end-point (the Manhattan distance between two points is measured along a grid-like path rather than along the straight diagonal). Then, the cost associated with the nearest neighbors is computed. The cost of a node (the f-value) is the cost from the start-point to that node (referred to as the g-value) added to the estimated cost from the node to the end-point (referred to as the h-value). At each evaluated point, the algorithm greedily [16] chooses the neighbor or neighbors with the lowest f-value and proceeds to evaluate the cost of their neighbors. In this way, the algorithm proceeds until the lowest-cost path from start to finish is found. Once the lowest-cost path is found, the algorithm retraces each node to its parent, and thus determines the sequence of nodes corresponding to that path.

Nodes that were not elected (referred to as discarded nodes) have a longer path length than the lowest-cost path. Thus, A* is admissible: it is guaranteed to return the shortest path, since discarded nodes are known to have a higher cost than the path that the algorithm selects. The caveat is that to apply the A* algorithm we must sample the terrain, generating a finite set of nodes to be considered, so the A* algorithm generates the shortest path within the set of considered nodes.
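The standard A* loop described above can be sketched compactly on a cost grid, with the Manhattan distance as the h-value and `None` marking impassable cells (e.g. buildings). This is textbook A*, not the paper's modified pruned variant.

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 2D grid of per-cell step costs; None marks a no-go cell.
    Manhattan distance is the admissible heuristic (h-value); g is the
    accumulated cost from the start; f = g + h, as described above."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0.0, start)]       # entries: (f, g, node)
    g = {start: 0.0}
    parent = {start: None}
    while open_heap:
        f, gc, node = heapq.heappop(open_heap)
        if node == goal:                       # retrace parents to get path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dy, node[1] + dx)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            cost = grid[nxt[0]][nxt[1]]
            if cost is None:                   # impassable (e.g. a building)
                continue
            ng = gc + cost
            if ng < g.get(nxt, float("inf")):  # found a cheaper way here
                g[nxt] = ng
                parent[nxt] = node
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None                                # no path exists
```

Replacing the uniform step costs with LADAR-derived traversal times yields the terrain-aware planner of the next section.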

In our work, to decrease the required computational time, we employed a modified A* algorithm that further pruned the considerable set of possible nodes. In the pruning process, previously evaluated points that advanced the solution closer to the destination were favored over those that promoted retrograde progress; the latter were retired from consideration earlier, independent of any other cost constraint. We also modified the A* algorithm to take other factors into account in the path cost, such as terrain steepness and visibility. Our modified A* algorithm executes quickly, taking only a few seconds for the examples shown below.

4.2 Path finding incorporating LADAR-derived traversability

In this section and the following, we demonstrate how we incorporated traversability and line-of-sight calculations into the path-finding algorithm. First, we show the combination of the path-finding algorithm with traversability (as computed from the LADAR data). A direct path threading up a steep and rugged mountain incline might be considerably slower than a circuitous path that skirts the base of the mountain. Such terrain-based path finding can be thought of as an extension of the familiar GPS navigator, which finds the fastest road-based driving route. Using terrain traversability, we can now find the fastest walking route using one traversability rating, the fastest driving route through areas without roads using another, and the fastest on-road route using yet another. The user can also designate regions where, for example, roads are favored, while the remainder of the trip is planned with the stricter energy and LOS criteria.

We analyzed terrain traversability, using terrain slope and estimated travel speed, derived from the LADAR data. Figure 9 (left) shows estimated travel speed of a human and a vehicle, as a function of terrain slope. Figure 9 (middle) shows the elevation data for a sample region containing a building, and then Fig. 9 (right) shows the calculated traversability for a north-going vehicle through the region shown in Fig. 9 (middle). The buildings (identified from the algorithms shown previously) are added into the traversability calculation, as “no-go” zones; otherwise the algorithm may find a path through a building. We also include the roads database in the calculation, by setting a higher speed for travel on roads.
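A slope-to-speed model of the kind plotted in Fig. 9 (left) might be sketched as follows. The flat-ground speeds, slope limits, and exponential falloff (in the spirit of Tobler's hiking function) are placeholder assumptions, not the paper's actual curves; only the idea of a speed that decays with slope and a no-go cutoff comes from the text.

```python
import math

def travel_speed_kmh(slope_deg, mode="walk"):
    """Illustrative speed-vs-slope model. Returns 0 for impassably steep
    terrain (a no-go cell); all numbers here are assumed, not measured."""
    limits = {"walk": 45.0, "drive": 30.0}   # assumed max traversable slope, degrees
    base = {"walk": 5.0, "drive": 40.0}      # assumed flat-ground speed, km/h
    if abs(slope_deg) >= limits[mode]:
        return 0.0                           # too steep: impassable
    # Speed decays with slope (Tobler-like exponential falloff, assumed).
    return base[mode] * math.exp(-3.0 * abs(math.tan(math.radians(slope_deg))))
```

Buildings identified by the classifier would be injected as zero-speed cells, and road cells assigned a boosted speed, exactly as the text describes.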

 

Fig. 9 Traversability analysis. (left) Assumed walking/driving speeds as a function of terrain slope. (right) Calculated velocities and go/no-go regions over a small region. Feature analysis based on LADAR data. Approved for public release 13-387.


In Fig. 10 (left), we show a fast path (incorporating traversability) for a walking man to get between points A and B, over a sample patch of LADAR data. Note that the path does use the roads to minimize travel time. The traversability map turns out to be rather mundane, because the region we chose for analysis was relatively flat. In this case, the buildings are no-go zones, and the non-road terrain has an associated velocity of ~5 km/hr. We weighted the velocity improvement associated with roads by a factor of 2 (for a maximum of 10 km/hr) to account for the fact that walking on a road would probably be much faster than walking on dirt or brush. These path analyses could also be displayed using the GeoFetch GIS tool.

 

Fig. 10 (left) Black line indicates the fastest path from A to B. Image colored according to height. (right) Points visible from Point C are shaded red. This figure also shows the fastest path, incorporating traversability and covertness from an observer located at point C. Analysis based on LADAR data. Approved for public release 13-387.


4.3 Path finding incorporating traversability and LOS assessment based on viewshed (single known hostile observer) analysis

Traversability may not be the sole consideration in choosing a fast path. Expeditionary forces often seek a path that is not only fast, but also covert. For example, soldiers will avoid walking through an open flat field, because there is no protection from possible snipers. We use the LOS calculations to consider a variety of scenarios. For the scenario considered in this section, we consider how to find a path from point A to point B, while not being seen by a hostile observer located at point C.

The basic LOS calculation, given an observer point Y and an observed point X, gives a Boolean result that is true when mutual visibility exists. This LOS calculation includes an assumption about the height of points X and Y relative to the ground plane; we set the heights of points X and Y to 2 m, to represent a typical observer height. The viewshed calculation extends the LOS calculation over an area: a viewshed is the area that is visible to an observer located at a given point. When the visibility of a point is considered from multiple viewing points, the result is not binary, but the sum of the contributions.
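The Boolean LOS test above can be sketched by sampling the terrain along the sight line between the two raised endpoints; visibility fails wherever the ground rises above that line. The nearest-cell sampling and the sample count are simplifying assumptions (a production implementation would typically walk cells with a Bresenham traversal).

```python
def line_of_sight(dem, a, b, eye=2.0, samples=64):
    """Boolean LOS between cells a and b of a DEM (2D list of ground
    heights, metres), with both endpoints raised eye metres above ground
    as in the text. The terrain is sampled along the straight line between
    the endpoints (nearest-cell sampling; a simplified sketch)."""
    za = dem[a[0]][a[1]] + eye
    zb = dem[b[0]][b[1]] + eye
    for i in range(1, samples):
        t = i / samples
        r = round(a[0] + t * (b[0] - a[0]))
        c = round(a[1] + t * (b[1] - a[1]))
        sight_z = za + t * (zb - za)   # height of the sight line here
        if dem[r][c] > sight_z:
            return False               # terrain blocks the view
    return True
```

Running this test from one observer cell against every cell in an area yields that observer's viewshed.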

First we consider the problem of finding a path between points A and B, given that a hostile observer is located at point C. Figure 10 (right) shows the same spatial region as in Fig. 10 (left), and the points are still shaded according to elevation. But we have added a point C to Fig. 10 (right), to represent a hostile observer and points that are visible from point C are shaded red. In Fig. 10 (right), we show the fastest path from points A to B, incorporating terrain traversability, such that a hostile observer at point C would have low visibility of the path.

4.4 Path finding incorporating traversability and LOS analysis based on aggregate viewshed (find path of least visibility assuming enemies uniformly distributed)

In the previous section, we calculated a path to minimize the visibility, relative to an opponent known to be at point C. In this section, we assume that the opponent’s location(s) are unknown. Therefore we calculate a path that is least visible, with respect to an opponent that is uniformly distributed over the region. The visibility rating assigned to point A indicates the fraction of the local terrain that can be seen by an observer located at point A. A more formal definition of visibility rating is given below:

Consider a point A. We define R as the observer radius, the distance that an observer could reasonably see. We then find N, the number of points within a radius R of the point A. Of these N points, M points have LOS (line of sight) to the point A. We define the visibility rating of the point A as M/N. The visibility rating ranges from 0 to 1: if all points within radius R of A can see A, the visibility rating is 1; if none of these points can see A, the visibility rating is 0.
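The M/N definition above translates directly into code: count the cells within radius R of the point, test LOS to each, and divide. The sampled LOS test embedded here (with both endpoints raised 2 m, as in the text) is a simplified sketch; the sample count is an assumption.

```python
from math import hypot

def visibility_rating(dem, point, radius, eye=2.0):
    """Visibility rating M/N from the definition above: of the N cells
    within radius of the point, M have line-of-sight to it."""
    def los(a, b):                     # sampled sight-line test (sketch)
        za = dem[a[0]][a[1]] + eye
        zb = dem[b[0]][b[1]] + eye
        for i in range(1, 32):
            t = i / 32
            r = round(a[0] + t * (b[0] - a[0]))
            c = round(a[1] + t * (b[1] - a[1]))
            if dem[r][c] > za + t * (zb - za):
                return False
        return True

    n = m = 0
    for r in range(len(dem)):
        for c in range(len(dem[0])):
            if (r, c) == point or hypot(r - point[0], c - point[1]) > radius:
                continue
            n += 1                     # a candidate observer cell
            if los(point, (r, c)):
                m += 1                 # ...that can actually see the point
    return m / n if n else 0.0
```

On flat terrain every surrounding cell sees the point (rating 1); terrain or buildings that block sight lines pull the rating toward 0.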

The visibility rating shows how visible an area is to those around it, and enables us to identify potential hiding places or ideal sniper locations that may not have been obvious beforehand. Sharp edges between high-visibility and low-visibility regions could represent ideal areas from which to launch a surprise attack, providing a way to move in and out of cover.

We refer to the process that determines the visibility rating as the aggregate viewshed. Dividing the number of times a point is observed by the number of times it is surveyed produces a normalized value between 0 and 1, the visibility rating. A single viewshed analysis represents what one specific point can and cannot see; it can determine the visibility rating of that single point, but it cannot provide visibility ratings over a large area.

In our aggregate viewshed calculation, we chose a visibility radius limit of 150 m for the viewshed (this radius represents the distance that an observer can see). This radius should be adjusted depending on the expected sensor range in the environment in which the analysis is performed. To maintain performance, and because of diminishing returns, we downsampled to a resolution of 20 m per viewshed. Because viewshed analysis is computationally intensive even at reduced resolution, we developed parallel code to compute the aggregate viewshed over a large area on a cluster of 256 parallel computers.

In Fig. 11 (left), we show the visibility rating for the same region as shown in Fig. 10. Red areas denote regions with a high visibility rating, while blue areas denote a low visibility rating. In the calculation of Fig. 11, we assumed that hostile observers are only located in the region shown in Fig. 11, and not in the neighboring regions. The viewshed highlights potential crests and rivulets that would be ideal hiding or sniping locations. These areas might not be obvious or visible to a user looking across a broad terrain or image product. Urban areas are notable regions of low visibility because buildings effectively block long-range views. Figure 11 (right) shows a path from A to B, which incorporates traversability and covertness (where covertness means having a low visibility). This requires a weighting function that balances the utility of fastest route against the importance of covertness. Covert routes consistently trace markedly different paths than traversable routes.

 

Fig. 11 (left) Visibility map. This map assumes a hostile observer is equally likely to be anywhere in the region; we refer to this as the aggregated line-of-sight calculation. In this map, the red areas have the highest visibility rating, and the blue areas have the lowest visibility. (right) Fastest path from A to B, incorporating the aggregated LOS calculation. The image is colored according to height, which makes it easier to recognize features than in the visibility map image. Analysis based on LADAR data. Approved for public release 13-387.


There are many other applications for LOS calculations on the LADAR data beyond the examples shown here. Above, we treated the case where the location of a single hostile observer is known; using the aggregate viewshed calculation, we then considered multiple observers uniformly distributed over a region. Conversely, the calculation can be turned around to identify the optimal position for a spotter or sniper, i.e., the location commanding the maximal view of the region. Another analysis identifies places on the edge of visibility, where a soldier would have a good view of the region but could also quickly find shelter from hostile observation or attack. These analyses could be incorporated into a tool that generates paths vetted for safety, with uses in search-and-rescue as well as expeditionary missions.
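
Both of these derived analyses reduce to simple queries over per-cell maps: spotter placement is the argmax of a view-count map (how many cells each candidate location sees), and edge-of-visibility cells combine a wide view with an adjacent sheltered cell. The sketch below uses small hypothetical arrays and thresholds of our own choosing; in practice the maps would come from the aggregate viewshed over the LADAR elevation data.

```python
import numpy as np

def best_spotter(view_count):
    """Cell from which the most other cells are visible."""
    return np.unravel_index(np.argmax(view_count), view_count.shape)

def edge_of_visibility(view_count, exposure, good_view, low_exposure):
    """Cells with a wide view (view_count >= good_view) that touch a
    sheltered cell (exposure <= low_exposure), so an observer can duck
    out of hostile line of sight quickly."""
    rows, cols = view_count.shape
    result = []
    for r in range(rows):
        for c in range(cols):
            if view_count[r, c] < good_view:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and exposure[nr, nc] <= low_exposure):
                    result.append((r, c))
                    break
    return result
```

Note the asymmetry: a good spotter cell maximizes what it sees (outgoing view), while shelter minimizes what sees it (incoming exposure); the two maps coincide only when observers and targets share the same height assumptions.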

5. Gesture control of GIS tool

Gesture control is a developing trend in computing and robotics, sparked most recently by the introduction of the low-cost Microsoft Kinect [17]. Instead of using a mouse, the user controls panning, zooming, and other features of the GIS tool through a range of arm and body motions. The advantage of this organic, intuitive approach is that the user never has to take focus away from the screen. Gesture control may be especially valuable for this LADAR project, where the GIS tool contains multiple 2D and 3D layers.

To implement gesture control, we integrated the Microsoft Kinect with the GeoFetch GIS tool. To avoid cross-platform incompatibilities, the integration occurs via keystroke mapping: using the Kinect software (we used Brekel Kinect [18] as well as FAAST, the Flexible Action and Articulated Skeleton Toolkit [19]), we developed the system so that each gesture is translated into a keystroke that controls GeoFetch. Gestures for controlling the virtual globe include, for example, drawing the left arm inward toward the chest to zoom in. In Fig. 12, the left panel shows the GIS screen and the right panel shows the Kinect’s perception of the user. We plan to investigate this gesture-recognition technology for use in other virtual immersive environments [20].
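
The keystroke-mapping bridge can be sketched as a small classifier over skeleton frames plus an edge-triggered emitter. This is a mock of our own: the real system obtained the skeleton stream from Brekel Kinect / FAAST and injected keystrokes into GeoFetch, so the joint names, thresholds, and key bindings below are illustrative assumptions.

```python
def classify_gesture(skeleton):
    """Map one skeleton frame (joint name -> (x, y, z) in metres, z toward
    the sensor) to a gesture name, or None. Thresholds are assumptions."""
    lh, chest = skeleton["left_hand"], skeleton["chest"]
    # Left arm drawn in toward the chest -> zoom in (the paper's example).
    if abs(lh[2] - chest[2]) < 0.25 and abs(lh[1] - chest[1]) < 0.2:
        return "zoom_in"
    if lh[2] - chest[2] > 0.5:        # arm extended well forward -> zoom out
        return "zoom_out"
    return None

# Hypothetical keystrokes the GIS tool is assumed to understand.
KEYMAP = {"zoom_in": "+", "zoom_out": "-"}

def gesture_to_keystrokes(frames, send_key):
    """Run the classifier over a frame stream, emitting one keystroke per
    gesture onset (edge-triggered, so a held pose does not auto-repeat)."""
    last = None
    for frame in frames:
        g = classify_gesture(frame)
        if g is not None and g != last:
            send_key(KEYMAP[g])
        last = g
```

In deployment, `send_key` would be the OS-level keystroke injector supplied by the Brekel/FAAST tooling; the edge-triggering keeps a held gesture from flooding the GIS with repeated commands.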

 

Fig. 12 GeoFetch with software for gesture control of the virtual globe. Feature analysis based on LADAR data. Satellite imagery credit: DigitalGlobe. Approved for public release 13-387.


6. Conclusions

In this paper, we explored various analyses of a high-resolution LADAR data set, along with methods to display and interact with it. The analyses included building classification, demographic analysis, path finding, and line-of-sight analysis. While building classification has been demonstrated from satellite imagery alone, such analysis can be misled by reflectivity variations. Furthermore, accurate determination of structure height requires the LADAR data, or another source of 3D imagery.

We developed the GeoFetch GIS tool to display the LADAR data and the analyses derived from it. GeoFetch can use the LADAR data as a source of elevation data, allowing us to cleanly map visual imagery onto the LADAR surface. Such mapping can be displayed over broad areas in the GIS and could have important applications in disaster relief efforts.

GeoFetch also displayed the feature-analysis layers shown in this paper. The user can find a town, select a building within that town by applying various filters, and then generate a safe path (with limited visibility and good traversability) to approach the building. We also explored gesture-based control of the GIS, which enables virtual immersive environments in which the user can walk, drive, or fly through environmentally customized highlight overlays. Our group has been investigating robotic exploration of outdoor terrain [21], and in future work we will explore application of this LADAR tool to robotic path planning.

Acknowledgments

We thank Evan Cull for his analysis of path finding algorithms. This work is sponsored by the Department of the Air Force under Air Force Contract #FA8721-05-C-0002. Opinions, interpretations, conclusions and recommendations are those of the author and are not necessarily endorsed by the United States Government.

References and links

1. A. L. Neuenschwander, M. M. Crawford, L. A. Magruder, C. A. Weed, R. Cannata, D. Fried, R. Knowlton, and R. Heinrichs, “Terrain classification of LADAR data over Haitian urban environments using a lower envelope follower and adaptive gradient operator,” Proc. SPIE 7684, 768408 (2010).

2. The display of Fig. 1 was generated with the Eyeglass software, developed by Ross Anderson, MIT Lincoln Laboratory.

3. A. Vasile, F. R. Waugh, D. Greisokh, and R. M. Heinrichs, “Automatic alignment of color imagery onto 3D laser radar data,” 35th Applied Imagery and Pattern Recognition Workshop (2006).

4. Matlab is a product of MathWorks, http://www.mathworks.com

5. P. Cho, “3D organization of 2D urban imagery,” IEEE 2008 Geosci. Remote Sensing Symp., 2 (2008).

6. R. Madhavan and T. Hong, “Robust detection and recognition of buildings in urban environments from LADAR data,” 33rd Applied Imagery Pattern Recognition Workshop (2004).

7. Q. Wang, L. Wang, and J. Sun, “Rotation-invariant target recognition in LADAR range imagery using model matching approach,” Opt. Express 18(15), 15349–15360 (2010).

8. N. Rackliffe, H. A. Yanco, and J. Casper, “Using geographic information systems (GIS) for UAV landings and UGV navigation,” Technologies for Practical Robot Applications (TEPRA) (2001).

9. J. B. Campbell, “GloVis as a resource for teaching geographic content and concepts,” J. Geog. 106(6) (2007).

10. D. G. Bell, F. Kuehnel, C. Maxwell, R. Kim, K. Kasraie, T. Gaskins, P. Hogan, and J. Coughlan, “NASA World Wind: opensource GIS for mission operations,” 2007 IEEE Aero. Conf., (2007).

11. L. Breiman, “Random forests,” Mach. Learn. 45(1), 5–32 (2001).

12. MATLAB interface by Abhishek Jaiantilal (http://code.google.com/p/randomforest-matlab/), C code by Andy Liaw and Matthew Wiener, based on FORTRAN code by Leo Breiman and Adele Cutler.

13. R. M. Marino, W. R. Davis, G. C. Rich, J. L. McLaughlin, E. I. Lee, B. M. Stanley, J. W. Burnside, G. S. Rowe, R. E. Hatch, T. E. Square, L. J. Skelly, M. O’Brien, A. Vasile, and R. M. Heinrichs, “High-resolution 3D imaging laser radar flight test experiments,” Proc. SPIE 5791, Laser Radar Technology and Applications X (2005).

14. OpenStreetMaps, http://www.openstreetmaps.org

15. P. E. Hart, N. J. Nilsson, and B. Raphael, “A formal basis for the heuristic determination of minimum cost paths,” IEEE Transactions on Systems Science and Cybernetics SSC-4(2), 100–107 (1968).

16. T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms (MIT Press, 2009).

17. Microsoft Kinect, http://www.xbox.com/en-US/KINECT

18. Brekel Kinect - Tools for Kinect, http://www.breckel.com

19. FAAST- Flexible Action and Articulated Skeleton Toolkit, http://projects.ict.usc.edu/mxr/faast/

20. R. Kehl and L. Van Gool, “Real-time pointing gesture recognition for an immersive environment,” Proc. Sixth IEEE Int. Conf. on Automatic Face and Gesture Recognition (2004).

21. M. R. Fetterman, T. Hughes, N. Armstrong-Crews, C. Barbu, K. Cole, R. Freking, K. Hood, J. Lacirignola, M. McLarney, A. Myne, S. Relyea, T. Vian, S. Vogl, and Z. Weber, “Distributed multi-modal sensor system for searching a foliage-covered region,” IEEE Technologies for Practical Robot Applications (TEPRA), (2011).


