There are many areas in the United States and throughout the world that are contaminated or potentially contaminated with unexploded ordnance (UXO). In the United States alone, there are 1,900 Formerly Used Defense Sites (FUDS) and 130 Base Realignment and Closure (BRAC) installations that need to be cleared of UXO. Using current technologies, the cost of identifying and disposing of UXO in the United States is estimated to range up to $500 billion. Site-specific clearance costs vary from $400/acre for surface UXO to $1.4 million/acre for subsurface UXO. These approaches, however, usually require significant amounts of human analyst time; those labor costs, although a necessary part of ongoing demonstrations, are not factored into the figures above. There is thus a clear need to remediate UXO-contaminated lands effectively and cost-efficiently using automated procedures, rendering them safe for their current or intended civilian uses. The development of new UXO detection technologies with improved data analysis has been identified as a high-priority requirement for over a decade.
The objective of this work was to develop methodologies that will allow the human analyst to be removed from the processing loop. It has been shown in a number of recent demonstrations that when the most skilled practitioners process geophysical data, select data chips for analysis, select features for classification, select one of a suite of classifiers, and manually tune the classifier boundaries, excellent classification performance can be achieved. This project aimed to develop techniques to improve target characterization and reduce classifier sensitivity to imprecision in the target characterizations, thereby reducing the need for an expert human analyst.
The technical approach focused on two main areas of research: 1) robust automated model inversion and 2) robust target classification with limited training data. For model inversion, the efficacy of information-theoretic measures was investigated; specifically, Fisher Information was explored as a mechanism to select from among multiple candidate feature sets that all model the measured data similarly well. Multiple instance learning, a machine learning technique that can learn the features indicative of the class of interest even when those features are not present in every measurement of that class, was also investigated. With this more sophisticated machine learning approach to classification, a simpler and potentially more robust inversion can be implemented. Classifier robustness with limited training data was investigated through sensitivity studies: several different classifiers were considered, along with techniques that modify the training data set by removing training points that may be less informative to the classifier. In addition, the potential impacts of the cross-validation method were investigated.
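The Fisher Information idea can be sketched as follows (an illustrative assumption only; the forward model, function names, and D-optimality scoring below are not the project's actual implementation). Under additive Gaussian noise, the Fisher Information Matrix reduces to J^T J / sigma^2 for the model Jacobian J, and candidate parameter sets that fit the measured data comparably well can be ranked by its log-determinant, favoring the candidate the data constrain most tightly:

```python
import numpy as np

def fisher_information(forward_model, theta, sigma=1.0, eps=1e-6):
    """Approximate the Fisher Information Matrix for a forward model with
    additive Gaussian noise: FIM = J^T J / sigma^2, where J is the
    finite-difference Jacobian of the model output w.r.t. theta."""
    y0 = forward_model(theta)
    J = np.zeros((y0.size, theta.size))
    for i in range(theta.size):
        t = theta.copy()
        t[i] += eps
        J[:, i] = (forward_model(t) - y0) / eps
    return J.T @ J / sigma**2

def select_candidate(forward_model, candidates, sigma=1.0):
    """Among candidate parameter sets that fit similarly well, pick the
    one whose FIM has the largest log-determinant (D-optimality)."""
    scores = [np.linalg.slogdet(fisher_information(forward_model, c, sigma))[1]
              for c in candidates]
    return candidates[int(np.argmax(scores))]
```

For a linear model the finite-difference Jacobian is exact, so the computed FIM equals X^T X / sigma^2; for nonlinear models the selection favors parameter regions where the data are informative about the parameters.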
The classifier sensitivity studies revealed that the more robust classifiers tended to have decision surfaces that gradually transition from decision statistics that are strongly indicative of UXO to decision statistics that are strongly indicative of clutter. These classifiers tended to produce moderate decision statistics in the vicinity of the UXO clusters, thereby allowing test UXO that may have features somewhat different from the training UXO to be assigned decision statistics that are not strongly indicative of clutter. The model inversion investigations revealed that although information-theoretic approaches do provide some benefit, they are not necessarily consistent in doing so. Multiple instance learning, however, appears to provide a substantial computational benefit, in that it can attain performance similar to that obtained with the full model inversion, but with a small fraction of the computation time.
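The gradually transitioning decision surfaces described above can be illustrated with a simple kernel-density decision statistic (a hypothetical sketch, not one of the classifiers studied in this project). Because the statistic decays smoothly with distance from the UXO training cluster, a test UXO whose features fall somewhat outside that cluster still receives a moderate value rather than one strongly indicative of clutter:

```python
import numpy as np

def soft_decision_statistic(x, uxo_feats, clutter_feats, bandwidth=1.0):
    """Decision statistic in [0, 1] (higher = more UXO-like): the ratio of
    Gaussian-kernel density under the UXO training features to the total
    density. It transitions gradually between the two clusters instead of
    switching abruptly at a hard boundary."""
    def kde(feats):
        d2 = np.sum((feats - x) ** 2, axis=1)
        return np.exp(-d2 / (2 * bandwidth ** 2)).sum()
    p_uxo, p_clut = kde(uxo_feats), kde(clutter_feats)
    return p_uxo / (p_uxo + p_clut + 1e-12)
```

For example, with UXO training features clustered near the origin and clutter features clustered far away, a test point slightly outside the UXO cluster still scores well above 0.5, while a point midway between the clusters scores near 0.5 rather than being pushed to either extreme.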
Given a performance goal of minimizing the probability of false alarm (PFA) at a probability of detection (PD) of 1, there are two important aims: 1) ensuring consistent UXO characterization via features, and 2) ensuring the chosen classifier is insensitive to UXO target features that may lie somewhat outside the cluster of most UXO features. The first aim ensures that when UXO are characterized in training, that characterization can be repeated in testing, even if site conditions vary. The second aim ensures that if, for some reason, a UXO target's characterization is not completely consistent with previously observed UXO, the classifier still produces a decision statistic that allows the target to be classified as UXO (i.e., one that is not strongly indicative of clutter). Both aims reduce risk: the first improves the quality of UXO characterization via features, and the second reduces classifier sensitivity to the precision of estimated UXO features.
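The PFA-at-PD=1 metric itself is straightforward to compute from scored, labeled test data (a minimal sketch; the function and variable names are illustrative): set the threshold at the lowest UXO decision statistic so that every UXO is detected, then count the fraction of clutter items at or above that threshold.

```python
import numpy as np

def pfa_at_pd1(stats_uxo, stats_clutter):
    """PFA when the threshold is chosen so that PD = 1: threshold at the
    minimum UXO decision statistic (every UXO is declared a detection),
    then report the fraction of clutter at or above that threshold."""
    thresh = np.min(stats_uxo)
    return float(np.mean(np.asarray(stats_clutter) >= thresh))
```

For example, with UXO statistics [0.9, 0.8, 0.6] and clutter statistics [0.1, 0.2, 0.7, 0.5], the threshold is 0.6 and one of the four clutter items exceeds it, giving PFA = 0.25.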