While the last decade has seen many advances in the automated classification of above-surface features using remote sensing data, progress in detecting underground features has lagged. Technologies for detecting such features, including ground-penetrating radar, electrical resistivity, and magnetometry, do exist, but methods for feature extraction and identification still depend largely on the experience of the instrument operator.
One problem has been creating approaches that can handle complex signals. Ground-penetrating radar (GPR), for instance, often produces ambiguous signals with substantial noise interference relative to the feature one wants to identify. One approach has been to fit approximating polynomials to the recorded signals and use the derived coefficients as inputs to a neural network model. This technique can reduce noise and differentiate signals that follow clear patterns from the larger background signal. Differentiating signals based on a small set of fitted coefficients is one way to simplify the data and better separate signal classes. Another approach is to use a multilayer perceptron with a nonlinear activation function that transforms the data; this is effectively a similar technique but uses different transform functions than other neural network models. Applications of this approach include distinguishing the thickness of underground structures from the surrounding sediments and soil.
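The coefficient-based idea above can be sketched in a few lines. In this illustration (the traces, noise levels, and degree are made-up values, not from any cited study), a low-order polynomial fit compresses a noisy trace into a handful of coefficients; a trace containing a reflection retains a much larger coefficient vector than pure background noise, which is what makes the coefficients usable as classifier inputs.

```python
import numpy as np

def poly_features(trace, degree=4):
    """Fit a low-order polynomial to a 1-D signal trace and
    return its coefficients as a compact feature vector."""
    x = np.linspace(-1.0, 1.0, len(trace))
    return np.polyfit(x, trace, degree)

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 256)

# Hypothetical GPR-like traces: a smooth reflection "feature" vs. flat
# background, both buried in the same level of noise.
feature_trace = np.exp(-x**2 / 0.05) + rng.normal(0, 0.2, x.size)
background_trace = rng.normal(0, 0.2, x.size)

f1 = poly_features(feature_trace)
f2 = poly_features(background_trace)

# The fit suppresses noise: the feature trace keeps large coefficients,
# while the background's coefficients collapse toward zero.
print(np.linalg.norm(f1) > np.linalg.norm(f2))
```

In practice the coefficient vectors `f1`/`f2` would be fed to a neural network rather than compared by norm; the norm comparison just shows why the coefficients carry the discriminating information.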
Other methods have been developed to determine the best locations to place sources and receivers so that relevant data can be captured. In seismic research, convolutional neural networks (CNNs) have been applied to determine better sensor positioning and thereby improve data quality, achieving precision and recall rates above 0.99. Using a series of filter layers, the data quality of a candidate placement's signals can be assessed against that of manually placed instruments, and candidate placements can be compared with one another to see whether overall signal capture improves. Thus, rather than focusing mainly on signal processing, this method also addresses signal placement and capture, comparing alternative placements to optimize data capture locations. Another problem in geophysical data is inversion, where data points are interpreted as the opposite of what they are because a reflective signal may hide the true nature of the data. CNN-based techniques have also been developed in which the pattern of data signals around a suspected inversion is filtered and assessed using activation functions. Multiple layers that transform and reduce the data to specific signals help identify where data patterns suggest an inversion is likely, while Bayesian learning techniques check whether this follows patterns seen in other data.
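The filter-layer scoring described above can be illustrated with the basic CNN building block: convolution, a ReLU activation, and max-pooling. Everything here is a simplified stand-in, not the published architecture — the "learned" kernel is replaced by a matched filter, and the two candidate placements are synthetic traces.

```python
import numpy as np

def conv1d_relu(signal, kernel):
    """Valid-mode 1-D convolution followed by a ReLU activation:
    the basic building block of the filter layers described above."""
    out = np.convolve(signal, kernel, mode="valid")
    return np.maximum(out, 0.0)  # ReLU

def placement_score(signal, kernel):
    """Score a candidate sensor placement by the strongest filter
    response in its recorded trace (a max-pool over the feature map)."""
    return conv1d_relu(signal, kernel).max()

rng = np.random.default_rng(1)
n = 200
wavelet = np.array([0.2, 0.6, 1.0, 0.6, 0.2])  # assumed arrival shape

# Two hypothetical placements: one records a clear arrival, one mostly noise.
good = rng.normal(0, 0.1, n)
good[90:95] += wavelet
poor = rng.normal(0, 0.1, n)

# A matched filter stands in for a kernel a CNN would learn from data.
kernel = wavelet[::-1]

print(placement_score(good, kernel) > placement_score(poor, kernel))
```

Comparing the two scores is the placement-comparison step: the location whose trace produces the stronger filter response is the better candidate for data capture.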
Radiant Earth has launched Radiant MLHub, a cloud-based open library of geospatial training data for machine learning algorithms. In launching the repository, Radiant Earth noted that while satellite imagery is abundant, training data and tools for training machine learning algorithms are scarce. Radiant MLHub is a federated site for discovering and accessing high-quality Earth observation (EO) training datasets and machine learning models. Individuals and organizations can contribute by sharing their own training data and models with Radiant MLHub. The data and models available on Radiant MLHub are distributed under a Creative Commons license (CC BY 4.0).
The site debuted with “crop type” training data for major crops in Kenya, Tanzania, and Uganda supplied by the Radiant Earth Foundation. Future planned datasets include Global Land Cover and Surface Water, as well as additions from the site’s partners. All of the datasets are stored in a SpatioTemporal Asset Catalog (STAC)-compliant catalog. Per Radiant Earth: “Training datasets include pairs of imagery and labels for different types of ML problems including image classification, object detection, and semantic segmentation.”
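To make the imagery-plus-labels pairing concrete, here is what a minimal STAC Item for such a training pair looks like. The field names (`type`, `stac_version`, `assets`, etc.) follow the STAC specification, but the ID, geometry, dates, and asset URLs are invented for illustration — they are not real MLHub records.

```python
import json

# A minimal STAC Item pairing a source image with its label asset.
# All concrete values below are hypothetical.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "example_crop_type_tile_001",
    "geometry": {"type": "Polygon", "coordinates": [[
        [36.8, -1.3], [36.9, -1.3], [36.9, -1.2],
        [36.8, -1.2], [36.8, -1.3]]]},
    "bbox": [36.8, -1.3, 36.9, -1.2],
    "properties": {"datetime": "2019-06-01T00:00:00Z"},
    "assets": {
        "imagery": {"href": "https://example.com/tile_001.tif",
                    "type": "image/tiff; application=geotiff"},
        "labels": {"href": "https://example.com/tile_001_labels.geojson",
                   "type": "application/geo+json"},
    },
    "links": [],
}

# An ML pipeline would read the paired assets from the catalog like this:
print(sorted(item["assets"].keys()))
```

Because every dataset in the catalog exposes this same structure, one loader can iterate over any collection and fetch each image together with its labels.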
Users interested in accessing the site’s data and models can get started by downloading this how-to guide.
IMHO, switching between two software packages for the same operation gives different results because each uses its own parser and environmental parameters, but those can be tweaked. If you can set them as close as possible, the results should be similar.
BTW, why would you need separate software packages to process the same image in the first place?
Hi all, hopefully this group can give me an answer to my question.
I use EnMAP-Box 3, which is a plugin for QGIS. I use it for image classification; the result is an image classified into 7 classes in BSQ format. The algorithms I use for classification are Random Forest, Gaussian, and LinearSVC.
EnMAP uses my training ROI to run the Random Forest, Gaussian, and LinearSVC classifiers. But in its accuracy assessment, EnMAP (Random Forest, Gaussian, and LinearSVC) did not use my testing ROI to validate the classified image.
Because the classified image is in BSQ format, I can transfer it to the ENVI software (I hope you know it). There I test it with my testing ROI using ENVI's Confusion Matrix Using Ground Truth ROIs tool, and I get statistics for the overall accuracy and kappa value of the classified images.
I also use ENVI algorithms such as SVM and SAM for image classification. After getting the result, which is an image classified into 7 classes (building, grass, path, etc.), I use the ENVI Confusion Matrix Using Ground Truth ROIs tool and get statistics for the overall accuracy and kappa value of the classified images.
From the statistics created with Confusion Matrix Using Ground Truth ROIs, I compare the performance of the classified images from SVM, SAM, Random Forest, Gaussian, and LinearSVC by comparing their overall accuracy and kappa values. All classifiers (SVM, SAM, Random Forest, Gaussian, and LinearSVC) used the same image before classification, the same ROIs for training and testing, and the same validation tool: ENVI's Confusion Matrix Using Ground Truth ROIs.
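For reference, both statistics being compared come straight from the confusion matrix, so they don't depend on which environment produced the classified image. Here is a small sketch of the computation with a made-up 3-class matrix (the counts are invented for illustration):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = ground-truth ROI classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    observed = np.trace(cm) / total  # overall accuracy
    # Chance agreement from the row/column marginals.
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# Hypothetical 3-class confusion matrix, just for illustration.
cm = [[50,  2,  3],
      [ 4, 45,  1],
      [ 2,  3, 40]]

oa, kappa = accuracy_and_kappa(cm)
print(round(oa, 3), round(kappa, 3))  # → 0.9 0.849
```

Since the formulas only use the counts in the matrix, the same testing ROI and the same tool give comparable numbers regardless of which software did the classifying.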
After my presentation, a friend said it is not correct to compare SVM, SAM, Random Forest, Gaussian, and LinearSVC. This is because SVM and SAM classified the image in the ENVI environment, while Random Forest, Gaussian, and LinearSVC classified it in EnMAP within QGIS, even though the same validation tool (ENVI's Confusion Matrix Using Ground Truth ROIs) was used.
Is it true?