
About Lurker

  • Rank
    Associate Professor
  • Birthday 02/13/1983

Profile Information

  • Gender
  • Location
  • Interests
    GIS and Remote Sensing


  1. While many advances have been made over the last decade in the automated classification of above-surface features using remote sensing data, progress in detecting underground features has lagged. Technologies for detecting such features, including ground penetrating radar, electrical resistivity, and magnetometry, exist, but feature extraction and identification still depend largely on the experience of the instrument user. One problem has been creating approaches that can deal with complex signals. Ground penetrating radar (GPR), for instance, often produces ambiguous signals with considerable noise interference relative to the feature one wants to identify. One approach is to fit approximation polynomials to the recorded signals and feed the derived coefficients into a neural network model. This technique can help reduce noise and separate signals that follow clear patterns from the larger background signal; differentiating signals based on a minimized set of coefficients is one way to simplify and better distinguish data signals.[1] Another approach uses a multilayer perceptron with a nonlinear activation function that transforms the data. This is effectively a similar technique but applies different transform functions than other neural network models. Applications include distinguishing the thickness of underground structures from the surrounding sediments and soil.[2] Other methods have been developed to determine the best locations for sources and receivers so that relevant data can be captured. In seismic research, convolutional neural networks (CNNs) have been applied to determine better sensor positioning, achieving precision and recall rates above 0.99. Using a series of filtered layers, candidate placements can be assessed for signal quality against manually placed instruments, and compared with other locations to see whether the overall signal capture improves. Thus, rather than focusing mainly on signal processing, this method also optimizes the locations where data are captured.[3] Another problem in geophysical data is inversion, where data points are interpreted as the opposite of what they are because a reflective signal may hide the nature of the true data. CNN-based techniques have been developed whereby the pattern of data signals around a given inversion can be filtered and assessed using activation functions. Multiple layers that transform and reduce the data to specific signals help identify where patterns suggest an inversion is likely, while Bayesian learning techniques check whether this follows patterns seen in other data.[4] source: https://www.gislounge.com/automated-remote-sensing-of-underground-features/
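A minimal numpy sketch of the coefficient-based idea described above: a low-order polynomial is fitted to each 1-D trace, and the coefficients serve as compact, noise-suppressing features. The synthetic signal shapes are invented for illustration, and a nearest-centroid classifier stands in for the neural network the papers use.

```python
import numpy as np

def poly_features(trace, degree=4):
    """Fit a low-order polynomial to a 1-D signal and return its coefficients."""
    x = np.linspace(-1.0, 1.0, len(trace))
    return np.polyfit(x, trace, degree)

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 256)

# Two synthetic "classes": a hyperbola-like dip (buried object) versus a
# gently sloping background reflection, both with additive noise.
def make_trace(kind):
    base = -(x ** 2) * 3.0 if kind == 0 else 0.3 * x
    return base + 0.1 * rng.standard_normal(x.size)

train = [(make_trace(k), k) for k in (0, 1) for _ in range(20)]
feats = np.array([poly_features(t) for t, _ in train])
labels = np.array([k for _, k in train])

# Class centroids in coefficient space (a trained NN would replace this).
centroids = np.array([feats[labels == k].mean(axis=0) for k in (0, 1)])

def classify(trace):
    f = poly_features(trace)
    return int(np.argmin(np.linalg.norm(centroids - f, axis=1)))

print(classify(make_trace(0)), classify(make_trace(1)))  # → 0 1
```

The polynomial fit discards high-frequency noise while keeping the overall trace shape, which is why the coefficients separate the two classes cleanly even at low signal-to-noise ratios.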
  2. Radiant Earth has launched Radiant MLHub, a cloud-based open library of geospatial training data for machine learning algorithms. In launching the repository, Radiant Earth noted that while there is an abundance of satellite imagery, there is a lack of training data and tools for training machine learning algorithms. Radiant MLHub is a federated site for the discovery of, and access to, high-quality Earth observation (EO) training datasets and machine learning models. Individuals and organizations can contribute by sharing their own training data and models with Radiant MLHub. The data and models available on Radiant MLHub are distributed under a Creative Commons license (CC BY 4.0). The site debuted with "crop type" training data for major crops in Kenya, Tanzania, and Uganda supplied by the Radiant Earth Foundation. Future planned datasets include Global Land Cover and Surface Water as well as additions from the site's partners. All of the datasets are stored in a SpatioTemporal Asset Catalog (STAC) compliant catalog. Per Radiant Earth: "Training datasets include pairs of imagery and labels for different types of ML problems including image classification, object detection, and semantic segmentation." Users interested in accessing the site's data and models can get started by downloading this how-to-guide. link:
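A STAC-compliant catalog is plain JSON linked together by `rel` attributes, so it can be traversed without special tooling. A minimal sketch, using a made-up catalog document in place of what Radiant MLHub would actually serve:

```python
# A minimal, invented STAC catalog document, standing in for the JSON a
# STAC server such as Radiant MLHub would return.
catalog = {
    "type": "Catalog",
    "stac_version": "1.0.0",
    "id": "example-training-data",
    "description": "Example EO training datasets",
    "links": [
        {"rel": "child", "href": "kenya_crop_type/collection.json"},
        {"rel": "child", "href": "tanzania_crop_type/collection.json"},
        {"rel": "self", "href": "catalog.json"},
    ],
}

def child_links(cat):
    """Return the hrefs of all child collections in a STAC catalog dict."""
    return [l["href"] for l in cat.get("links", []) if l.get("rel") == "child"]

print(child_links(catalog))
```

In a real client, each child href would be fetched and parsed the same way, recursing until the item level, where the imagery/label asset pairs live.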
  3. Interesting article: North-South displacement field, 1999 Hector Mine earthquake, California. As a complement to seismological records, knowledge of the ruptured fault geometry and of co-seismic ground displacements is key to investigating the mechanics of seismic rupture. This information can be retrieved from sub-pixel correlation of optical images. We are investigating the use of SPOT (Satellite pour l'Observation de la Terre) satellite images. The technique developed here is attractive due to the operational status of a number of optical imaging programs and the availability of archived data. However, uncertainties in the imaging system itself and in its attitude dramatically limit its potential. We overcome these limitations by applying an iterative corrective process that allows for precise image registration and takes advantage of the availability of accurate Digital Elevation Models with global coverage (SRTM). This technique is thus a valuable complement to SAR interferometry, which provides accurate measurements kilometers away from the fault but generally fails in the near-fault zone where the fringes become noisy and saturated. A comparison between the two methods is briefly discussed, with application to the 1992 Landers earthquake in California (Mw 7.3). Applications of this newly developed technique are presented: the horizontal co-seismic displacement fields induced by the 1999 Hector Mine earthquake in California (Mw 7.1) and by the 1999 Chi-Chi earthquake in Taiwan (Mw 7.5) have recently been retrieved using archive images. The data obtained can be downloaded (see further down).
Sub-pixel correlation of optical images: the following is the flow chart of the technique that has been developed. It allows for precise orthorectification and coregistration of the SPOT images. More details about the optimization process are given in the next sections.
Understanding the disparities measured from optical images. Differences in geometry between the two images to be registered:
- Uncertainties in attitude parameters (roll, pitch, yaw)
- Inaccuracy in orbital parameters (position, velocity)
- Incidence angle differences plus topography uncertainties (parallax effect)
- Optical and electronic biases (optical aberrations, CCD misalignment, focal length, sampling period, etc.)
» These may account for disparities of up to 800 m on SPOT 1/2/3/4 images, and 50 m for SPOT 5 (see [3]).
Ground deformations:
- Earthquakes, landslides, etc.
» Typically at sub-pixel scale, ranging from 0 to 10 meters.
Temporal decorrelation:
- Changes in vegetation, rivers, urban areas, etc.
» Correlation is lost: this adds noise to the measurement, up to 1 m.
» Ground deformations are largely dominated by the geometrical artifacts.
Precise registration: geometrical corrections. SPOT satellites are pushbroom imaging systems ([1], [2]): all optical parts remain fixed during acquisition, and scanning is accomplished by the forward motion of the spacecraft. Each line in the image is therefore acquired at a different time and is subject to the varying motion of the platform. The orthorectification process consists in modeling and correcting these variations to produce images free of cartographic distortion. It is then possible to accurately register images and look for their disparities using correlation techniques. Attitude variations (roll, pitch, and yaw) during the scanning process have to be integrated into the image model (see [1], [2]). Errors in correcting the satellite look directions will result in projecting the image pixels at the wrong location on the ground: important parallax artifacts will be seen when measuring displacement between two images. Exact pixel projection on the ground is achieved through an optimization algorithm that iteratively corrects the look directions by selecting ground control points.
An accurate topography model has to be used. Which parameters to optimize:
- Initial attitude values of the platform (roll, pitch, yaw)
- Constant drift of the attitude values along the image acquisition
- Focal length (a different value depending on the instrument, HRG1 or HRG2)
- Position and velocity
How to optimize: an iterative algorithm using a set of GCPs (Ground Control Points). GCPs are generated automatically with sub-pixel accuracy: they result from a correlation between an orthorectified reference frame and the rectified image whose parameters are to be optimized. A two-stage procedure:
- One of the images is optimized with respect to the shaded DEM (GCPs are generated from the correlation with the shaded DEM). The DEM is then considered the ground truth; no GPS points are needed.
- The other image is then optimized using another set of GCPs resulting from the correlation with the first image (co-registration).
Measuring co-seismic deformation with InSAR, a comparison: a fringe represents a near-vertical displacement of 2.8 cm. SAR interferogram (ERS): near-vertical component of the ground displacement induced by the 1992 Landers earthquake [Massonnet et al., 1993]. There are no organized fringes in a band within 5-10 km of the fault trace: the displacement is sufficiently large that the change in range across a radar pixel exceeds one fringe per pixel, and coherence is lost. http://earth.esa.int/applications/data_util/ndis/equake/land2.htm
» SAR interferometry is not a suitable technique for measuring near-fault displacements.
The 1992 Landers earthquake revisited: profiles in offsets and elastic modeling show good agreement. From [6]: Measuring earthquakes from optical satellite images, Van Puymbroeck, Michel, Binet, Avouac, Taboury, Applied Optics Vol. 39, No. 20, 10 July 2000. For other applications of the technique, see [4], [5].
» Fault ruptures can be imaged with this technique. Applying the precise rectification algorithm plus sub-pixel correlation: the 1999 Hector Mine earthquake (Mw 7.1, California). Obtaining the data (available in ENVI file format; load bands as grayscale images; the bands are N/S offsets, E/W offsets, SNR): raw and filtered results in HectorMine.zip. Pre-earthquake image: SPOT 4, acquisition date 08-17-1998, ground resolution 10 m. Post-earthquake image: SPOT 2, acquisition date 08-18-2000, ground resolution 10 m. The offsets measured from correlation correspond to sub-pixel offsets in the raw images. Correlation windows: 32 x 32 pixels, with 96 m between two measurements. So far we have:
- A precise mapping of the rupture zone: the offset fields have a resolution of 96 m
- Measurements with sub-pixel accuracy (displacements of at most 10 meters)
- Improved global georeferencing of the images with no GPS measurements
- Improved processing time, since the GCP selection is automatic
- Suppressed the main attitude artifacts
The profiles do not show any long-wavelength deformations (see Dominguez et al. 2003). We notice:
- Linear artifacts in the along-track direction due to CCD misalignments (schematic of a DIVOLI showing four CCD linear arrays)
- Some topographic artifacts: the image resolution is higher than that of the DEM
- Several decorrelations due to rivers and clouds
- High-frequency noise due to the noise sensitivity of the Fourier correlator (see Van Puymbroeck et al.)
Conclusion: the sub-pixel correlation technique has been improved to overcome most of its limitations:
» Precise rectification and co-registration of the images
» No more topographic effects (depending on the DEM resolution)
» No need for GPS points; the algorithm is independent and automatic
» Better spatial resolution (see Van Puymbroeck et al.)
To be improved:
» Stripes due to CCD misalignment
» High-frequency noise from the correlator
» Processing of images with corrupted telemetry
» The sub-pixel correlation technique appears to be a valuable complement to SAR interferometry for ground deformation measurements.
References:
[1] SPOT 5 geometry handbook: ftp://ftp.spot.com/outgoing/SPOT_docs/geometry_handbook/S-NT-73-12-SI.pdf
[2] SPOT User's Handbook Volume 1, Reference Manual: ftp://ftp.spot.com/outgoing/SPOT_docs/SPOT_User's Handbook/SUHV1RM.PDF
[3] SPOT 5 Technical Summary: ftp://ftp.spot.com/outgoing/SPOT_docs/technical/spot5_tech_slides.ppt
[4] Dominguez, S., J.P. Avouac, and R. Michel, Horizontal co-seismic deformation of the 1999 Chi-Chi earthquake measured from SPOT satellite images: implications for the seismic cycle along the western foothills of Central Taiwan, J. Geophys. Res., 107, 10.1029/2001JB00482, 2003.
[5] Michel, R. and J.P. Avouac, Deformation due to the 17 August Izmit earthquake measured from SPOT images, J. Geophys. Res., 107, 10.1029/2000JB000102, 2002.
[6] Van Puymbroeck, N., R. Michel, R. Binet, J.P. Avouac, and J. Taboury, Measuring earthquakes from optical satellite images, Applied Optics Information Processing, 39, 23, 3486-3494, 2000.
Publications: Leprince, S., S. Barbot, F. Ayoub, and J.P. Avouac, Automatic, Precise, Ortho-rectification and Co-registration for Satellite Image Correlation, Application to Seismotectonics. To be submitted.
Conferences:
- F. Levy, Y. Hsu, M. Simons, S. Leprince, J. Avouac. Distribution of coseismic slip for the 1999 Chi-Chi Taiwan earthquake: New data and implications of varying 3D fault geometry. AGU 2005 Fall Meeting, San Francisco.
- M. Taylor, S. Leprince, J. Avouac. A Study of the 2002 Denali Co-seismic Displacement Using SPOT Horizontal Offsets, Field Measurements, and Aerial Photographs. AGU 2005 Fall Meeting, San Francisco.
- Y. Kuo, F. Ayoub, J. Avouac, S. Leprince, Y. Chen, J.H. Shyu, Y. Kuo. Co-seismic Horizontal Ground Slips of the 1999 Chi-Chi Earthquake (Mw 7.6) Deduced From Image Comparison of Satellite SPOT and Aerial Photos. AGU 2005 Fall Meeting, San Francisco.
source: http://www.tectonics.caltech.edu/geq/spot_coseis/
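The "Fourier correlator" at the heart of the technique can be sketched in a few lines of numpy. This is a generic integer-pixel phase correlation on synthetic data, not the authors' code; the real pipeline refines the peak location to sub-pixel precision and works on orthorectified SPOT image windows.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the translation of image b relative to a from the
    cross-power spectrum. Returns an integer (dy, dx); a real pipeline
    would interpolate around the correlation peak for sub-pixel accuracy.
    """
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12            # keep only the phase difference
    corr = np.fft.ifft2(F).real       # a delta peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks in the far half of each axis correspond to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, shift=(3, -5), axis=(0, 1))  # known displacement
print(phase_correlation_shift(img, shifted))  # → (3, -5)
```

Normalizing the cross-power spectrum to unit magnitude is what makes the correlation peak sharp and relatively insensitive to radiometric differences between the two acquisitions, which matters when the images are two years apart as in the Hector Mine case.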
  4. Lurker


    Please elaborate: do you plan on using remote sensing data to make an animation? Please add the details. A simple example of these functions can be seen here: http://animove.org/wp-content/uploads/2019/04/Daniel_Palacios_animate_moveVis.html
  5. Google says it has built a computer that is capable of solving problems that classical computers practically cannot. According to a report published in the scientific journal Nature, Google's processor, Sycamore, performed a true random-number-generation task in 200 seconds. That same task would take about 10,000 years for a state-of-the-art supercomputer to execute. The achievement marks a major breakthrough in the technology world's decades-long quest to use quantum mechanics to solve computational problems. Google CEO Sundar Pichai wrote that the company started exploring the possibility of quantum computing in 2006. In classical computers, bits can store information as either a 0 or a 1 in binary notation. Quantum computers use quantum bits, or qubits, which can be both 0 and 1. According to Google, the Sycamore processor uses 53 qubits, which allows for a drastic increase in speed compared with classical computers. The report acknowledges that the processor's practical applications are limited. Google says Sycamore can generate truly random numbers without utilizing the pseudo-random formulas that classical computers use. Pichai called the success of Sycamore the "hello world" moment of quantum computing. "With this breakthrough we're now one step closer to applying quantum computing to—for example—design more efficient batteries, create fertilizer using less energy, and figure out what molecules might make effective medicines," Pichai wrote. IBM has pushed back, saying Google hasn't achieved supremacy because "ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity." On its blog, IBM further discusses its objections to the term "quantum supremacy." The authors write that the term is widely misinterpreted. "First because, as we argue above, by its strictest definition the goal has not been met," IBM's blog says.
"But more fundamentally, because quantum computers will never reign 'supreme' over classical computers, but will rather work in concert with them, since each have their unique strengths." News of Google's breakthrough has raised concerns among some people, such as presidential hopeful Andrew Yang, who believe quantum computing will render password encryption useless. Theoretical computer science professor Scott Aaronson refuted these claims on his blog, writing that the technology needed to break cryptosystems does not exist yet. The concept of quantum computers holding an advantage over classical computers dates back to the early 1980s. In 2012, John Preskill, a professor of theoretical physics at Caltech, coined the term "quantum supremacy." source: https://www.npr.org/2019/10/23/772710977/google-claims-to-achieve-quantum-supremacy-ibm-pushes-back
  6. No need to download it, in my opinion. Better to process the rover data directly on BIG's website, here: http://inacors.big.go.id/SBC/spider-business-center Just register, then upload your rover data and process it right there; you get the results immediately.
  7. Just found this interesting article on the Agisoft forum: source: https://www.agisoft.com/forum/index.php?topic=7851.0
  8. I have a bunch of Yahoo Groups, some related to GIS, Remote Sensing, astronomy, etc. Yahoo announced that all Yahoo Groups will be shut down on Monday, October 21st, and all Groups content will be removed on December 14th. After October 21st, users will no longer be able to upload new content to groups, but existing content will still remain on the network. On December 14th, the following types of content will be removed from Yahoo Groups: Files, Polls, Links, Photos, Folders, Calendar, Database, Attachments, Conversations, Email Updates, Message Digest, and Message History. Going forward, Yahoo Groups will become harder to join, as any currently public group will be restricted or private. They can still be found in a search, but users will have to submit a request to join them. To save content from a group before it is removed, simply sign in to your Yahoo account and download the files directly from your group's page. First you will have to request the data, then Yahoo will send an email when it is ready for download. source: https://www.searchenginejournal.com/yahoo-to-shut-down-all-yahoo-groups-on-october-21st/330695/#close
  9. The first thing to do before mapping is to set up the camera parameters. Before setting them, it is recommended to reset all parameters on the camera first. To set camera parameters manually, the camera needs to be in manual mode.
Image quality: Extra fine.
Shutter speed: to remove blur from photos, the shutter speed should be set to a higher value; 1200–1600 is recommended. A higher shutter speed reduces image quality, so if there is blur in the image, increase the shutter speed.
ISO: the lower the ISO, the higher the image quality; an ISO between 160–300 is recommended. If there is no blur but image quality is low, reduce the ISO.
Focus: it is recommended to set the focus manually on the ground before a flight. Point the camera at a distant object and slightly increase the focus; you will see on the camera screen that image sharpness changes with the value. Set the image sharpness at its highest (slide the slider close to the infinity point on the screen and watch how the sharpness changes).
White balance: recommended to set to auto.
On a surveying mission, sidelap, overlap, and buffer have to be set higher to get a better-quality surveying result. First set the RESOLUTION you would like for your surveying project; changing the resolution changes the flight altitude and also affects the coverage of a single flight.
Overlap: 70%. This increases the number of photos taken along each flight line, so the camera should be capable of capturing quickly.
Sidelap: 70% recommended. Flying with higher sidelap between flight lines is a way to get more matches in the imagery, but it also reduces the coverage of a single flight.
Buffer: 12%. The buffer enlarges the flight plan to get more images at the borders, which improves the quality of the map. source: https://dronee.aero/blogs/dronee-pilot-blog/few-things-to-set-correctly-to-get-high-quality-surveying-results
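The resolution/altitude/sidelap relationship mentioned above can be made concrete with the standard photogrammetry formula GSD = sensor_width × altitude / (focal_length × image_width). The camera numbers below (focal length, sensor width, pixel count) are illustrative assumptions, not values from the post.

```python
def flight_altitude_m(gsd_cm, focal_mm, sensor_width_mm, image_width_px):
    """Altitude (m) needed to achieve a given ground sampling distance (cm/px)."""
    gsd_m = gsd_cm / 100.0
    return gsd_m * (focal_mm / sensor_width_mm) * image_width_px

def line_spacing_m(gsd_cm, image_width_px, sidelap):
    """Distance between adjacent flight lines for a given sidelap fraction."""
    footprint = (gsd_cm / 100.0) * image_width_px  # ground width of one photo
    return footprint * (1.0 - sidelap)

# Illustrative 1-inch-sensor camera: 8.8 mm focal length, 13.2 mm sensor
# width, 5472 px across (assumed values, not from the post).
alt = flight_altitude_m(gsd_cm=3.0, focal_mm=8.8, sensor_width_mm=13.2,
                        image_width_px=5472)
spacing = line_spacing_m(gsd_cm=3.0, image_width_px=5472, sidelap=0.70)
print(round(alt, 1), round(spacing, 1))  # → 109.4 49.2
```

This shows why asking for a finer resolution lowers the flight altitude and, together with a 70% sidelap, shrinks the area covered per flight: both the photo footprint and the line spacing scale linearly with the GSD.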
  10. I just saw this on the news. Nice design.
  11. Nice one, bro, hehehe.
