
Leaderboard


Popular Content

Showing content with the highest reputation since 06/02/2019 in all areas

  1. 3 points
    Our objective is to provide the scientific and civil communities with a state-of-the-art global digital elevation model (DEM) derived from a combination of Shuttle Radar Topography Mission (SRTM) processing improvements, elevation control, void-filling and merging with data unavailable at the time of the original SRTM production:

- NASA SRTM DEMs created with processing improvements at full resolution
- NASA's Ice, Cloud, and land Elevation Satellite (ICESat)/Geoscience Laser Altimeter System (GLAS) surface elevation measurements
- DEM cells derived from stereo optical methods using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data from the Terra satellite
- Global DEM (GDEM) ASTER products developed for NASA and the Ministry of Economy, Trade and Industry of Japan by Sensor Information Laboratory Corp
- National Elevation Data for the US and Mexico produced by the USGS
- Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) developed by the USGS and the National Geospatial-Intelligence Agency (NGA)
- Canadian Digital Elevation Data produced by Natural Resources Canada

We propose a significant modernization of the publicly and freely available DEM data. Accurate surface elevation information is a critical component in scientific research and commercial and military applications. The current SRTM DEM product is the most intensely downloaded dataset in NASA history. However, the original Memorandum of Understanding (MOU) between NASA and NGA had a number of restrictions and limitations; the original full-resolution, one-arcsecond data are currently only available over the US, and the error, backscatter and coherence layers were not released to the public. With the recent expiration of the MOU, we propose to reprocess the original SRTM raw radar data using improved algorithms and incorporating ancillary data that were unavailable during the original SRTM processing, and to produce and publicly release a void-free global one-arcsecond (~30 m) DEM and error map, with the spacing supported by the full-resolution SRTM data.

We will reprocess the entire SRTM dataset from raw sensor measurements with validated improvements to the original processing algorithms. We will incorporate GLAS data to remove artifacts at the optimal step in the SRTM processing chain. We will merge the improved SRTM strip DEMs, refined ASTER and GDEM V2 DEMs, and GLAS data using the SRTM mosaic software to create a seamless, void-filled NASADEM. In addition, we will provide several new data layers not publicly available from the original SRTM processing: interferometric coherence, radar backscatter, radar incidence angle to enable radiometric correction, and a radar backscatter image mosaic to be used as a layer for global classification of land cover and land use.

This work leverages an FY12 $1M investment from NASA to make several improvements to the original algorithms. We validated our results against the original SRTM products and ancillary elevation information at a few study sites. Our approach will merge the reprocessed SRTM data with the DEM void-filling strategy developed during NASA's Making Earth System Data Records for Use in Research Environments (MEaSUREs) 2006 project, "The Definitive Merged Global Digital Topographic Data Set", of Co-Investigator Kobrick. NASADEM is a significant improvement over the available three-arcsecond SRTM DEM, primarily because it will provide a global DEM and associated products at one-arcsecond spacing.
ASTER GDEM is available at one-arcsecond spacing but has true spatial resolution generally inferior to SRTM one-arcsecond data, and it has much greater noise problems that are particularly severe in tropical (cloudy) areas. At one arcsecond, NASADEM will be superior to GDEM across almost all SRTM coverage areas, but will integrate GDEM and other data to extend the coverage. Meanwhile, DEMs from the Deutsches Zentrum für Luft- und Raumfahrt TanDEM-X mission are being developed as part of a public-private partnership. However, these data must be purchased and are not redistributable. NASADEM will be the finest-resolution, global, freely available DEM product for the foreseeable future.

data page: https://lpdaac.usgs.gov/products/nasadem_hgtv001/
news links: https://earthdata.nasa.gov/esds/competitive-programs/measures/nasadem
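As a quick, unofficial illustration of what the delivered one-arcsecond tiles look like on disk, here is a minimal NumPy sketch for reading an SRTM-style .hgt tile such as those in the NASADEM_HGT product; the file name is a hypothetical example, and the 3601 x 3601 big-endian int16 layout is the standard one-arcsecond HGT convention:

import numpy as np

# One-arcsecond, 1-degree HGT tile: 3601 x 3601 samples of big-endian
# 16-bit integers (elevation in meters). File name is hypothetical.
tile = np.fromfile("n27e086.hgt", dtype=">i2").reshape((3601, 3601))

# Tiles are named after their south-west corner; row 0 is the northern edge.
print("min elevation (m):", tile.min())
print("max elevation (m):", tile.max())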
  2. 3 points
    Interesting application of WebGIS plotting a dinosaur database; you can also search for what your location looked like in the deep past on the interactive globe map. Welcome to the internet's largest dinosaur database. Check out a random dinosaur, search for one below, or look at our interactive globe of ancient Earth! Whether you are a kid, student, or teacher, you'll find a rich set of dinosaur names, pictures, and facts here. This site is built with PaleoDB, a scientific database assembled by hundreds of paleontologists over the past two decades.

Check out this interactive WebGIS app: https://dinosaurpictures.org/ancient-earth#170
official link: https://dinosaurpictures.org/
  3. 3 points
    link: https://press.anu.edu.au/publications/new-releases
  4. 3 points
    Interesting video on how-tos: WebOpenDroneMap is a friendly Graphical User Interface (GUI) for OpenDroneMap. It enhances the capabilities of OpenDroneMap by providing an easy tool for processing drone imagery, with buttons, process status bars, and a new way to store images. WebODM allows users to work by projects, so the user can create different projects and process the related images. As a whole, WebODM on Windows is an implementation of PostgreSQL, Node, Django, OpenDroneMap and Docker. The software installation requires 6 GB of disk space plus Docker. It seems huge, but it is the only way to process drone imagery on Windows using just open source software. We definitely see huge potential in WebODM for image processing, so we have made this tutorial for the installation and we will post more tutorials on applying WebODM to drone images.

For this tutorial you need Docker Toolbox installed on your computer. You can follow this tutorial to get Docker on your PC: https://www.hatarilabs.com/ih-en/tutorial-installing-docker
You can visit the WebODM site on GitHub: https://github.com/OpenDroneMap/WebODM

Videos

The tutorial was split into three short videos.
Part 1 https://www.youtube.com/watch?v=AsMSoWAToxE
Part 2 https://www.youtube.com/watch?v=8GKx3fz0qgE
Part 3 https://www.youtube.com/watch?v=eCZFzaXyMmA
  5. 3 points
    7th International Conference on Computer Science and Information Technology (CoSIT 2020), January 25 ~ 26, 2020, Zurich, Switzerland
https://cosit2020.org/

Scope & Topics

The 7th International Conference on Computer Science and Information Technology (CoSIT 2020) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Computer Science, Engineering and Information Technology. The conference looks for significant contributions to all major fields of Computer Science and Information Technology in theoretical and practical aspects. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field. Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe:

· Geographical Information Systems/ Global Navigation Satellite Systems (GIS/GNSS)

Paper Submission

Authors are invited to submit papers through the conference submission system. Here's where you can reach us: [email protected] or [email protected]
  6. 3 points
    The first thing to do before mapping is to set up the camera parameters. Before setting the camera parameters, it is recommended to reset all parameters on the camera first. To set the camera parameters manually, the camera needs to be in manual mode.

Image quality: Extra fine.

Shutter speed: to remove blur from photos, the shutter speed should be set to a higher value; 1200–1600 is recommended. A higher shutter speed reduces image quality, so if there is blur in the image, increase the shutter speed.

ISO: the lower the ISO, the higher the image quality. An ISO between 160–300 is recommended. If there is no blur but image quality is low, reduce the ISO.

Focus: it is recommended to set the focus manually on the ground before a flight. Point the camera at a distant object and slightly increase the focus; you will see on the camera screen that the image sharpness changes as you change the value. Set the focus where the image is sharpest (slide the slider close to the infinity point on the screen and you will see how the image sharpness changes as you slide).

White balance: recommended to set to auto.

On a surveying mission, the sidelap, overlap and buffer have to be set higher to get a better quality surveying result. First set the RESOLUTION you would like to get for your surveying project. Changing the resolution changes the flight altitude and also affects the coverage of a single flight.

Overlap: 70%. This will increase the number of photos taken along each flight line, so the camera should be capable of capturing fast enough.

Sidelap: 70% recommended. Flying with higher sidelap between each line of the flight is a way to get more matches in the imagery, but it also reduces the coverage of a single flight.

Buffer: 12%. The buffer extends the flight plan to get more images at the borders, which will improve the quality of the map.

source: https://dronee.aero/blogs/dronee-pilot-blog/few-things-to-set-correctly-to-get-high-quality-surveying-results
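As a rough illustration of what the overlap and sidelap settings above imply for flight geometry, here is a small Python sketch using the standard spacing formulas; the footprint dimensions are assumed example values, not figures from the post:

# How overlap/sidelap translate into camera trigger spacing and flight
# line spacing. Footprint dimensions below are assumptions for the example.
footprint_along_m = 90.0   # ground footprint of one image along-track
footprint_across_m = 60.0  # ground footprint of one image across-track
overlap = 0.70             # forward overlap recommended above
sidelap = 0.70             # sidelap recommended above

trigger_spacing_m = footprint_along_m * (1 - overlap)   # distance between photos
line_spacing_m = footprint_across_m * (1 - sidelap)     # distance between flight lines

print("trigger a photo every {:.0f} m along each line".format(trigger_spacing_m))
print("space flight lines {:.0f} m apart".format(line_spacing_m))

Raising overlap or sidelap shrinks both spacings, which is why more photos are taken and the area covered in a single flight goes down.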
  7. 3 points
    The GeoforGood Summit 2019 drew its curtains close on 19 Sep 2019, and as a first-time attendee I was amazed to see the number of new developments announced at the summit. The summit, being the first of its kind, combined the user summit and the developers summit into one to let users benefit from the knowledge of new tools and developers understand the needs of the user. Since my primary focus was on large-scale geospatial modeling, I attended the workshops and breakout sessions related to Google Earth Engine only. With that, let's look at 3 new exciting developments to hit Earth Engine.

Updated documentation on machine learning

Documentation, really? Yes! As an amateur Earth Engine user myself, my number one complaint about the tool has been its abysmal quality of documentation, spread between its app developers site, Google Earth's blog, and their Stack Exchange answers. So any update to the documentation is welcome. I am glad that the documentation has been updated to help the ever-exploding user base of geospatial data scientists interested in implementing machine learning and deep learning models. The documentation comes with its own example Colab notebooks. The example notebooks include supervised classification, unsupervised classification, dense neural networks, convolutional neural networks, and deep learning on Google Cloud. I found these notebooks incredibly useful for getting started, as there are quite a few non-trivial data type conversions (int to float32 and so on) in the process flow.

Earth Engine and AI Platform Integration

Nick Clinton and Chris Brown jointly announced the much-overdue Earth Engine + Google AI Platform integration. Until now, users were essentially limited to running small jobs on Google Colab's virtual machine (VM) and hoping that the connection with the VM didn't time out (it usually lasts for about 4 hours). Other limitations included the lack of any task monitoring or queuing capabilities. Not anymore! The new ee.Model() package lets users communicate with a Google Cloud server that they can spin up based on their own needs. Needless to say, this is a HUGE improvement over the previous primitive deep learning support provided on the VM. Although it was free, one could simply not train, validate, predict, and deploy any model larger than a few layers; that had to be done separately on the Google AI Platform once the .TFRecord objects were created in one's Google bucket. With this cloud integration, that task has been simplified tremendously by letting users run and test their models right from the Colab environment. The ee.Model() class comes with some useful functions, such as ee.Model.fromAIPlatformPredictor(), to make predictions on Earth Engine data directly from your model sitting on Google Cloud. Lastly, since your model now sits on the AI Platform, you can cheat and use your own models trained offline to predict on Earth Engine data and make maps of the output. Note that your model must be saved using the tf.contrib.saved_model format if you wish to do so. The popular Keras function model.save_model('model.h5') is not compatible with ee.Model().

Moving forward, it seems like the team plans to stick to the Colab Python IDE for all deep learning applications. However, it's not a death blow for the beloved JavaScript code editor. At the summit, I saw that participants still preferred the JavaScript code editor for their non-neural machine learning work (like support vector machines, random forests, etc.).
Being a Python lover myself, I too go to the code editor for quick visualizations and for Earth Engine Apps! I did not get to try out the new ee.Model() package at the summit, but Nick Clinton demonstrated a notebook where a simple working example has been hosted to help us learn the function calls. Some kinks still remain in the development, like limiting a convolution kernel to only 144 pixels wide during prediction because of "the way Earth Engine communicates with Cloud Platform", but he assured us that it will be fixed soon. Overall, I am excited about the integration because Earth Engine is now a real alternative for my geospatial computing work. And with the Earth Engine team promising more new functions in the ee.Model() class, I wonder if companies and labs around the world will start migrating their modeling work to Earth Engine.

Cooler Visualizations!

Matt Hancher and Tyler Erickson displayed some new functionality related to visualizations, and I found that it made it vastly simpler to create animated visuals. With the ee.ImageCollection.getVideoThumbURL() function, you can create your own animated gifs within a few seconds! I tried it on a bunch of datasets and the speed of creating the gifs was truly impressive. Say bye to exporting each iteration of a video to your drive, because these gifs appear right in the console using the print() command! Shown above is an example of a global temperature forecast by time from the 'NOAA/GFS0P25' dataset. The code for making the gif can be found here, and a minimal sketch follows at the end of this post. The animation is based on the example shown in the original blog post by Michael DeWitt, and I referred to this gif-making tutorial on the developers page to make it.

I did not get to cover all the new features and functionality introduced at the summit. For that, be on the lookout for event highlights on Google Earth's blog. Meanwhile, you can check out the session resources from the summit for presentations and notebooks on topics that you are interested in.

Presentation and resources

Published in Medium
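For reference, here is a minimal sketch of the gif workflow described above, modeled on the gif-making tutorial on the developers page; the dataset and band are the ones named in the post, while the date range, region and visualization parameters are illustrative assumptions:

import ee
ee.Initialize()

# Temperature forecast images from the dataset shown in the post.
col = (ee.ImageCollection('NOAA/GFS0P25')
       .filterDate('2019-12-01', '2019-12-02')
       .select('temperature_2m_above_ground'))

# Region and visualization values are assumptions for the example.
video_args = {
    'dimensions': 500,
    'region': ee.Geometry.Rectangle([-180, -60, 180, 85]),
    'framesPerSecond': 10,
    'min': -35.0,
    'max': 35.0,
    'palette': ['blue', 'white', 'red'],
}

# Prints a URL to a server-rendered animated gif.
print(col.getVideoThumbURL(video_args))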
  8. 3 points
    found this interesting tutorial : For the last couple years I have been testing out the ever-improving support for parallel query processing in PostgreSQL, particularly in conjunction with the PostGIS spatial extension. Spatial queries tend to be CPU-bound, so applying parallel processing is frequently a big win for us.

Initially, the results were pretty bad. With PostgreSQL 10, it was possible to force some parallel queries by jimmying with global cost parameters, but nothing would execute in parallel out of the box. With PostgreSQL 11, we got support for parallel aggregates, and those tended to parallelize in PostGIS right out of the box. However, parallel scans still required some manual alterations to PostGIS function costs, and parallel joins were basically impossible to force no matter what knobs you turned.

With PostgreSQL 12 and PostGIS 3, all that has changed. All standard query types now readily parallelize using our default costings. That means parallel execution of: parallel sequence scans, parallel aggregates, and parallel joins!!

TL;DR: PostgreSQL 12 and PostGIS 3 have finally cracked the parallel spatial query execution problem, and all major queries execute in parallel without extraordinary interventions.

What Changed

With PostgreSQL 11, most parallelization worked, but only at much higher function costs than we could apply to PostGIS functions. With higher PostGIS function costs, other parts of PostGIS stopped working, so we were stuck in a Catch-22: improve costing and break common queries, or leave things working with non-parallel behaviour. For PostgreSQL 12, the core team (in particular Tom Lane) provided us with a sophisticated new way to add spatial index functionality to our key functions. With that improvement in place, we were able to globally increase our function costs without breaking existing queries. That in turn has signalled the parallel query planning algorithms in PostgreSQL to parallelize spatial queries more aggressively.

Setup

In order to run these tests yourself, you will need:

- PostgreSQL 12
- PostGIS 3.0

You'll also need a multi-core computer to see actual performance changes. I used a 4-core desktop for my tests, so I could expect 4x improvements at best. The setup instructions show where to download the Canadian polling division data used for the testing:

- pd: a table of ~70K polygons
- pts: a table of ~70K points
- pts_10: a table of ~700K points
- pts_100: a table of ~7M points

We will work with the default configuration parameters and just mess with max_parallel_workers_per_gather at run-time to turn parallelism on and off for comparison purposes. When max_parallel_workers_per_gather is set to 0, parallel plans are not an option.

max_parallel_workers_per_gather sets the maximum number of workers that can be started by a single Gather or Gather Merge node. Setting this value to 0 disables parallel query execution. Default 2.

Before running tests, make sure you have a handle on what your parameters are set to: I frequently found I accidentally tested with max_parallel_workers set to 1, which will result in two processes working: the leader process (which does real work when it is not coordinating) and one worker.

show max_worker_processes;
show max_parallel_workers;
show max_parallel_workers_per_gather;

Aggregates

Behaviour for aggregate queries is still good, as seen in PostgreSQL 11 last year.

SET max_parallel_workers = 8;
SET max_parallel_workers_per_gather = 4;

EXPLAIN ANALYZE
  SELECT Sum(ST_Area(geom))
    FROM pd;

Boom!
We get a 3-worker parallel plan and execution about 3x faster than the sequential plan.

Scans

The simplest spatial parallel scan adds a spatial function to the target list or filter clause.

SET max_parallel_workers = 8;
SET max_parallel_workers_per_gather = 4;

EXPLAIN ANALYZE
  SELECT ST_Area(geom)
    FROM pd;

Boom! We get a 3-worker parallel plan and execution about 3x faster than the sequential plan. This query did not work out-of-the-box with PostgreSQL 11.

Gather (cost=1000.00..27361.20 rows=69534 width=8)
  Workers Planned: 3
  -> Parallel Seq Scan on pd (cost=0.00..19407.80 rows=22430 width=8)

Joins

Starting with a simple join of all the polygons to the 100 points-per-polygon table, we get:

SET max_parallel_workers_per_gather = 4;

EXPLAIN
  SELECT *
    FROM pd
    JOIN pts_100 pts
      ON ST_Intersects(pd.geom, pts.geom);

Right out of the box, we get a parallel plan! No amount of begging and pleading would get a parallel plan in PostgreSQL 11.

Gather (cost=1000.28..837378459.28 rows=5322553884 width=2579)
  Workers Planned: 4
  -> Nested Loop (cost=0.28..305122070.88 rows=1330638471 width=2579)
     -> Parallel Seq Scan on pts_100 pts (cost=0.00..75328.50 rows=1738350 width=40)
     -> Index Scan using pd_geom_idx on pd (cost=0.28..175.41 rows=7 width=2539)
        Index Cond: (geom && pts.geom)
        Filter: st_intersects(geom, pts.geom)

The only quirk in this plan is that the nested loop join is being driven by the pts_100 table, which has 10 times the number of records as the pd table. The plan for a query against the pts_10 table also returns a parallel plan, but with pd as the driving table.

EXPLAIN
  SELECT *
    FROM pd
    JOIN pts_10 pts
      ON ST_Intersects(pd.geom, pts.geom);

Right out of the box, we still get a parallel plan! No amount of begging and pleading would get a parallel plan in PostgreSQL 11.

Gather (cost=1000.28..85251180.90 rows=459202963 width=2579)
  Workers Planned: 3
  -> Nested Loop (cost=0.29..39329884.60 rows=148129988 width=2579)
     -> Parallel Seq Scan on pd (cost=0.00..13800.30 rows=22430 width=2539)
     -> Index Scan using pts_10_gix on pts_10 pts (cost=0.29..1752.13 rows=70 width=40)
        Index Cond: (geom && pd.geom)
        Filter: st_intersects(pd.geom, geom)

source: http://blog.cleverelephant.ca/2019/05/parallel-postgis-4.html
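To reproduce the on/off comparison programmatically, here is a minimal Python sketch using psycopg2 against the post's pd table; the connection string is a hypothetical placeholder:

import time
import psycopg2

# Hypothetical DSN; point it at the database loaded with the tutorial data.
conn = psycopg2.connect("dbname=parallel_test user=postgres")

def timed_sum_area(workers):
    # set_config() changes max_parallel_workers_per_gather for this session;
    # 0 disables parallel plans entirely.
    with conn.cursor() as cur:
        cur.execute(
            "SELECT set_config('max_parallel_workers_per_gather', %s, false);",
            (str(workers),))
        start = time.time()
        cur.execute("SELECT Sum(ST_Area(geom)) FROM pd;")
        total = cur.fetchone()[0]
    return total, time.time() - start

for workers in (0, 4):
    total, elapsed = timed_sum_area(workers)
    print("workers={}: sum={:.1f} took {:.2f}s".format(workers, total, elapsed))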
  9. 3 points
    Hello everyone! This is a quick Python code which I wrote to batch download and preprocess Sentinel-1 images of a given time. Sentinel images have very good resolution, which makes it obvious that they are huge in size. Since I didn't want to waste all day preparing them for my research, I decided to write this code, which runs all night and gives a nice image-set the following morning.

import os
import datetime
import gc
import glob

import snappy
from sentinelsat import SentinelAPI, geojson_to_wkt, read_geojson
from snappy import ProductIO


class sentinel1_download_preprocess():
    def __init__(self, input_dir, date_1, date_2, query_style, footprint,
                 lat=24.84, lon=90.43, download=False):
        self.input_dir = input_dir
        self.date_start = datetime.datetime.strptime(date_1, "%d%b%Y")
        self.date_end = datetime.datetime.strptime(date_2, "%d%b%Y")
        self.query_style = query_style
        self.footprint = geojson_to_wkt(read_geojson(footprint))
        self.lat = lat
        self.lon = lon
        self.download = download
        # configurations
        self.api = SentinelAPI('scihub_username', 'scihub_passwd',
                               'https://scihub.copernicus.eu/dhus')
        self.producttype = 'GRD'  # SLC, GRD, OCN
        self.orbitdirection = 'ASCENDING'  # ASCENDING, DESCENDING
        self.sensoroperationalmode = 'IW'  # SM, IW, EW, WV

    def sentinel1_download(self):
        global download_candidate
        if self.query_style == 'coordinate':
            download_candidate = self.api.query('POINT({0} {1})'.format(self.lon, self.lat),
                                                date=(self.date_start, self.date_end),
                                                producttype=self.producttype,
                                                orbitdirection=self.orbitdirection,
                                                sensoroperationalmode=self.sensoroperationalmode)
        elif self.query_style == 'footprint':
            download_candidate = self.api.query(self.footprint,
                                                date=(self.date_start, self.date_end),
                                                producttype=self.producttype,
                                                orbitdirection=self.orbitdirection,
                                                sensoroperationalmode=self.sensoroperationalmode)
        else:
            print("Define query attribute")

        # list the scenes found and their sizes
        title_found_sum = 0
        for key, value in download_candidate.items():
            for k, v in value.items():
                if k == 'title':
                    title_info = v
                    title_found_sum += 1
                elif k == 'size':
                    print("title: " + title_info + " | " + v)
        print("Total found " + str(title_found_sum) + " title of " +
              str(self.api.get_products_size(download_candidate)) + " GB")

        os.chdir(self.input_dir)
        if self.download:
            # skip the download if the zip files are already in the folder
            if not glob.glob(self.input_dir + "*.zip"):
                self.api.download_all(download_candidate)
            else:
                print("Nothing to download")
        else:
            print("Escaping download")
        # proceed to processing after the download is complete
        self.sentinel1_preprocess()

    def sentinel1_preprocess(self):
        # get snappy operators
        snappy.GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()
        # HashMap key-value pairs
        HashMap = snappy.jpy.get_type('java.util.HashMap')

        for folder in glob.glob(self.input_dir + "\*"):
            gc.enable()
            if folder.endswith(".zip"):
                timestamp = folder.split("_")[5]
                sentinel_image = ProductIO.readProduct(folder)
                if self.date_start <= datetime.datetime.strptime(timestamp[:8], "%Y%m%d") <= self.date_end:
                    # add orbit file
                    self.sentinel1_preprocess_orbit_file(timestamp, sentinel_image, HashMap)
                    # remove border noise
                    self.sentinel1_preprocess_border_noise(timestamp, HashMap)
                    # remove thermal noise
                    self.sentinel1_preprocess_thermal_noise_removal(timestamp, HashMap)
                    # calibrate image to output to Sigma and dB
                    self.sentinel1_preprocess_calibration(timestamp, HashMap)
                    # TOPSAR Deburst for SLC images
                    if self.producttype == 'SLC':
                        self.sentinel1_preprocess_topsar_deburst_SLC(timestamp, HashMap)
                    # multilook
                    self.sentinel1_preprocess_multilook(timestamp, HashMap)
                    # subset using a WKT of the study area
                    self.sentinel1_preprocess_subset(timestamp, HashMap)
                    # finally terrain correction, can use local data but went for the default
                    self.sentinel1_preprocess_terrain_correction(timestamp, HashMap)
                    # break  # try this if you want to check the result one by one

    def sentinel1_preprocess_orbit_file(self, timestamp, sentinel_image, HashMap):
        start_time_processing = datetime.datetime.now()
        orb = self.input_dir + "\\orb_" + timestamp
        if not os.path.isfile(orb + ".dim"):
            parameters = HashMap()
            orbit_param = snappy.GPF.createProduct("Apply-Orbit-File", parameters, sentinel_image)
            ProductIO.writeProduct(orbit_param, orb, 'BEAM-DIMAP')  # BEAM-DIMAP, GeoTIFF-BigTiff
            print("orbit file added: " + orb + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + orb)

    def sentinel1_preprocess_border_noise(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        border = self.input_dir + "\\bordr_" + timestamp
        if not os.path.isfile(border + ".dim"):
            parameters = HashMap()
            border_param = snappy.GPF.createProduct("Remove-GRD-Border-Noise", parameters,
                                                    ProductIO.readProduct(self.input_dir + "\\orb_" + timestamp + ".dim"))
            ProductIO.writeProduct(border_param, border, 'BEAM-DIMAP')
            print("border noise removed: " + border + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + border)

    def sentinel1_preprocess_thermal_noise_removal(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        thrm = self.input_dir + "\\thrm_" + timestamp
        if not os.path.isfile(thrm + ".dim"):
            parameters = HashMap()
            thrm_param = snappy.GPF.createProduct("ThermalNoiseRemoval", parameters,
                                                  ProductIO.readProduct(self.input_dir + "\\bordr_" + timestamp + ".dim"))
            ProductIO.writeProduct(thrm_param, thrm, 'BEAM-DIMAP')
            print("thermal noise removed: " + thrm + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + thrm)

    def sentinel1_preprocess_calibration(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        calib = self.input_dir + "\\calib_" + timestamp
        if not os.path.isfile(calib + ".dim"):
            parameters = HashMap()
            parameters.put('outputSigmaBand', True)
            parameters.put('outputImageScaleInDb', False)
            calib_param = snappy.GPF.createProduct("Calibration", parameters,
                                                   ProductIO.readProduct(self.input_dir + "\\thrm_" + timestamp + ".dim"))
            ProductIO.writeProduct(calib_param, calib, 'BEAM-DIMAP')
            print("calibration complete: " + calib + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + calib)

    def sentinel1_preprocess_topsar_deburst_SLC(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        deburst = self.input_dir + "\\dburs_" + timestamp
        if not os.path.isfile(deburst + ".dim"):  # ".dim" added for consistency with the other steps
            parameters = HashMap()
            parameters.put('outputSigmaBand', True)
            parameters.put('outputImageScaleInDb', False)
            deburst_param = snappy.GPF.createProduct("TOPSAR-Deburst", parameters,
                                                     ProductIO.readProduct(self.input_dir + "\\calib_" + timestamp + ".dim"))
            ProductIO.writeProduct(deburst_param, deburst, 'BEAM-DIMAP')
            print("deburst complete: " + deburst + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + deburst)

    def sentinel1_preprocess_multilook(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        multi = self.input_dir + "\\multi_" + timestamp
        if not os.path.isfile(multi + ".dim"):
            parameters = HashMap()
            parameters.put('outputSigmaBand', True)
            parameters.put('outputImageScaleInDb', False)
            multi_param = snappy.GPF.createProduct("Multilook", parameters,
                                                   ProductIO.readProduct(self.input_dir + "\\calib_" + timestamp + ".dim"))
            ProductIO.writeProduct(multi_param, multi, 'BEAM-DIMAP')
            print("multilook complete: " + multi + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + multi)

    def sentinel1_preprocess_subset(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        subset = self.input_dir + "\\subset_" + timestamp
        if not os.path.isfile(subset + ".dim"):
            WKTReader = snappy.jpy.get_type('com.vividsolutions.jts.io.WKTReader')
            # converting shapefile to GEOJSON and WKT is easy with any free online tool
            wkt = "POLYGON((92.330290184197 20.5906091141114,89.1246637610338 21.6316051481971," \
                  "89.0330319081811 21.7802436586492,88.0086282580443 24.6678836192818,88.0857830091018 " \
                  "25.9156771178278,88.1771488779853 26.1480664053835,88.3759125970998 26.5942658997298," \
                  "88.3876586919721 26.6120432770312,88.4105534167129 26.6345128356038,89.6787084683935 " \
                  "26.2383305017275,92.348481691233 25.073636976939,92.4252199249342 25.0296592837972," \
                  "92.487261172615 24.9472465376954,92.4967290851295 24.902213855393,92.6799861774377 " \
                  "21.2972058618174,92.6799346581579 21.2853347419811,92.330290184197 20.5906091141114))"
            geom = WKTReader().read(wkt)
            parameters = HashMap()
            parameters.put('geoRegion', geom)
            subset_param = snappy.GPF.createProduct("Subset", parameters,
                                                    ProductIO.readProduct(self.input_dir + "\\multi_" + timestamp + ".dim"))
            ProductIO.writeProduct(subset_param, subset, 'BEAM-DIMAP')
            print("subset complete: " + subset + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + subset)

    def sentinel1_preprocess_terrain_correction(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        terr = self.input_dir + "\\terr_" + timestamp
        if not os.path.isfile(terr + ".dim"):
            parameters = HashMap()
            # parameters.put('demResamplingMethod', 'NEAREST_NEIGHBOUR')
            # parameters.put('imgResamplingMethod', 'NEAREST_NEIGHBOUR')
            # parameters.put('pixelSpacingInMeter', 10.0)
            terr_param = snappy.GPF.createProduct("Terrain-Correction", parameters,
                                                  ProductIO.readProduct(self.input_dir + "\\subset_" + timestamp + ".dim"))
            ProductIO.writeProduct(terr_param, terr, 'BEAM-DIMAP')
            print("terrain corrected: " + terr + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + terr)


input_dir = "path_to_project_folder\Sentinel_1"
start_date = '01Mar2019'
end_date = '10Mar2019'
query_style = 'footprint'  # 'footprint' to use a GEOJSON, 'coordinate' to use a lat-lon
footprint = 'path_to_project_folder\bd_bbox.geojson'
lat = 26.23
lon = 88.56

sar = sentinel1_download_preprocess(input_dir, start_date, end_date, query_style,
                                    footprint, lat, lon, True)  # proceed to download by setting 'True', default is 'False'
sar.sentinel1_download()

The geojson file is created from a very generalised shapefile of Bangladesh using ArcGIS Pro. There are a lot of free online tools to convert a shapefile to geojson and WKT. Notice that the code will skip the download if the file is already there but will keep the processing on, so comment out line 197 when necessary. Updated the code almost completely.
The steps used here to process raw Sentinel-1 files are not the most generic way; note that there is no single authoritative workflow for this. Since different research requires different steps to prepare raw data, you will need to follow your own. Also published at clubgis.
  10. 3 points
- Details geological-geophysical aspects of groundwater treatment
- Discusses regulatory legislation regarding groundwater utilization
- Serves as a reference for scientists in geology, geophysics and environmental studies
  11. 2 points
    For those like me whose mother tongue is not English, I recommend this site for translations (English, French, German, Italian, Spanish, Portuguese, Russian, Chinese, Japanese, etc.)... fantastic and intuitive, based on artificial intelligence: https://www.deepl.com/

Another interesting website: https://www.linguee.com/
  12. 2 points
    A new set of 10 ArcGIS Pro lessons empowers GIS practitioners, instructors, and students with essential skills to find, acquire, format, and analyze public domain spatial data to make decisions. Described in this video, this set was created for 3 reasons: (1) to provide a set of analytical lessons that can be immediately used, (2) to update the original 10 lessons created by my colleague Jill Clark and me to provide a practical component to our Esri Press book The GIS Guide to Public Domain Data, and (3) to demonstrate how ArcGIS Desktop (ArcMap) lessons can be converted to Pro and to reflect upon that process. The activities can be found here. This essay is mirrored on the Esri GeoNet education blog and the reflections are below and in this video.

Summary of lessons (can be used in full, in part, or modified to suit your own needs):

- 10 lessons. 64 work packages. A "work package" is a set of tasks focused on solving a specific problem.
- 370 guided steps. 29 to 42 hours of hands-on immersion. Over 600 pages of content.
- 100 skills are fostered, covering GIS tools and methods, working with data, and communication.
- 40 data sources are used, covering 85 different data layers.
- Themes covered: climate, business, population, fire, floods, hurricanes, land use, sustainability, ecotourism, invasive species, oil spills, volcanoes, earthquakes, agriculture.
- Areas covered: the globe, and also Brazil, New Zealand, the Great Lakes of the USA, Canada, the Gulf of Mexico, Iceland, the Caribbean Sea, Kenya, Orange County California, Nebraska, Colorado, and Texas USA.
- Aimed at university-level graduate students and university or community college undergraduates. Some GIS experience is very helpful, though not absolutely required. Still, my advice is not to use these lessons for students' first exposure to GIS, but rather in an intermediate or advanced setting.

How to access the lessons: The ideal way to work through the lessons is in a Learn Path, which bundles the readings of the book's chapters, selected blog essays, and the hands-on activities. The Learn Path is split into 3 parts, as follows:

Solving Problems with GIS and public domain geospatial data 1 of 3: Learn how to find, evaluate, and analyze data to solve location-based problems through this set of 10 chapters and short essay readings, and 10 hands-on lessons: https://learn.arcgis.com/en/paths/the-gis-guide-to-public-domain-data-learn-path/
Solving Problems with GIS and public domain geospatial data 2 of 3: https://learn.arcgis.com/en/paths/the-gis-guide-to-public-domain-data-learn-path-2/
Solving Problems with GIS and public domain geospatial data 3 of 3: https://learn.arcgis.com/en/paths/the-gis-guide-to-public-domain-data-learn-path-3/

The Learn Paths allow the content to be worked through in sequence. You can also access the lessons via this gallery in ArcGIS Online. If you would like to modify the lessons for your own use, feel free! This is why the lessons have been provided in a zipped bundle as PDF files here and as MS Word DOCX files here. This video provides an overview.

source: https://spatialreserves.wordpress.com/2020/05/14/10-new-arcgis-pro-lesson-activities-learn-paths-and-migration-reflections/
  13. 2 points
    Stop me if you’ve heard this before. DJI has introduced its latest enterprise powerhouse drone, the DJI Matrice 300 RTK. We learned a lot about the drone earlier this week due to a few huge leaks of specs, features, photos, and videos. But it's worth looking at the drone again now that it's official, along with an incredible intro video.

Also called the M300 RTK, this drone is an upgrade in every way over its predecessor, the M200 V2. That includes a very long flight time of 55 minutes, six-direction obstacle avoidance, and a doubled (6 pound) payload capability. That allows it to carry a range of powerful cameras, which we'll get to in a bit. The drone is also built for weather extremes. IP45 weather sealing keeps out rain and dust. And a self-heating battery helps the drone run in a broad range of temperatures, from -4 to 122 Fahrenheit.

The DJI Matrice 300 RTK can fly up to 15 kilometers (9.3 miles) from its controller and still stream 1080p video back home. That video and other data can be protected using AES-256 encryption. The drone can also be flown by two co-pilots, with one able to take over for the other if any problem arises or for a handoff scenario.

A workhorse inspection drone

All these capabilities are targeted at the DJI Matrice 300 RTK's purpose as a drone for heavy-duty visual inspection and data collection work, such as surveys of power lines or railways. In fact, it incorporates many advanced camera features for the purpose. Smart inspection is a new set of features to optimize data collection. It includes live mission recording, which allows the drone to record every aspect of a flight, even camera settings. This allows workers to train a drone on an inspection mission that it will repeat again and again. With AI spot check, operators can mark the specific part of the photo, such as a transformer, that is the subject of inspection. AI algorithms compare that to what the camera sees on a future flight, so that it can frame the subject identically on every flight.

An inspection drone is only as good as its cameras, and the M300 RTK offers some powerful options from DJI's Zenmuse H20 series. The first option is a triple-camera setup. It includes a 20-megapixel, 23x zoom camera; a 12MP wide-angle camera; and a laser rangefinder that measures out to 1,200 meters (3,937 feet). The second option adds a radiometric thermal camera. To make things simpler for operators, the drone provides a one-click capture feature that grabs videos or photos from three cameras at once, without requiring the operator to switch back and forth.

Eyes and ears ready for danger

With its flight time and range, the DJI Matrice 300 RTK could be flying some long, complex missions, easily beyond visual line of sight (if its owner gets an FAA Part 107 waiver for that). This requires some solid safety measures. While the M200 V2 has front-mounted sensors, the M300 RTK has sensors in six directions for a full view of the surroundings. The sensors can register obstacles up to 40 meters (98 feet) away. Like all new DJI drones, the M300 RTK also features the company's AirSense technology. An ADS-B receiver picks up signals from manned aircraft that are nearby and alerts the drone pilot of their location.

It's been quite a few weeks for DJI. On April 27, it debuted its most compelling consumer drone yet, the Mavic Air 2. Now it's showing off its latest achievement at the other end of the drone spectrum with the industrial grade Matrice 300 RTK.
These two very different drones help illustrate the depth of product that comes from the world's biggest drone maker. And the company doesn't show signs of slowing down, despite the COVID-19 economic crisis. Next up, we suspect, will be a revision to its semi-pro quadcopter line in the form of a Mavic 3. The Matrice 300 RTK is available at DJI.

source: https://dronedj.com/2020/05/07/dji-matrice-300-rtk-drone-official/
  14. 2 points
    @intertronic, thanks for your input. I found a solution that suits my case better, due to the fact that we are using both versions of QGIS and also because I was looking for interoperability. Therefore I have decided to use QSphere. Most probably not well known around the globe. https://qgis.projets.developpement-durable.gouv.fr/projects/qsphere The GUI is quite ugly, but at least it is doing the job. 😉 darksabersan
  15. 2 points
    DRONE MAKER DJI announced an update to its popular Mavic Air quadcopter today. The Mavic Air 2 will cost $799 when it ships to US buyers in late May. That's the same price as the previous Mavic Air model, so the drone stays as DJI's mid-range option between its more capable Mavic 2 and its smaller, cheaper Mavic Mini. The Mavic Air 2 is still plenty small, but the new version has put on some weight. DJI says that testing and consumer surveys suggested that most people don't mind lugging a few extra grams in exchange for a considerable upgrade in flight time and, presumably, better handling in windy conditions. Even better, thanks to a new rotor design and other aerodynamic improvements, DJI is claiming the Mavic Air 2 can remain aloft for 34 minutes, a big jump from the 21 minutes of flight time on the original Mavic Air.

The Camera Eye

The big news in this update is the new, larger imaging sensor in the drone's camera. The Mavic Air 2's camera ships with a half-inch sensor, up from the 1/2.3-inch sensor found in the previous model. That should mean better resolution and sharper images, especially because the output specs haven't changed much. The new camera still outputs 12-megapixel stills, but now has a bigger sensor to fill that frame with more detail. There's also a new composite image option that joins together multiple single shots into a large, 48-megapixel image.

On the video side, there's some exciting news. The Mavic Air 2 is DJI's first drone to offer 4K video at 60 frames per second and 120 Mbps; previous DJI drones topped out at 30 fps when shooting in full 4K resolution. There are also slow-motion modes that slow down footage to four times slower than real life (1080p at 120 fps), or eight times slower (1080p at 240 fps). Combine those modes with the more realistic contrast you get with the HDR video standard, and you have considerably improved video capabilities in a sub-$1,000 drone.

More interesting in some ways is DJI's increasing forays into computational photography, which the company calls Smart Photo mode. Flip on Smart Photo and the Mavic Air 2 will do scene analysis, tap its machine intelligence algorithms, and automatically choose between a variety of photo modes. There's a scene recognition mode where the Mavic Air 2 sets the camera up to best capture one of a variety of scenarios you're likely to encounter with drone photography, including blue skies, sunsets, snow, grass, and trees. In each case, exposure is adjusted to optimize tone and detail. The second Smart Photo mode is dubbed Hyperlight, which handles low-light situations. To judge by DJI's promo materials, this is essentially an HDR photography mode specifically optimized for low-light scenes. It purportedly cuts noise and produces more detailed images. The final smart mode is HDR, which takes seven images in rapid succession, then combines elements of each to make a final image with a higher dynamic range. One last note about the camera: its shape has changed, so if you have any lenses or other accessories for previous DJI drones, they won't attach to the Air 2.

Automatic Flight for the People

If you dig through older YouTube videos, there's a ton of movies that play out like this: unbox new drone, head outside, take off, tree gets closer, closer, closer, black screen. Most of us just aren't that good at flying, and the learning curve can be expensive and steep. Thankfully, drone companies began automating away most of what's difficult about piloting a quadcopter, and DJI is no exception.
The company has added some new automated flight tricks to the Air's arsenal. DJI's Active Track has been updated to version 3.0, which brings better subject recognition algorithms and some new 3D mapping tricks to make it easier to automatically track people through a scene, keeping the camera on the subject as the drone navigates overhead to stay with them. DJI claims the Point of Interest mode, which allows you to select an object and fly around it in a big circle while the camera stays pointed at the subject, is better at tracking some of the objects that previous versions struggled with, like vehicles or even people.

The most exciting new flight mode is Spotlight, which comes from DJI's high-end Inspire drone used by professional photographers and videographers to carry their DSLR cameras into the sky. Similar to the Active Track mode, Spotlight keeps the camera pointed at a moving subject. But while Active Track automates the drone's flight, the new Spotlight mode allows the human pilot to retain control of the flight path for more complex shots.

Finally, the range of the new Mavic Air 2 has been improved, and it can now wander an impressive six miles away from the pilot in ideal conditions. The caveat here is that you should always maintain visual contact with your drone for safety reasons. However, you aren't going to be able to see the Mavic Air 2 when it's two miles away, let alone six.

Despite a dearth of competitors, DJI continues to put out new drones and improve its lineup as it progresses. The Mavic Air 2 looks like an impressive update to what was already one of our favorite drones, especially considering that several features, like the 60 fps 4K video and 34-minute flight time, even best those found on the more expensive Mavic 2 Pro.

links: https://www.dji.com/id/mavic-air-2
  16. 2 points
    I like drones, but I just got more interested in this.
  17. 2 points
    Harvard Online Courses Advance your career. Pursue your passion. Keep learning. links: https://online-learning.harvard.edu/CATALOG/FREE
  18. 2 points
  19. 2 points
    Saw similar news last month: Using Machine Learning to "Nowcast" Precipitation in High Resolution, by Google. The results seemed pretty good. Here is a visualization of predictions made over the course of roughly one day. Left: the 1-hour HRRR prediction made at the top of each hour, the limit to how often HRRR provides predictions. Center: the ground truth, i.e., what we are trying to predict. Right: the predictions made by our model. Our predictions are made every 2 minutes (displayed here every 15 minutes) at roughly 10 times the spatial resolution of HRRR. Notice that we capture the general motion and general shape of the storm. The two methods seem similar.
  20. 2 points
    With Huawei basically blocked from using Google services and infrastructure, the firm has taken steps to replace Google Maps on its hardware by signing a partnership with TomTom to provide maps, navigation, and traffic data to Huawei apps. Reuters reports that Huawei is entering this partnership with TomTom as the mapping tech company is based in the Netherlands — therefore side-stepping the bans on working with US firms. TomTom will provide the Chinese smartphone manufacturer with mapping, live traffic data, and software on smartphones and tablets. TomTom spokesman Remco Meerstra confirmed to Reuters that the deal had been closed some time ago but had not been made public by the company. This comes as TomTom unveiled plans to move away from making navigation hardware and will focus more heavily on offering software services — making this a substantial step for TomTom and Huawei. While TomTom doesn’t quite match the global coverage and update speed of Google Maps, having a vital portion of it filled by a dedicated navigation and mapping firm is one step that might appease potential global Huawei smartphone buyers. There is no denying the importance of Google app access outside of China but solid replacements could potentially make a huge difference — even more so if they are recognizable by Western audiences. It’s unclear when we may see TomTom pre-installed on Huawei devices but we are sure that this could be easily added by way of an OTA software update. The bigger question remains if people are prepared to switch from Google Maps to TomTom for daily navigation. resource: https://9to5google.com/2020/01/20/huawei-tomtom/
  21. 2 points
    January 3, 2020 - Recent Landsat 8 Safehold Update

On December 19, 2019 at approximately 12:23 UTC, Landsat 8 experienced a spacecraft constraint which triggered entry into a Safehold. The Landsat 8 Flight Operations Team recovered the satellite from the event on December 20, 2019 (DOY 354). The spacecraft resumed nominal on-orbit operations and ground station processing on December 22, 2019 (DOY 356).

Data acquired between December 22, 2019 (DOY 356) and December 31, 2019 (DOY 365) exhibit some increased radiometric striping and minor geometric distortions, in addition to the normal Operational Land Imager/Thermal Infrared Sensor (OLI/TIRS) alignment offset apparent in Real-Time tier data. Acquisitions after December 31, 2019 (DOY 365) are consistent with pre-Safehold Real-Time tier data and are suitable for remote sensing use where applicable. All acquisitions after December 22, 2019 (DOY 356) will be reprocessed to meet typical Landsat data quality standards after the next TIRS Scene Select Mirror (SSM) calibration event, scheduled for January 11, 2020.

[Image caption: Landsat 8 Operational Land Imager acquisition on December 22, 2019 (path 148/row 044) after the spacecraft resumed nominal on-orbit operations and ground station processing. This acquisition demonstrates the increased radiometric striping and minor geometric distortions observed in all data acquired between December 22, 2019 and December 31, 2019. All acquisitions after December 22, 2019 will be reprocessed on January 11, 2020 to achieve typical Landsat data quality standards.]

Data not acquired during the Safehold event are listed below and displayed in purple on the accompanying map of Landsat 8 scenes not acquired from Dec 19-22, 2019:

Path 207 Rows 160-161
Path 223 Rows 60-178
Path 6 Rows 22-122
Path 22 Rows 18-122
Path 38 Rows 18-122
Path 54 Rows 18-214
Path 70 Rows 18-120
Path 86 Rows 24-110
Path 102 Rows 19-122
Path 118 Rows 18-185
Path 134 Rows 18-133
Path 150 Rows 18-133
Path 166 Rows 18-222
Path 182 Rows 18-131
Path 198 Rows 18-122
Path 214 Rows 34-122
Path 230 Rows 54-179
Path 13 Rows 18-122
Path 29 Rows 20-232
Path 45 Rows 18-133

After recovering from the Safehold successfully, data acquired on December 20, 2019 (DOY 354) and from most of the day on December 21, 2019 (DOY 355) were ingested into the USGS Landsat Archive and marked as "Engineering". These data are still being assessed to determine if they will be made available for download to users through all USGS Landsat data portals.

source: https://www.usgs.gov/land-resources/nli/landsat/january-3-2020-recent-landsat-8-safehold-update
  22. 2 points
    just found this interesting article on the Agisoft forum: source: https://www.agisoft.com/forum/index.php?topic=7851.0
  23. 2 points
    One of my favorite image hosting services is closing down; this is their announcement: Rest in Peace TinyPic
  24. 2 points
    Not necessarily for the Excel environment, but: https://github.com/orbisgis/h2gis/wiki/4.2-LibreOffice
  25. 2 points
    This is an interesting topic from a not-so-old webpage. I was searching for use cases of blockchain in a geospatial context and found this. The context is still challenging, but very noteworthy.

What is a blockchain and how is it relevant for geospatial applications? (By Jonas Ellehauge, awesome map tools, Norway)

A blockchain is an immutable trustless registry of entries, hosted on an open distributed network of computers (called nodes). It is potentially safer and cheaper than traditional centralised databases, is resilient to attacks, enhances transparency and accountability and puts people in control of their own data. Blockchain technology is already being used in some geospatial applications, as explained here. As an immutable registry for transactions of digital tokens, blockchain is suitable for geospatial applications involving data that is sensitive or a public good, autonomous devices and smart contracts.

Use Cases

The use cases are discussed further below. I have given a few short talks about this topic at various conferences, most recently at the international FOSS4G conference in Bonn, Germany, 2016.

Public-Good Data

Open Data is Still Centralised Data

Over the past two decades, I have seen how 'public-good' geospatial data has generally become much easier to get hold of, having originally been very inaccessible to most people. Gradually, the software to display and process the data became cheaper or even free, but the data itself – data that people had already paid for through their taxes – remained inaccessible. Some national mapping institutions and cadastres began distributing the data via the internet, although mostly with a price tag. Only in recent years have a few countries in Europe made public map data freely accessible. In the meantime, projects like OpenStreetMap have emerged in order to meet people's need for open data. It is hardly a surprise, then, that a myriad of new apps, mock-ups and business cases emerge in a region shortly after data is made available to the public there.

Truly Public Open Data

One of the reasons that this data has remained inaccessible for so long is that it is collected and distributed through a centralised organisation. A small group of people manage enormous repositories of geospatial data and can restrict or grant access to it. As I see it, this is where blockchain and related technologies like IPFS can enable people to build systems where the data is inherently public, no one controls it, anyone can access it, and anyone can review the full history of contributions to the data. Would it be free of charge to use data from such a system? Who would pay for it? I guess time will tell which business model is the most sustainable in that respect. OpenStreetMap is free to use, it is immensely popular and yet people gladly contribute to it – so who pays the cost for OSM? Bear in mind that there's no such thing as 'free data'. For example, the 'free' open data in Denmark today is paid for through taxes. So, even if it would cost a little to use the blockchain-based data, that wouldn't be so different from now – just that no one would be able to restrict access to the data, plus the open nature of competing nodes and contributors will minimise the costs.

Autonomous Devices & Apps

Uber and Airbnb are examples of consumer applications that rely on geospatial data and processing.
They represent a centralised approach where the middleman owns and controls the data and charges a significant fee for connecting clients and providers with each other. If such apps were replaced by distributed peer-to-peer systems, they could be cheaper and give their users full control of their data. There is already such an alternative to Uber called Arcade.City. A peer-to-peer market app like OpenBazar may also benefit from geospatial components with regards to e.g. search and logistics. Such autonomous apps may currently have to rely on third parties for their geospatial components – e.g. Google Maps, Mapbox, OpenStreetMap, etc. With access to truly publicly distributed data as described above, such apps would be even more reliable and cheaper to run.

An autonomous device such as a drone or a self-driving car inherently runs an autonomous application, so these two concepts are heavily intertwined. There's no doubt that self-navigating cars and drones will be a growing market in the near future. Uber and Tesla have big ambitions regarding cars, drones are being designed for delivery of consumer products (Amazon), and drone-based emergency response (drone defibrillator) and imaging (automatic selfie drone 'Lily') applications are emerging. Again, distributed peer-to-peer apps could cut out the middleman and the reliance on third parties for their navigation and other geospatial components.

Land Ownership

What is Property?

After some years in the GIS software industry, I realised that a very large part of my work revolved around cadastres/parcels and other administrative borders plus technical base maps featuring roads, buildings, etc. In view of my background in physical geography, I thought that was pretty boring stuff, and I dreamt about creating maps and applications that involved temperatures, wind, currents, salinity, terrain models, etc., because it felt more 'real'. I gradually realised that something about administrative data was nagging me – as if it didn't actually represent reality. Lately, I have taken an interest in philosophy about human interaction, voluntary association and self-ownership. It turns out that property is a moral, philosophical concept of assets acquired through voluntary transactions or homesteading. This perspective stretches at least as far back as John Locke in the 17th century. Such justly acquired property is reality, whereas law, governance services and computer code are systems that attempt to model reality. When such systems don't fit reality, the system is wrong and should be dismissed, possibly adjusted or replaced.

Land Ownership

For the vast majority of people in many developing countries, there is no mapping of parcels or proof of ownership available to the actual landowners. Christiaan Lemmen, an expert on cadastres, has experience from field work to map parcels in developing countries such as Nigeria, Liberia, etc., where corruption can be a big challenge within land administration. In his experience, however, people mostly agree on who owns what in their local communities. These people often have a need for proof of identity and proof of ownership for their justly acquired land in order to generate wealth, invest in their future and prevent fraud – while they often face problems with inefficient, expensive or corrupt government services.
Ideally, we could build inexpensive, reliable and easy-to-use blockchain-based systems that will enable people to map and register their land together with their neighbours – without involving any government officials, lawyers or other middlemen.

Geodesic Grids

It has been suggested to use geodesic grids of discrete cells to register land ownership on a blockchain. Such cells can be shaped, e.g., as squares, triangles, pentagons, hexagons, etc., and each cell has a unique identifier. In a traditional cadastral system, parcels are represented with flexible polygons, which allows users to register any possible shape of a parcel. Although a grid of discrete cells doesn’t allow such flexible polygons, it has an advantage in this case: each digital token on the blockchain (let’s call it a ‘Landcoin’) can represent one unique cell in the grid. Hence, whoever owns a particular Landcoin owns the corresponding piece of land. Owning such a Landcoin means possessing the private encryption key that controls it – which is how other cryptocurrencies work. In order to represent complex and high-resolution geometries, it is preferable to use a grid which is infinitely sub-divisible, so that ever-smaller triangles, hexagons or squares, etc., can be tied together to represent any piece of land. A digital token can also be infinitely sub-divisible. For comparison, the smallest unit of a Bitcoin is currently a 100-millionth – aka a ‘Satoshi’. If needed, the core software could be upgraded to support even smaller units.

What is a Blockchain?

A blockchain is an immutable trustless registry of entries, hosted on an open distributed network of computers (called nodes). It is potentially safer and cheaper than traditional centralised databases, is resilient to attacks, enhances transparency and accountability, and puts people in control of their own data.

Safer – because no one controls all the data (known as root privilege in existing databases). Each entry has its own pair of public and private encryption keys and only the holder of the private key can unlock the entry and transfer it to someone else.

Immutable – because each block of entries (added every 1-10 minutes) carries a unique hash ‘fingerprint’ of the previous block. Hence, older blocks cannot be tampered with.

Cheaper – because anyone can set up a node and get paid in digital tokens (e.g. Bitcoin or Ether) for hosting a blockchain. This ensures that competition between nodes will minimise the cost of hosting it. It also saves the costs of massive security layers that otherwise apply to servers with sensitive data – this is because of the no-root-privilege security model and, with old entries being immutable, there’s little need to protect them.

Resilient – because there is no single point of failure, there’s practically nothing to attack. In order to compromise a blockchain, you’d have to hack each individual user one by one in order to get hold of their private encryption keys that give access to that user’s data only. Another option is to run over 50% of the nodes, which is virtually impossible and economically impractical.

Transparency and accountability – the fact that existing entries cannot be tampered with makes a blockchain a transparent source of truth and history for your application. The public nature of it makes it easy to hold people accountable for their activities.

Control – the immutable and no-root-privilege character puts each user in full control of his/her own data using the private encryption keys.
This leads to real peer-to-peer interaction without any middleman and without an administrator that can deny users access to their data.

Trustless – because each user fully controls his/her own data, users can safely interact without knowing or trusting each other and without any trusted third parties.

Smart Contracts and DAPPs

A blockchain can be more than a passive registry of entries or transactions. The original Bitcoin blockchain supports limited scripting that allows for programmable transactions and smart contracts – e.g. where specified criteria must be fulfilled for transactions to take place automatically. Possibly the most popular alternative to Bitcoin is Ethereum, which is a multi-purpose blockchain with a so-called ‘Turing-complete’ programming interface, which allows developers to create virtually any imaginable application on this platform. Such applications are referred to as decentralised autonomous applications (DAPPs) and are virtually impossible for third parties to stop or censor.

IPFS

IPFS is a distributed file system and web protocol which can complement or even replace HTTP. Instead of referring to files by their location on a host or IP address, it refers to files by their content. This means that, when requested, IPFS will return the content from the nearest possible computer, or even from multiple computers, rather than from a central server. That could be the computer next to you, on your local network or somewhere in the neighbourhood.

Jonas Ellehauge is an expert on geospatial software, GIS and web development, enthusiastic about open source, Linux and UI/UX. Ellehauge is passionate about science, philosophy, entrepreneurship, economy and communication. His background in physical geography provides extensive knowledge of spatial analyses and spatial problem solving.
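As a toy illustration of the infinitely sub-divisible grid idea above (my own sketch, not from the article, and not tied to any real ‘Landcoin’ implementation), here is a minimal Python example of hierarchical cell identifiers on a simple lat/lon quadtree; a production system would more likely use a true geodesic grid such as S2 or a hexagonal DGGS:

# Minimal sketch: quadtree-style cell IDs on a lat/lon grid. Each level splits
# a cell into 4 quadrants, so IDs are arbitrarily sub-divisible, and a parent
# cell's ID is a prefix of all of its children's IDs -- the property that a
# hypothetical 'Landcoin' token could map onto.
def cell_id(lat, lon, level):
    south, west, north, east = -90.0, -180.0, 90.0, 180.0
    digits = []
    for _ in range(level):
        mid_lat = (south + north) / 2.0
        mid_lon = (west + east) / 2.0
        quad = 0
        if lat >= mid_lat:
            south = mid_lat          # point is in the northern half
        else:
            north = mid_lat
            quad += 2
        if lon >= mid_lon:
            west = mid_lon           # point is in the eastern half
            quad += 1
        else:
            east = mid_lon
        digits.append(str(quad))
    return "".join(digits)

print(cell_id(52.0, 4.3, 8))    # coarse cell ID (8 digits)
print(cell_id(52.0, 4.3, 16))   # finer cell ID; begins with the 8-digit ID above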
  26. 2 points
Multifunction casing: you can run 3D games and grate cheese for your hamburger. Excellent thought, Apple, as always. LOL
  27. 2 points
We are all already familiar with GPS navigation outdoors and the wonders it does not only for our everyday life, but also for business operations. Outdoor maps, allowing for navigation by car or on foot, have long helped mankind to find even the most remote and hidden places. Increased levels of efficiency, unprecedented levels of control over operational processes, route planning, monitoring of deliveries, safety and security regulations and much more have been made possible. Some places are, however, harder to reach and navigate than others: for instance, big indoor areas such as universities, hospitals, airports, convention centers or factories. Luckily, that struggle is about to become a thing of the past. So what’s the solution for navigating through and managing complex indoor buildings?

Indoor Mapping and Visualization with ArcGIS Indoors

The answer is simple – indoor mapping. Indoor mapping is a revolutionary concept that visualizes an indoor venue and spatial data on a digital 2D or 3D map. Showing places, people and assets on a digital map enables solutions such as indoor positioning and navigation. These, in turn, allow for many different use cases that help companies optimize their workflows and efficiency.

Mobile Navigation and Data

The idea behind this solution is the same as outdoor navigation, only it allows you to see routes and locate objects and people in a closed environment. As GPS signals are not available indoors, different technology solutions based on iBeacons, WiFi or lighting are used to create indoor maps and enable positioning services. You can plan an indoor route from point A to point B with customized pins and remarks, analyze whether facilities are being used to their full potential, discover new business opportunities, evaluate user behavior and send users real-time targeted messages based on their location, intelligently park vehicles, and the list goes on!

With the help of geolocation, indoor mapping stores and provides versatile real-time data on everything that is happening indoors, including the placement and condition of assets and human movements. This allows for a common operating picture, where all stakeholders share the same level of information and insight into internal processes. Having a centralized mapping system enables effortless navigation through all the assets and keeps facility managers updated on the latest changes, which ultimately improves business efficiency. Just think how many operational insights can be gained through visualizations of assets on your customized map – you can monitor and analyze the whole infrastructure and optimize performance accordingly. How do you engage your users/visitors at the right time and place? What does it take to improve security management? Are the workflow processes moving seamlessly? Answers to those and many other questions can be found in an indoor mapping solution. Interactive indoor experiences are no longer a thing of the future; they are here and now.

source: https://www.esri.com/arcgis-blog/products/arcgis-indoors/mapping/what-is-indoor-mapping/
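The article does not go into the mechanics, but here is a rough sketch of how the beacon-based positioning it alludes to can work: received signal strengths (RSSI) from beacons at known positions are converted to distances with a log-distance path-loss model, and the position is then solved by least squares. All constants, coordinates and readings below are made-up illustration values, not calibrated ones:

# Rough sketch: estimating an indoor position from beacon signal strengths.
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi, tx_power=-59.0, path_loss_exp=2.0):
    """Convert received signal strength (dBm) to an approximate distance (m)."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exp))

# Known beacon positions (metres, local floor-plan coordinates) and measured RSSI.
beacons = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 15.0]])
rssi = np.array([-70.0, -65.0, -75.0])
dists = rssi_to_distance(rssi)

def residuals(p):
    # Difference between modelled beacon distances and RSSI-derived distances.
    return np.linalg.norm(beacons - p, axis=1) - dists

estimate = least_squares(residuals, x0=np.array([10.0, 5.0])).x
print(f"Estimated position (x, y) in metres: {estimate}")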
  28. 1 point
Hi, I wanted to say hello to all. I make maps. I mostly use Illustrator plus MAPublisher, as well as Photoshop, Manifold and Global Mapper. I prefer to use many small programs instead of one big monster like ArcGIS, but that is a question of taste, I think. Besides that, also Erdas and eCognition sometimes. Thank you all for the great software here.
  29. 1 point
    Hi remoteguy ! Welcome on board and enjoy your time with us and the community. darksabersan
  30. 1 point
GRASS GIS was, for a long time, something I dismissed as ‘too complex’ for my everyday geospatial operations. I formulated any number of excuses to work around the software and could not be convinced it had practical use in my daily work. It was ‘too hard to set up’, ‘never worked well with QGIS’, and ‘made my scripting processes a nightmare’.

In this example we will:

Part 1:
1. Download a small piece of elevation data from the LINZ Data Service
2. Build a GRASS environment to process these data
3. Build a BASH script to process the catchments
4. Import the elevation data into the GRASS environment
5. Perform some basic GRASS operations (fill and watershed)
6. Export the raster for viewing
7. Export the vector catchments to shapefile

Part 2:
1. Create multiple watershed boundaries of different sizes with GRASS, using a basic loop in BASH.
2. Clip the original raster by the watershed boundaries using GDAL and SQL, with a basic loop in BASH.

Links:
part 1: https://xycarto.com/2020/05/03/basic-grass-gis-with-bash/
part 2: https://xycarto.com/2020/05/05/basic-grass-gis-with-bash-plus-gdal/
source code: https://github.com/xycarto/xycarto_code/tree/master/scripts/grass/GRASS_BASH_blog
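The blog posts use BASH, as listed above. For a rough idea of what the part-1 processing steps look like, here is a condensed sketch using GRASS's Python scripting API instead. The module names are real GRASS modules, but 'dem.tif', the layer names and the threshold are placeholders, and the script must run inside a GRASS session (e.g. started with 'grass ... --exec python script.py'):

# Condensed sketch of the fill/watershed/export steps via grass.script.
import grass.script as gs

# Import the elevation data and set the region to match it.
gs.run_command("r.in.gdal", input="dem.tif", output="dem", overwrite=True)
gs.run_command("g.region", raster="dem")

# Basic operations: fill depressions, then delineate watershed basins.
gs.run_command("r.fill.dir", input="dem", output="dem_filled",
               direction="flow_dir", overwrite=True)
gs.run_command("r.watershed", elevation="dem_filled", threshold=10000,
               basin="basins", overwrite=True)

# Export the raster, then vectorize the basins and export to shapefile.
gs.run_command("r.out.gdal", input="basins", output="basins.tif", overwrite=True)
gs.run_command("r.to.vect", input="basins", output="basins_v", type="area",
               overwrite=True)
gs.run_command("v.out.ogr", input="basins_v", output="basins.shp",
               format="ESRI_Shapefile", overwrite=True)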
  31. 1 point
Changes in ocean circulation may have caused a shift in Atlantic Ocean ecosystems not seen for the past 10,000 years, new analysis of deep-sea fossils has revealed. This is the striking finding of a new study led by a research group I am part of at UCL, funded by the ATLAS project and published in the journal Geophysical Research Letters. The shift has likely already led to political tensions as fish migrate to colder waters.

The climate has been quite stable over the 12,000 years or so since the end of the last Ice Age, a period known as the Holocene. It is thought that this stability is what allowed human civilisation to really get going. In the ocean, the major currents are also thought to have been relatively stable during the Holocene. These currents have natural cycles, which affect where marine organisms can be found, including plankton, fish, seabirds and whales.

Yet climate change in the ocean is becoming apparent. Tropical coral reefs are bleaching, the oceans are becoming more acidic as they absorb carbon from the atmosphere, and species like herring or mackerel are moving towards the poles. But there still seems to be a prevailing view that not much has happened in the ocean so far – in our minds, the really big impacts are confined to the future.

Looking into the past

To challenge this point of view, we had to look for places where seabed fossils not only covered the industrial era in detail, but also stretched back many thousands of years. And we found the right patch of seabed just south of Iceland, where a major deep-sea current causes sediment to pile up in huge quantities.

To get our fossil samples we took cores of the sediment, which involves sending long plastic tubes to the bottom of the ocean and pushing them into the mud. When pulled out again, we were left with a tube full of sediment that can be washed and sieved to find fossils. The deepest sediment contains the oldest fossils, while the surface sediment contains fossils that were deposited within the past few years.

One of the simplest ways of working out what the ocean was like in the past is to count the different species of tiny fossil plankton that can be found in such sediments. Different species like to live in different conditions. We looked at a type called foraminifera, which have shells of calcium carbonate. Identifying them is easy to do using a microscope and a small paintbrush, which we use when handling the fossils so they don't get crushed.

A recent global study showed that modern foraminifera distributions are different from those at the start of the industrial era. Climate change is clearly already having an impact. Similarly, the view that modern ocean currents are like those of the past couple of thousand years was challenged by our work in 2018, which showed that the overturning "conveyor belt" circulation was at its weakest for 1,500 years. Our new work builds on this picture and suggests that modern North Atlantic surface circulation is different from anything seen in the past 10,000 years – almost the whole Holocene.

The effects of the unusual circulation can be found across the North Atlantic. Just south of Iceland, a reduction in the numbers of cold-water plankton species and an increase in the numbers of warm-water species show that warm waters have replaced cold, nutrient-rich waters. We believe that these changes have also led to a northward movement of key fish species such as mackerel, which is already causing political headaches as different nations vie for fishing rights.
Further north, other fossil evidence shows that more warm water has been reaching the Arctic from the Atlantic, likely contributing to melting sea ice. Further west, a slowdown in the Atlantic conveyor circulation means that waters are not warming as much as we would expect, while furthest west, close to the US and Canada, the warm Gulf Stream seems to be shifting northwards, which will have profound consequences for important fisheries.

One of the ways that these circulation systems can be affected is when the North Atlantic gets less salty. Climate change can cause this to happen by increasing rainfall, increasing ice melt, and increasing the amount of water coming out of the Arctic Ocean. Melting following the peak of the Little Ice Age in the mid-1700s may have triggered an input of freshwater, causing some of the earliest changes that we found, with modern climate change helping to propel those changes beyond the natural variability of the Holocene.

We still don't know what has ultimately caused these changes in ocean circulation. But it does seem that the ocean is more sensitive to modern climate changes than previously thought, and we will have to adapt.

source: https://www.sciencealert.com/fossils-reveal-our-ocean-is-changing-in-a-ways-it-hasn-t-for-10-000-years
  32. 1 point
Or another Swiss/French site: https://www.camptocamp.com/en/jobs/
  33. 1 point
For those who were looking for a style editor like Mapbox's for Esri basemaps, here is one. This is an interactive basemap style WYSIWYG editor readily usable with an ArcGIS Developer account.

How it works

Start by selecting an existing Esri vector basemap (e.g. World Street Map or Light Gray Canvas) and then just begin customizing the layer colors and labels from there. You can edit everything from fill and text symbols to fonts, halos, patterns, transparency, and zoom-level visibility. When you are finished, the styles are saved as an item in ArcGIS Online with a unique ID. The Map Viewer or any custom application can then reference the item to display the basemap.

Design Tools

The editor makes styling easy by allowing you to style many layers at once or by allowing you to search for individual layers of interest. Here are some of the options available:

Quick Edit – select all layers and style them at once
Edit by Color – select and replace a color for one or more layers
Edit Layer Styles – search for one or more layers to style
Layer Editor – click the map or the layer list to perform detailed editing on a layer

(Screenshots: quick edits; the layer editor.)

Try it!

To start customizing a basemap, sign in to the ArcGIS for Developers website and click "New Basemap Style". There are also new ArcGIS DevLabs for styling a vector tile basemap and displaying a styled vector basemap in your application. For more inspiration, visit this showcase of some custom styles we have created.

ArcGIS Vector Tile Style Editor
  34. 1 point
Did you mean that the area function is a spherical area function?
  35. 1 point
Take a look at this:

links: https://www.cambridge.org/core/what-we-publish/textbooks

Untested; maybe you need to make a free user account first. They have a nice collection of engineering and geosciences books.

https://www.cambridge.org/core/what-we-publish/textbooks/listing?aggs[productSubject][filters]=F470FBF5683D93478C7CAE5A30EF9AE8
https://www.cambridge.org/core/what-we-publish/textbooks/listing?aggs[productSubject][filters]=CCC62FE56DCC1D050CA1340C1CCF46F5
  36. 1 point
The British Geological Survey (BGS) has amassed one of the world's premier collections of geologic samples. Housed in three enormous warehouses in Nottingham, U.K., it contains about 3 million fossils gathered over more than 150 years at thousands of sites across the country. But this data trove "was not really very useful to anybody," says Michael Stephenson, a BGS paleontologist. Notes about the samples and their associated rocks "were sitting in boxes on bits of paper."

Now, that could change, thanks to a nascent international effort to meld earth science databases into what Stephenson and other backers are describing as a "geological Google." This network of earth science databases, called Deep-time Digital Earth (DDE), would be a one-stop link allowing earth scientists to access all the data they need to tackle big questions, such as patterns of biodiversity over geologic time, the distribution of metal deposits, and the workings of Africa's complex groundwater networks.

It's not the first such effort, but it has a key advantage, says Isabel Montañez, a geochemist at University of California, Davis, who is not involved in the project: funding and infrastructure support from the Chinese government. That backing "will be critical to [DDE's] success given the scope of the proposed work," she says.

In December 2018, DDE won the backing of the executive committee of the International Union of Geological Sciences, which said ready access to the collected geodata could offer "insights into the distribution and value of earth's resources and materials, as well as hazards—while also providing a glimpse of the Earth's geological future." At a meeting this week in Beijing, 80 scientists from 40 geoscience organizations, including BGS and the Russian Geological Research Institute, are discussing how to get DDE up and running by the time of the International Geological Congress in New Delhi in March 2020.

DDE grew out of a Chinese data digitization scheme called the Geobiodiversity Database (GBDB), initiated in 2006 by Chinese paleontologist Fan Junxuan of Nanjing University. China had long-running efforts in earth sciences, but the data were scattered among numerous collections and institutions. Fan, who was then at the Chinese Academy of Sciences's Nanjing Institute of Geology and Paleontology, organized GBDB around the stacks of geologic strata called sections and the rocks and fossils in each stratum.

Norman MacLeod, a paleobiologist at the Natural History Museum in London who is advising DDE, says GBDB has succeeded where similar efforts have stumbled. In the past, he says, volunteer earth scientists tried to do nearly everything themselves, including informatics and data management. GBDB instead pays nonspecialists to input reams of data gleaned from earth science journals covering Chinese findings. Then, paleontologists and stratigraphers review the data for accuracy and consistency, and information technology specialists curate the database and create software to search and analyze the data. Consistent funding also contributed to GBDB's success, MacLeod says. Although it started small, Fan says GBDB now runs on "several million" yuan per year.

Earth scientists outside China began to use GBDB, and it became the official database of the International Commission on Stratigraphy in 2012. BGS decided to partner with GBDB to lift its data "from the page and into cyberspace," as Stephenson puts it.
He and other European and Chinese scientists then began to wonder whether the informatics tools developed for GBDB could help create a broader union of databases. "Our idea is to take these big databases and make them use the same standards and references so a researcher could quickly link them to do big science that hasn't been done before," he says.

The Beijing meeting aims to finalize an organizational structure for DDE. Chinese funding agencies are putting up $75 million over 10 years to get the effort off the ground, Fan says. That level of support sets DDE apart from other cyberinfrastructure efforts "that are smaller in scope and less well funded," Montañez says. Fan hopes DDE will also attract international support. He envisions nationally supported DDE Centers of Excellence that would develop databases and analytical tools for particular interests. Suzhou, China, has already agreed to host the first of them, which will also house the DDE secretariat.

DDE backers say they want to cooperate with other geodatabase programs, such as BGS's OneGeology project, which seeks to make geologic maps of the world available online. But Mohan Ramamurthy, project director of the U.S. National Science Foundation–funded EarthCube project, sees little scope for collaboration with his effort, which focuses on current issues such as climate change and biosphere-geosphere interactions. "The two programs have very different objectives with little overlap," he says.

Fan also hopes individual institutions will contribute, by sharing data, developing analytical tools, and encouraging their scientists to participate. Once earth scientists are freed of the drudgery of combing scattered collections, he says, they will have time for more important challenges, such as answering "questions about the evolution of life, materials, geography, and climate in deep time."

source: https://www.sciencemag.org/news/2019/02/earth-scientists-plan-meld-massive-databases-geological-google
  37. 1 point
The Gridded Population of the World (GPW) collection, now in its fourth version (GPWv4), models the distribution of human population (counts and densities) on a continuous global raster surface. Since the release of the first version of this global population surface in 1995, the essential inputs to GPW have been population census tables and corresponding geographic boundaries. The purpose of GPW is to provide a spatially disaggregated population layer that is compatible with data sets from social, economic, and Earth science disciplines, and remote sensing. It provides globally consistent and spatially explicit data for use in research, policy-making, and communications.

For GPWv4, population input data are collected at the most detailed spatial resolution available from the results of the 2010 round of Population and Housing Censuses, which occurred between 2005 and 2014. The input data are extrapolated to produce population estimates for the years 2000, 2005, 2010, 2015, and 2020. A set of estimates adjusted to national-level, historic and future, population predictions from the United Nations' World Population Prospects report are also produced for the same set of years. The raster data sets are constructed from national or subnational input administrative units to which the estimates have been matched. GPWv4 is gridded with an output resolution of 30 arc-seconds (approximately 1 km at the equator).

The nine data sets of the current release are collectively referred to as the Revision 11 (or v4.11) data sets. In this release, several issues identified in the 4.10 release of December 2017 have been corrected as follows:

- The extent of the final gridded data has been updated to a full global extent.
- Erroneous no-data pixels in all of the gridded data were recoded as 0 in cases where the census reported known 0 values.
- The netCDF files were updated to include the Mean Administrative Unit Area layer, the Land Area and Water Area layers, and two layers indicating the administrative level(s) of the demographic characteristics input data.
- The National Identifier Grid was reprocessed to remove artefacts from inland water. In addition, two attributes were added to indicate the administrative levels of the demographic characteristics input data, and the data set zip files were corrected to include the National Identifier Polygons shapefile.
- Two new classes (Total Land Pixels and Ocean Pixels) were added to the Water Mask.
- The administrative level names of the Greece Administrative Unit Centre Points were translated to English.

Separate rasters are available for population counts and population density consistent with national censuses and population registers, or alternative sources in rare cases where no census or register was available. All estimates of population counts and population density have also been nationally adjusted to population totals from the United Nations' World Population Prospects: The 2015 Revision. In addition, rasters are available for basic demographic characteristics (age and sex), data quality indicators, and land and water areas. A vector data set of the centrepoint locations (centroids) for each of the input administrative units and a raster of national-level numeric identifiers are included in the collection to share information about the input data layers. The raster data sets are now available in ASCII (text) format as well as in GeoTIFF format. Five of the eight raster data sets are also available in netCDF format.
In addition, the native 30 arc-second resolution data were aggregated to four lower resolutions (2.5 arc-minute, 15 arc-minute, 30 arc-minute, and 1 degree) to enable faster global processing and support of research communities that conduct analyses at these resolutions. All of these resolutions are available in ASCII and GeoTIFF formats. NetCDF files are available at all resolutions except 30 arc-second. All spatial data sets in the GPWv4 collection are stored in geographic coordinate system (latitude/longitude).
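As a small usage sketch (my own, not part of the GPW documentation): summing the population inside a bounding box from one of the population-count GeoTIFFs with rasterio. The filename below is a placeholder for whichever count (not density) raster you downloaded:

# Sketch: total population inside a lon/lat bounding box from a GPWv4
# population-count GeoTIFF (the GPW grids are in geographic coordinates).
import numpy as np
import rasterio
from rasterio.windows import from_bounds

with rasterio.open("gpw_v4_population_count_rev11_2020_30_sec.tif") as src:
    window = from_bounds(left=5.0, bottom=50.0, right=10.0, top=54.0,
                         transform=src.transform)
    counts = src.read(1, window=window)
    if src.nodata is not None:
        counts = np.where(counts == src.nodata, 0.0, counts)  # drop no-data
    print(f"Estimated population in box: {counts.sum():,.0f}")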
  38. 1 point
Really nice! Is it possible to leverage this for forecasting? That would be interesting.
  39. 1 point
I have a project with AutoCAD files. Fired up my workstation laptop (Dell Precision 5510) and loaded the CAD data. Holy cr*p, this software runs like a snail 🤣 Tried disabling hardware acceleration: yeah, a much better experience, but still as laggy as the old ArcGIS Pro beta 😂 Searched around and found this article: https://knowledge.autodesk.com/support/autocad/troubleshooting/caas/sfdcarticles/sfdcarticles/Optimize-Performance-within-Windows-7-Environments.html?_ga=2.205082898.303799305.1579712200-1066991414.1579712200 I didn't have time to try all the suggestions yet, but hey, all GISArea members, do you use AutoCAD? How do you improve your CAD experience? Share with me 😉
  40. 1 point
Interesting article: North-South displacement field – 1999 Hector Mine earthquake, California

In complement to seismological records, knowledge of the ruptured fault geometry and co-seismic ground displacements are key data for investigating the mechanics of seismic rupture. This information can be retrieved from sub-pixel correlation of optical images. We are investigating the use of SPOT (Satellite pour l'Observation de la Terre) satellite images. The technique developed here is attractive due to the operational status of a number of optical imaging programs and the availability of archived data. However, uncertainties on the imaging system itself and on its attitude dramatically limit its potential. We overcome these limitations by applying an iterative corrective process allowing for precise image registration, taking advantage of the availability of accurate Digital Elevation Models with global coverage (SRTM). This technique is thus a valuable complement to SAR interferometry, which provides accurate measurements kilometers away from the fault but generally fails in the near-fault zone, where the fringes get noisy and saturated. A comparison between the two methods is briefly discussed, with application to the 1992 Landers earthquake in California (Mw 7.3). Applications of this newly developed technique are presented: the horizontal co-seismic displacement fields induced by the 1999 Hector Mine earthquake in California (Mw 7.1) and by the 1999 Chi-Chi earthquake in Taiwan (Mw 7.5) have recently been retrieved using archive images. The data obtained can be downloaded (see further down).

Sub-pixel correlation of optical images

Following is the flow chart of the technique that has been developed. It allows for precise orthorectification and coregistration of the SPOT images. More details about the optimization process are given in the next sections.

Understanding the disparities measured from optical images

Differences in geometry between the two images to be registered:
- Uncertainties on attitude parameters (roll, pitch, yaw)
- Inaccuracy of orbital parameters (position, velocity)
- Incidence angle differences + topography uncertainties (parallax effect)
- Optical and electronic biases (optical aberrations, CCD misalignment, focal length, sampling period, etc.)
» May account for disparities of up to 800 m on SPOT 1-4 images; 50 m for SPOT 5 (see [3]).

Ground deformations:
- Earthquakes, landslides, etc.
» Typically sub-pixel scale: ranging from 0 to 10 meters.

Temporal decorrelation:
- Changes in vegetation, rivers, changes in urban areas, etc.
» Correlation is lost: adds noise to the measurement – up to 1 m.

» Ground deformations are largely dominated by the geometrical artifacts.

Precise registration: geometrical corrections

SPOT satellites are pushbroom imaging systems ([1], [2]): all optical parts remain fixed during acquisition and the scanning is accomplished by the forward motion of the spacecraft. Each line in the image is therefore acquired at a different time and subjected to the different variations of the platform. The orthorectification process consists in modeling and correcting these variations to produce images free of cartographic distortion. It is then possible to accurately register images and look for their disparities using correlation techniques. Attitude variations (roll, pitch, and yaw) during the scanning process have to be integrated in the image model (see [1], [2]). Errors in correcting the satellite look directions will result in projecting the image pixels at the wrong location on the ground: important parallax artifacts will be seen when measuring displacement between two images. Exact pixel projection on the ground is achieved through an optimization algorithm that iteratively corrects the look directions by selecting ground control points. An accurate topography model has to be used.

What parameters to optimize?
- Initial attitude values of the platform (roll, pitch, yaw)
- Constant drift of the attitude values along the image acquisition
- Focal length (different value depending on the instrument, HRG1 – HRG2)
- Position and velocity

How to optimize: an iterative algorithm using a set of GCPs (Ground Control Points). GCPs are generated automatically with sub-pixel accuracy: they result from a correlation between an orthorectified reference frame and the rectified image whose parameters are to be optimized.

A two-stage procedure:
- One of the images is optimized with respect to the shaded DEM (GCPs are generated from the correlation with the shaded DEM). The DEM is then considered as the ground truth. No GPS points are needed.
- The other image is then optimized using another set of GCPs resulting from the correlation with the first image (co-registration).

Measuring co-seismic deformation with InSAR, a comparison

A fringe represents a near-vertical displacement of 2.8 cm. SAR interferogram (ERS): near-vertical component of the ground displacement induced by the 1992 Landers earthquake [Massonnet et al., 1993]. There are no organized fringes in a band within 5-10 km of the fault trace: the displacement is sufficiently large that the change in range across a radar pixel exceeds one fringe per pixel, and coherence is lost. http://earth.esa.int/applications/data_util/ndis/equake/land2.htm

» SAR interferometry is not a suitable technique for measuring near-fault displacements.

The 1992 Landers earthquake revisited: profiles in offsets and elastic modeling show good agreement (from [6]). For other applications of the technique, see [4], [5].

» Fault ruptures can be imaged with this technique.

Applying the precise rectification algorithm + sub-pixel correlation: the 1999 Hector Mine earthquake (Mw 7.1, California)

Obtaining the data (available in ENVI file format; load bands as grayscale images; the bands are: N/S offsets, E/W offsets, SNR). Raw and filtered results: HectorMine.zip

Pre-earthquake image: SPOT 4, acquisition date 08-17-1998, ground resolution 10 m
Post-earthquake image: SPOT 2, acquisition date 08-18-2000, ground resolution 10 m

Offsets measured from correlation correspond to sub-pixel offsets in the raw images. Correlation windows: 32 x 32 pixels, with 96 m between two measurements.

So far we have:
- A precise mapping of the rupture zone: the offset fields have a resolution of 96 m,
- Measurements with sub-pixel accuracy (displacements of at most 10 meters),
- Improved the global georeferencing of the images with no GPS measurements,
- Improved the processing time, since the GCP selection is automatic,
- Suppressed the main attitude artifacts.

The profiles do not show any long-wavelength deformations (see Dominguez et al., 2003).

We notice:
- Linear artifacts in the along-track direction due to CCD misalignments (schematic of a DIVOLI showing four CCD linear arrays),
- Some topographic artifacts: the image resolution is higher than that of the DEM,
- Several decorrelations due to rivers and clouds,
- High-frequency noise due to the noise sensitivity of the Fourier correlator (see Van Puymbroeck et al.).

Conclusion

The sub-pixel correlation technique has been improved to overcome most of its limitations:
» Precise rectification and co-registration of the images,
» No more topographic effects (depending on the DEM resolution),
» No need for GPS points – an independent and automatic algorithm,
» Better spatial resolution (see Van Puymbroeck et al.).

To be improved:
» Stripes due to the CCDs' misalignment,
» High-frequency noise from the correlator,
» Processing of images with corrupted telemetry.

» The sub-pixel correlation technique appears to be a valuable complement to SAR interferometry for ground deformation measurements.

References:
[1] SPOT 5 geometry handbook: ftp://ftp.spot.com/outgoing/SPOT_docs/geometry_handbook/S-NT-73-12-SI.pdf
[2] SPOT User's Handbook Volume 1 - Reference Manual: ftp://ftp.spot.com/outgoing/SPOT_docs/SPOT_User's Handbook/SUHV1RM.PDF
[3] SPOT 5 Technical Summary: ftp://ftp.spot.com/outgoing/SPOT_docs/technical/spot5_tech_slides.ppt
[4] Dominguez, S., J.P. Avouac, R. Michel, Horizontal co-seismic deformation of the 1999 Chi-Chi earthquake measured from SPOT satellite images: implications for the seismic cycle along the western foothills of Central Taiwan, J. Geophys. Res., 107, 10.1029/2001JB00482, 2003.
[5] Michel, R. and J.P. Avouac, Deformation due to the 17 August Izmit earthquake measured from SPOT images, J. Geophys. Res., 107, 10.1029/2000JB000102, 2002.
[6] Van Puymbroeck, N., Michel, R., Binet, R., Avouac, J.P. and Taboury, J., Measuring earthquakes from optical satellite images, Applied Optics, 39, 23, 3486-3494, 2000.

Publications:
Leprince, S., Barbot, S., Ayoub, F., Avouac, J.P., Automatic, Precise, Ortho-rectification and Co-registration for Satellite Image Correlation, Application to Seismotectonics. To be submitted.

Conferences:
F. Levy, Y. Hsu, M. Simons, S. Leprince, J. Avouac. Distribution of coseismic slip for the 1999 Chi-Chi Taiwan earthquake: New data and implications of varying 3D fault geometry. AGU 2005 Fall Meeting, San Francisco.
M. Taylor, S. Leprince, J. Avouac. A Study of the 2002 Denali Co-seismic Displacement Using SPOT Horizontal Offsets, Field Measurements, and Aerial Photographs. AGU 2005 Fall Meeting, San Francisco.
Y. Kuo, F. Ayoub, J. Avouac, S. Leprince, Y. Chen, J. H. Shyu, Y. Kuo. Co-seismic Horizontal Ground Slips of the 1999 Chi-Chi Earthquake (Mw 7.6) Deduced From Image Comparison of Satellite SPOT and Aerial Photos. AGU 2005 Fall Meeting, San Francisco.

source: http://www.tectonics.caltech.edu/geq/spot_coseis/
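For readers who want to experiment, the principle behind the Fourier correlator described above is closely related to phase cross-correlation, which is available off the shelf in scikit-image. The following sketch (mine, not the authors' implementation) measures a known synthetic sub-pixel shift between two patches:

# Sketch: measuring a sub-pixel offset between two co-registered image
# patches with phase cross-correlation. Illustrates the principle only.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
reference = rng.random((128, 128))
# Simulate a "post-event" patch displaced by 0.37 px in rows, 0.62 px in cols.
displaced = nd_shift(reference, shift=(0.37, 0.62))

# upsample_factor=100 resolves the offset to 1/100th of a pixel.
offset, error, _ = phase_cross_correlation(reference, displaced,
                                           upsample_factor=100)
print(f"Measured (row, col) offset: {offset}")  # ~[-0.37, -0.62]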
  41. 1 point
Please elaborate: do you plan to use remote sensing data to make an animation? Please add the details. A simple example of the functions can be seen here: http://animove.org/wp-content/uploads/2019/04/Daniel_Palacios_animate_moveVis.html
  42. 1 point
Got shapefile (SHP) data for the multi-hazard map of all of Indonesia. Please check it: https://drive.google.com/file/d/1anG5xcA9uMo1P9jLeppBvEXpaExJsLhk/view Untested; I forgot where I got this link from.
  43. 1 point
Looking forward to playing Mario and Legend of Zelda on those devices 😁
  44. 1 point
This is the real challenge for Huawei: how to convince their users to use this new operating system. But nothing is impossible...
  45. 1 point
Hi darksabersan, the links are dead. Could you reactivate them and perhaps upload to a different site like Mega.nz, please? Thanks
  46. 1 point
TopoMapCreator (beta)

A set of GIS tools that helps with creating topographic maps.

The TopoMapCreator suite consists of 5 programs: MapCreator, GeoToolsCmd, TopoMap, EcwToMobile and ExtendedMapCreator. More information, for example about how to install it, can be found under TopoMapCreator. Now read what the 5 programs do:

1. ExtendedMapCreator
ExtendedMapCreator is a desktop program that creates "topographic maps" from OSM, NASA and ESA data. You simply define a map extent by dragging over a browsable world map, click on start and wait until the GeoTIFF, ECW, GALILEO, ORUXMAPS or NAVIMAP files are created. ExtendedMapCreator is based on the Mapnik renderer; nevertheless, all data downloading and processing is fully automatic. Click on ExtendedMapCreator to read more about the program!

2. MapCreator
MapCreator is a GIS toolset. The tools share the common goal of creating topographic maps. Currently it consists of 10 tools:
- The GeoreferencingTool georeferences scanned map series.
- The EcwHillshaderTool adds hillshades to a map.
- The SrtmHillshadesTool creates hillshades.
- The EcwToMobileTool converts a map to a smartphone app format.
- The GeonamesToShapeTool creates a shapefile from a GeoNames file.
- The ShapeToOsmTool creates an OSM file from shapefiles.
- The WarpEcwTool warps (reprojects) huge maps.
- The RussianMapsCreatorTool downloads and processes Russian maps.
- The QgisToEcwTool makes a print-screen of a QGIS view.
- The USGSTopoMapTool downloads and processes USGS maps.
Click on any of the tools to learn more about it!

3. GeoToolsCmd
GeoToolsCmd provides the same GIS toolset as MapCreator, but accessible via the command prompt. With GeoToolsCmd it is possible to write batch files.

4. TopoMap
TopoMap is a simple desktop program to download specific maps.

5. EcwToMobile
EcwToMobile is a simple desktop program to convert an ECW file to a mobile app format. The program duplicates the EcwToMobileTool.

darksabersan.
  47. 1 point
    The Klencke Atlas is one of the world's biggest: it measures 176 x 231 cm when open. It takes its name from Joannes Klencke, who presented it to Charles II on his restoration to the British thrones in 1660. Its size and its 40 or so large wall maps from the Golden Age of Dutch mapmaking were supposed to suggest that it contained all the knowledge in the world. At another level, it was a bribe intended to spur the King into granting Klencke and his associates trading privileges and titles. Charles, who was a map enthusiast, appreciated the gift. He placed the atlas with his most precious possessions in his cabinet of curiosities, and Klencke was knighted. Later generations have benefited too. The binding has protected the wall maps which have survived for us to enjoy - unlike the vast majority of other wall maps which, exposed to light, heat and dirt when hung on walls, have crumbled away. visit : https://www.bl.uk/collection-items/klencke-atlas
  48. 1 point
It might not be a very good idea to use ArcGIS, but somehow we can use this software for tree counting! My method is quite simple:

1. Get a piece of hi-res imagery from Google Earth.
2. Open the image in ArcGIS.
3. Make sure the "Image Classification" toolbar is checked (turned on).
4. From the "Image Classification" toolbar, select "Classification" --> "Iso Cluster Unsupervised Classification".
5. Choose "2" for "Number of Classes" in the dialog box. By doing this, you will have only trees (or something else similar) and non-tree objects in the result. Run it and see the result (below). C'mon, it even shows the "Google Earth" trademark in the result (LOL).
6. Now go to the ArcGIS ToolBox, select "Conversion" --> "From Raster" --> "Raster to Polygon". In the dialog box, check the "Simplify Polygon (optional)" checkbox.
7. The result is as below.
8. Mask the area where you want to count the trees, then count the number of polygons within that mask. Bingo!!!!

If you want to do it better, you can do some pre-processing steps on your image. Also, when you have the polygon layer, you can try to simplify the layer again (using the Eliminate, Integrate... functions) before counting trees. Enjoy ESRI.
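For comparison (my own sketch, not part of the original post), roughly the same workflow can be scripted in Python: a 2-class unsupervised clustering stands in for Iso Cluster, and counting connected components stands in for the polygon count. 'trees.png' is a placeholder, and as in the ArcGIS version you must verify visually which class actually corresponds to trees:

# Rough Python equivalent: 2-class clustering of an RGB image, then counting
# connected "tree" blobs.
import numpy as np
from imageio.v3 import imread
from scipy import ndimage
from sklearn.cluster import KMeans

img = imread("trees.png")[:, :, :3].astype(float)  # RGB image; drop alpha
h, w, _ = img.shape
flat = img.reshape(-1, 3)

# Steps 4-5: unsupervised classification into 2 classes (like Iso Cluster).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(flat)

# Assume the darker cluster is vegetation -- verify visually!
means = [flat[labels == k].mean() for k in (0, 1)]
mask = (labels == int(np.argmin(means))).reshape(h, w)

# Steps 6-8: count connected components instead of converting to polygons.
_, n_trees = ndimage.label(mask)
print(f"Approximate tree count: {n_trees}")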
  49. 1 point
Look at the MaxMax website; they do NIR conversions on cameras that can fit on drones. Did a Canon compact with them, 100%. Try ImageJ (free software): run the NDVI settings and you will get some data. Or use a blue filter on your camera and you will get usable images for NDVI. I got good data with desert palms from RGB images in eCognition, but you have to play with the analysis a bit! Having a NIR layer will make life easy for you. Good luck.
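Once you have red and NIR bands from such a converted camera, the NDVI computation itself is just NDVI = (NIR - Red) / (NIR + Red). A minimal sketch (the band filenames are placeholders; a real workflow would first calibrate and co-register the bands):

# Minimal NDVI sketch from two single-band images of the same scene.
import numpy as np
from imageio.v3 import imread

red = imread("red_band.tif").astype(float)
nir = imread("nir_band.tif").astype(float)

ndvi = (nir - red) / np.maximum(nir + red, 1e-6)  # avoid division by zero
print(f"NDVI range: {ndvi.min():.2f} to {ndvi.max():.2f}")
# Healthy vegetation typically shows NDVI well above 0.3.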
  50. 1 point
If you have a very high resolution image (< 1 m) that has only 3 spectral bands in the visible spectrum (Red, Green, Blue - RGB) and no additional near-infrared (NIR) band, the best option for extracting trees is an OBIA method (eCognition is one of the best OBIA software packages). So, for your task, I would use eCognition.

