Leaderboard


Popular Content

Showing content with the highest reputation since 02/19/2019 in all areas

  1. 4 points
    Here is an interesting review: http://www.50northspatial.org/uav-image-processing-software-photogrammetry/ 😉😊
  2. 3 points
    Interesting video on how-tos: WebODM (Web OpenDroneMap) is a friendly graphical user interface (GUI) for OpenDroneMap. It enhances the capabilities of OpenDroneMap by providing an easy tool for processing drone imagery, with buttons, processing status bars, and a new way to store images. WebODM lets you work by project, so the user can create different projects and process the related images. As a whole, WebODM on Windows is an implementation of PostgreSQL, Node, Django, OpenDroneMap, and Docker. The software installation requires 6 GB of disk space plus Docker. That may seem huge, but it is the only way to process drone imagery on Windows using only open source software. We definitely see huge potential in WebODM for image processing, so we have put together this tutorial for the installation and will post more tutorials on applying WebODM to drone images. For this tutorial you need Docker Toolbox installed on your computer. You can follow this tutorial to get Docker on your PC: https://www.hatarilabs.com/ih-en/tutorial-installing-docker You can visit the WebODM site on GitHub: https://github.com/OpenDroneMap/WebODM

    Videos

    The tutorial was split into three short videos.

    Part 1: https://www.youtube.com/watch?v=AsMSoWAToxE
    Part 2: https://www.youtube.com/watch?v=8GKx3fz0qgE
    Part 3: https://www.youtube.com/watch?v=eCZFzaXyMmA
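    Since the whole stack runs inside Docker, it can save a few headaches to verify the prerequisites before starting. Here is a minimal Python sketch (my own illustration, not part of the tutorial) that checks whether the standard docker and docker-compose CLI tools are available:

import shutil
import subprocess

# Check that the Docker CLI tools are on the PATH before attempting a WebODM install.
for tool in ("docker", "docker-compose"):
    if shutil.which(tool) is None:
        print(tool + " not found - install Docker Toolbox first")
    else:
        # Ask the tool for its version to confirm it actually runs.
        result = subprocess.run([tool, "--version"], capture_output=True, text=True)
        print(result.stdout.strip())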
  3. 3 points
    7th International Conference on Computer Science and Information Technology (CoSIT 2020), January 25-26, 2020, Zurich, Switzerland. https://cosit2020.org/

    Scope & Topics

    The 7th International Conference on Computer Science and Information Technology (CoSIT 2020) will provide an excellent international forum for sharing knowledge and results on the theory, methodology and applications of Computer Science, Engineering and Information Technology. The conference looks for significant contributions to all major fields of Computer Science and Information Technology, in both theoretical and practical aspects. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field. Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, survey works and industrial experiences, including:

    · Geographical Information Systems / Global Navigation Satellite Systems (GIS/GNSS)

    Paper Submission

    Authors are invited to submit papers through the conference submission system. Here's where you can reach us: [email protected] or [email protected]
  4. 3 points
    The first thing to do before mapping is to set up the camera parameters. Before setting them, it is recommended to reset all camera parameters first. To set the parameters manually, the camera needs to be in manual mode.

    Image quality: Extra fine.

    Shutter speed: to remove blur from photos, the shutter speed should be set to a higher value; 1200-1600 is recommended. A higher shutter speed reduces image quality, so if there is blur in the image, increase the shutter speed.

    ISO: the lower the ISO, the higher the image quality. An ISO between 160-300 is recommended. If there is no blur but image quality is low, reduce the ISO.

    Focus: it is recommended to set the focus manually on the ground before a flight. Point the camera at a distant object and slowly increase the focus; you will see on the camera screen that image sharpness changes with the value. Set the focus where sharpness is highest (slide the slider close to the infinity point on the screen and watch how the sharpness changes as you slide).

    White balance: recommended to set to auto.

    On a surveying mission, sidelap, overlap, and buffer have to be set higher to get a better quality surveying result. First set the RESOLUTION you would like to get for your surveying project. Changing the resolution changes the flight altitude and also affects the coverage of a single flight.

    Overlap: 70%. This increases the number of photos taken along each flight line, so the camera should be capable of capturing fast enough.

    Sidelap: 70% recommended. Flying with higher sidelap between flight lines is a way to get more matches in the imagery, but it also reduces the coverage of a single flight.

    Buffer: 12%. The buffer extends the flight plan to get more images at the borders, which improves the quality of the map.

    source: https://dronee.aero/blogs/dronee-pilot-blog/few-things-to-set-correctly-to-get-high-quality-surveying-results
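    To make the resolution/altitude/coverage trade-off concrete, here is a short Python sketch of the standard photogrammetry relations. The sensor numbers below are illustrative placeholders (roughly a 1-inch, 20 MP camera), not values from the original post:

# Ground sample distance (GSD) and photo spacing for a nadir survey flight.
sensor_width_mm = 13.2     # assumed sensor width
image_width_px = 5472      # assumed image width
image_height_px = 3648     # assumed image height
focal_length_mm = 8.8      # assumed focal length
gsd_cm_per_px = 2.5        # target map resolution
overlap = 0.70             # forward overlap from the post
sidelap = 0.70             # sidelap from the post

gsd_m = gsd_cm_per_px / 100.0
# Altitude required for the target resolution:
# GSD = sensor_width * altitude / (focal_length * image_width)
altitude_m = gsd_m * focal_length_mm * image_width_px / sensor_width_mm
# Ground footprint of a single photo
footprint_w_m = gsd_m * image_width_px
footprint_h_m = gsd_m * image_height_px
# Trigger and flight-line spacing implied by the requested overlap/sidelap
shot_spacing_m = footprint_h_m * (1 - overlap)
line_spacing_m = footprint_w_m * (1 - sidelap)

print("fly at %.0f m, trigger every %.1f m, lines %.1f m apart"
      % (altitude_m, shot_spacing_m, line_spacing_m))

    This is why raising the target resolution (a smaller GSD) lowers the flight altitude and shrinks the area covered by a single flight, as the post notes.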
  5. 3 points
    The GeoforGood Summit 2019 drew to a close on 19 Sep 2019, and as a first-time attendee I was amazed to see the number of new developments announced at the summit. The summit — being a first of its kind — combined the user summit and the developers summit into one to let users benefit from the knowledge of new tools and developers understand the needs of the user. Since my primary focus was on large scale geospatial modeling, I attended the workshops and breakout sessions related to Google Earth Engine only. With that, let's look at 3 new exciting developments to hit Earth Engine.

    Updated documentation on machine learning

    Documentation, really? Yes! As an amateur Earth Engine user myself, my number one complaint about the tool has been the abysmal quality of its documentation, spread between its app developers site, Google Earth's blog, and their Stack Exchange answers. So any updates to the documentation are welcome. I am glad that the documentation has been updated to help the ever-exploding user base of geospatial data scientists interested in implementing machine learning and deep learning models. The documentation comes with its own example Colab notebooks. The example notebooks include supervised classification, unsupervised classification, dense neural networks, convolutional neural networks, and deep learning on Google Cloud. I found these notebooks incredibly useful for getting started, as there are quite a few non-trivial data type conversions (int to float32 and so on) in the process flow.

    Earth Engine and AI Platform integration

    Nick Clinton and Chris Brown jointly announced the much overdue Earth Engine + Google AI Platform integration. Until now, users were essentially limited to running small jobs on Google Colab's virtual machine (VM) and hoping that the connection with the VM didn't time out (a session usually lasts about 4 hours). Other limitations included the lack of any task monitoring or queuing capabilities. Not anymore! The new ee.Model() package lets users communicate with a Google Cloud server that they can spin up based on their own needs. Needless to say, this is a HUGE improvement over the previous primitive deep learning support provided on the VM. Although it was free, one could simply not train, validate, predict, and deploy any model larger than a few layers; it had to be done separately on the Google AI Platform once the .TFRecord objects were created in their Google bucket. With this cloud integration, that task has been simplified tremendously by letting users run and test their models right from the Colab environment. The ee.Model() class comes with some useful functions such as ee.Model.fromAIPlatformPredictor() to make predictions on Earth Engine data directly from your model sitting on Google Cloud. Lastly, since your model now sits in the AI Platform, you can cheat and use your own models trained offline to predict on Earth Engine data and make maps of their output. Note that your model must be saved in the tf.contrib.saved_model format if you wish to do so; the popular Keras model.save('model.h5') call is not compatible with ee.Model(). Moving forward, it seems like the team plans to stick to the Colab Python IDE for all deep learning applications. However, it's not a death blow for the beloved JavaScript code editor. At the summit, I saw that participants still preferred the JavaScript code editor for their non-neural machine learning work (support vector machines, random forests, etc.).
    Being a Python lover myself, I still go to the code editor for quick visualizations and for Earth Engine Apps! I did not get to try out the new ee.Model() package at the summit, but Nick Clinton demonstrated a notebook where a simple working example has been hosted to help us learn the function calls. Some kinks still remain in the development — like limiting a convolution kernel to only 144 pixels wide during prediction because of "the way earth engine communicates with cloud platform" — but he assured us that they will be fixed soon. Overall, I am excited about the integration because Earth Engine is now a real alternative for my geospatial computing work. And with the Earth Engine team promising more new functions in the ee.Model() class, I wonder if companies and labs around the world will start migrating their modeling work to Earth Engine.

    Cooler visualizations!

    Matt Hancher and Tyler Erickson displayed some new functionality related to visualizations, and I found that it made it vastly simpler to create animated visuals. With the ee.ImageCollection.getVideoThumbURL() function, you can create your own animated gifs within a few seconds! I tried it on a bunch of datasets and the speed of creating the gifs was truly impressive. Say bye to exporting each iteration of a video to your drive, because these gifs appear right in the console using the print() command! Shown above is an example of the global temperature forecast over time from the 'NOAA/GFS0P25' dataset. The code for making the gif can be found here; a rough sketch is also included at the end of this post. The animation is based on the example shown in the original blog post by Michael DeWitt, and I referred to this gif-making tutorial on the developers page to make it. I did not get to cover all the new features and functionality introduced at the summit. For that, be on the lookout for event highlights on Google Earth's blog. Meanwhile, you can check out the session resources from the summit for presentations and notebooks on the topics that you are interested in.

    Presentation and resources. Published in Medium.
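    As a rough illustration of that gif workflow, here is a minimal sketch using the Earth Engine Python API and the same public 'NOAA/GFS0P25' dataset; the date range, band and visualization parameters are invented for the example:

import ee

ee.Initialize()

# One day of temperature forecasts from the dataset mentioned above.
collection = (ee.ImageCollection('NOAA/GFS0P25')
              .filterDate('2019-09-01', '2019-09-02')
              .select('temperature_2m_above_ground'))

def visualize(img):
    # Render each forecast image as an 8-bit RGB frame.
    return img.visualize(min=-40, max=35,
                         palette=['blue', 'white', 'orange', 'red'])

video_args = {
    'dimensions': 400,   # output width in pixels
    'region': ee.Geometry.Rectangle([-180, -60, 180, 85]),
    'framesPerSecond': 7,
}

# Returns a URL to a server-side rendered animated gif.
print(collection.map(visualize).getVideoThumbURL(video_args))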
  6. 3 points
    Found this interesting tutorial: For the last couple years I have been testing out the ever-improving support for parallel query processing in PostgreSQL, particularly in conjunction with the PostGIS spatial extension. Spatial queries tend to be CPU-bound, so applying parallel processing is frequently a big win for us. Initially, the results were pretty bad. With PostgreSQL 10, it was possible to force some parallel queries by jimmying with global cost parameters, but nothing would execute in parallel out of the box. With PostgreSQL 11, we got support for parallel aggregates, and those tended to parallelize in PostGIS right out of the box. However, parallel scans still required some manual alterations to PostGIS function costs, and parallel joins were basically impossible to force no matter what knobs you turned. With PostgreSQL 12 and PostGIS 3, all that has changed. All standard query types now readily parallelize using our default costings. That means parallel execution of:

    Parallel sequence scans,
    Parallel aggregates, and
    Parallel joins!!

    TL;DR: PostgreSQL 12 and PostGIS 3 have finally cracked the parallel spatial query execution problem, and all major queries execute in parallel without extraordinary interventions.

    What Changed

    With PostgreSQL 11, most parallelization worked, but only at much higher function costs than we could apply to PostGIS functions. With higher PostGIS function costs, other parts of PostGIS stopped working, so we were stuck in a Catch-22: improve costing and break common queries, or leave things working with non-parallel behaviour. For PostgreSQL 12, the core team (in particular Tom Lane) provided us with a sophisticated new way to add spatial index functionality to our key functions. With that improvement in place, we were able to globally increase our function costs without breaking existing queries. That in turn has signalled the parallel query planning algorithms in PostgreSQL to parallelize spatial queries more aggressively.

    Setup

    In order to run these tests yourself, you will need:

    PostgreSQL 12
    PostGIS 3.0

    You'll also need a multi-core computer to see actual performance changes. I used a 4-core desktop for my tests, so I could expect 4x improvements at best. The setup instructions show where to download the Canadian polling division data used for the testing:

    pd - a table of ~70K polygons
    pts - a table of ~70K points
    pts_10 - a table of ~700K points
    pts_100 - a table of ~7M points

    We will work with the default configuration parameters and just mess with max_parallel_workers_per_gather at run-time to turn parallelism on and off for comparison purposes. When max_parallel_workers_per_gather is set to 0, parallel plans are not an option. max_parallel_workers_per_gather sets the maximum number of workers that can be started by a single Gather or Gather Merge node. Setting this value to 0 disables parallel query execution. Default 2. Before running tests, make sure you have a handle on what your parameters are set to: I frequently found I accidentally tested with max_parallel_workers set to 1, which will result in two processes working: the leader process (which does real work when it is not coordinating) and one worker.

show max_worker_processes;
show max_parallel_workers;
show max_parallel_workers_per_gather;

    Aggregates

    Behaviour for aggregate queries is still good, as seen in PostgreSQL 11 last year.

SET max_parallel_workers = 8;
SET max_parallel_workers_per_gather = 4;
EXPLAIN ANALYZE SELECT Sum(ST_Area(geom)) FROM pd;

    Boom!
    We get a 3-worker parallel plan and execution about 3x faster than the sequential plan.

    Scans

    The simplest spatial parallel scan adds a spatial function to the target list or filter clause.

SET max_parallel_workers = 8;
SET max_parallel_workers_per_gather = 4;
EXPLAIN ANALYZE SELECT ST_Area(geom) FROM pd;

    Boom! We get a 3-worker parallel plan and execution about 3x faster than the sequential plan. This query did not work out-of-the-box with PostgreSQL 11.

Gather (cost=1000.00..27361.20 rows=69534 width=8)
  Workers Planned: 3
  ->  Parallel Seq Scan on pd (cost=0.00..19407.80 rows=22430 width=8)

    Joins

    Starting with a simple join of all the polygons to the 100 points-per-polygon table, we get:

SET max_parallel_workers_per_gather = 4;
EXPLAIN SELECT * FROM pd JOIN pts_100 pts ON ST_Intersects(pd.geom, pts.geom);

    Right out of the box, we get a parallel plan! No amount of begging and pleading would get a parallel plan in PostgreSQL 11.

Gather (cost=1000.28..837378459.28 rows=5322553884 width=2579)
  Workers Planned: 4
  ->  Nested Loop (cost=0.28..305122070.88 rows=1330638471 width=2579)
        ->  Parallel Seq Scan on pts_100 pts (cost=0.00..75328.50 rows=1738350 width=40)
        ->  Index Scan using pd_geom_idx on pd (cost=0.28..175.41 rows=7 width=2539)
              Index Cond: (geom && pts.geom)
              Filter: st_intersects(geom, pts.geom)

    The only quirk in this plan is that the nested loop join is being driven by the pts_100 table, which has 10 times the number of records as the pd table. The plan for a query against the pts_10 table also returns a parallel plan, but with pd as the driving table.

EXPLAIN SELECT * FROM pd JOIN pts_10 pts ON ST_Intersects(pd.geom, pts.geom);

    Right out of the box, we still get a parallel plan! No amount of begging and pleading would get a parallel plan in PostgreSQL 11.

Gather (cost=1000.28..85251180.90 rows=459202963 width=2579)
  Workers Planned: 3
  ->  Nested Loop (cost=0.29..39329884.60 rows=148129988 width=2579)
        ->  Parallel Seq Scan on pd (cost=0.00..13800.30 rows=22430 width=2539)
        ->  Index Scan using pts_10_gix on pts_10 pts (cost=0.29..1752.13 rows=70 width=40)
              Index Cond: (geom && pd.geom)
              Filter: st_intersects(pd.geom, geom)

    source: http://blog.cleverelephant.ca/2019/05/parallel-postgis-4.html
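    If you would rather script the before/after comparison than type it into psql, a small sketch along these lines runs the same aggregate with parallelism off and on (using psycopg2; the connection string is a placeholder):

import time
import psycopg2

conn = psycopg2.connect("dbname=parallel_test user=postgres")  # placeholder DSN
cur = conn.cursor()

QUERY = "SELECT Sum(ST_Area(geom)) FROM pd"

# Time the same aggregate with parallelism disabled (0) and enabled (4 workers).
for workers in (0, 4):
    cur.execute("SET max_parallel_workers_per_gather = %d" % workers)
    start = time.time()
    cur.execute(QUERY)
    cur.fetchone()
    print("max_parallel_workers_per_gather=%d: %.2fs" % (workers, time.time() - start))

cur.close()
conn.close()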
  7. 3 points
    Hello everyone! This is a quick Python script which I wrote to batch download and preprocess Sentinel-1 images of a given time period. Sentinel images have very good resolution, which makes it obvious that they are huge in size. Since I didn't want to waste all day preparing them for my research, I decided to write this code, which runs all night and gives a nice image set the following morning.

import os
import datetime
import gc
import glob

import snappy
from sentinelsat import SentinelAPI, geojson_to_wkt, read_geojson
from snappy import ProductIO


class sentinel1_download_preprocess():
    def __init__(self, input_dir, date_1, date_2, query_style, footprint, lat=24.84, lon=90.43, download=False):
        self.input_dir = input_dir
        self.date_start = datetime.datetime.strptime(date_1, "%d%b%Y")
        self.date_end = datetime.datetime.strptime(date_2, "%d%b%Y")
        self.query_style = query_style
        self.footprint = geojson_to_wkt(read_geojson(footprint))
        self.lat = lat
        self.lon = lon
        self.download = download

        # configurations
        self.api = SentinelAPI('scihub_username', 'scihub_passwd', 'https://scihub.copernicus.eu/dhus')
        self.producttype = 'GRD'  # SLC, GRD, OCN
        self.orbitdirection = 'ASCENDING'  # ASCENDING, DESCENDING
        self.sensoroperationalmode = 'IW'  # SM, IW, EW, WV

    def sentinel1_download(self):
        global download_candidate
        if self.query_style == 'coordinate':
            download_candidate = self.api.query('POINT({0} {1})'.format(self.lon, self.lat),
                                                date=(self.date_start, self.date_end),
                                                producttype=self.producttype,
                                                orbitdirection=self.orbitdirection,
                                                sensoroperationalmode=self.sensoroperationalmode)
        elif self.query_style == 'footprint':
            download_candidate = self.api.query(self.footprint,
                                                date=(self.date_start, self.date_end),
                                                producttype=self.producttype,
                                                orbitdirection=self.orbitdirection,
                                                sensoroperationalmode=self.sensoroperationalmode)
        else:
            print("Define query attribute")

        title_found_sum = 0
        for key, value in download_candidate.items():
            for k, v in value.items():
                if k == 'title':
                    title_info = v
                    title_found_sum += 1
                elif k == 'size':
                    print("title: " + title_info + " | " + v)
        print("Total found " + str(title_found_sum) + " title of " + str(self.api.get_products_size(download_candidate)) + " GB")

        os.chdir(self.input_dir)
        if self.download:
            if glob.glob(input_dir + "*.zip") not in [value for value in download_candidate.items()]:
                self.api.download_all(download_candidate)
            print("Nothing to download")
        else:
            print("Escaping download")
        # proceed processing after download is complete
        self.sentinel1_preprocess()

    def sentinel1_preprocess(self):
        # Get snappy Operators
        snappy.GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()
        # HashMap Key-Value pairs
        HashMap = snappy.jpy.get_type('java.util.HashMap')

        for folder in glob.glob(self.input_dir + "\*"):
            gc.enable()
            if folder.endswith(".zip"):
                timestamp = folder.split("_")[5]
                sentinel_image = ProductIO.readProduct(folder)
                if self.date_start <= datetime.datetime.strptime(timestamp[:8], "%Y%m%d") <= self.date_end:
                    # add orbit file
                    self.sentinel1_preprocess_orbit_file(timestamp, sentinel_image, HashMap)
                    # remove border noise
                    self.sentinel1_preprocess_border_noise(timestamp, HashMap)
                    # remove thermal noise
                    self.sentinel1_preprocess_thermal_noise_removal(timestamp, HashMap)
                    # calibrate image to output to Sigma and dB
                    self.sentinel1_preprocess_calibration(timestamp, HashMap)
                    # TOPSAR Deburst for SLC images
                    if self.producttype == 'SLC':
                        self.sentinel1_preprocess_topsar_deburst_SLC(timestamp, HashMap)
                    # multilook
                    self.sentinel1_preprocess_multilook(timestamp, HashMap)
                    # subset using a WKT of the study area
                    self.sentinel1_preprocess_subset(timestamp, HashMap)
                    # finally terrain correction, can use local data but went for the default
                    self.sentinel1_preprocess_terrain_correction(timestamp, HashMap)
                    # break  # try this if you want to check the result one by one

    def sentinel1_preprocess_orbit_file(self, timestamp, sentinel_image, HashMap):
        start_time_processing = datetime.datetime.now()
        orb = self.input_dir + "\\orb_" + timestamp
        if not os.path.isfile(orb + ".dim"):
            parameters = HashMap()
            orbit_param = snappy.GPF.createProduct("Apply-Orbit-File", parameters, sentinel_image)
            ProductIO.writeProduct(orbit_param, orb, 'BEAM-DIMAP')  # BEAM-DIMAP, GeoTIFF-BigTiff
            print("orbit file added: " + orb + " | took: " + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + orb)

    def sentinel1_preprocess_border_noise(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        border = self.input_dir + "\\bordr_" + timestamp
        if not os.path.isfile(border + ".dim"):
            parameters = HashMap()
            border_param = snappy.GPF.createProduct("Remove-GRD-Border-Noise", parameters,
                                                    ProductIO.readProduct(self.input_dir + "\\orb_" + timestamp + ".dim"))
            ProductIO.writeProduct(border_param, border, 'BEAM-DIMAP')
            print("border noise removed: " + border + " | took: " + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + border)

    def sentinel1_preprocess_thermal_noise_removal(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        thrm = self.input_dir + "\\thrm_" + timestamp
        if not os.path.isfile(thrm + ".dim"):
            parameters = HashMap()
            thrm_param = snappy.GPF.createProduct("ThermalNoiseRemoval", parameters,
                                                  ProductIO.readProduct(self.input_dir + "\\bordr_" + timestamp + ".dim"))
            ProductIO.writeProduct(thrm_param, thrm, 'BEAM-DIMAP')
            print("thermal noise removed: " + thrm + " | took: " + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + thrm)

    def sentinel1_preprocess_calibration(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        calib = self.input_dir + "\\calib_" + timestamp
        if not os.path.isfile(calib + ".dim"):
            parameters = HashMap()
            parameters.put('outputSigmaBand', True)
            parameters.put('outputImageScaleInDb', False)
            calib_param = snappy.GPF.createProduct("Calibration", parameters,
                                                   ProductIO.readProduct(self.input_dir + "\\thrm_" + timestamp + ".dim"))
            ProductIO.writeProduct(calib_param, calib, 'BEAM-DIMAP')
            print("calibration complete: " + calib + " | took: " + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + calib)

    def sentinel1_preprocess_topsar_deburst_SLC(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        deburst = self.input_dir + "\\dburs_" + timestamp
        if not os.path.isfile(deburst):
            parameters = HashMap()
            parameters.put('outputSigmaBand', True)
            parameters.put('outputImageScaleInDb', False)
            deburst_param = snappy.GPF.createProduct("TOPSAR-Deburst", parameters,
                                                     ProductIO.readProduct(self.input_dir + "\\calib_" + timestamp + ".dim"))
            ProductIO.writeProduct(deburst_param, deburst, 'BEAM-DIMAP')
            print("deburst complete: " + deburst + " | took: " + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + deburst)

    def sentinel1_preprocess_multilook(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        multi = self.input_dir + "\\multi_" + timestamp
        if not os.path.isfile(multi + ".dim"):
            parameters = HashMap()
            parameters.put('outputSigmaBand', True)
            parameters.put('outputImageScaleInDb', False)
            multi_param = snappy.GPF.createProduct("Multilook", parameters,
                                                   ProductIO.readProduct(self.input_dir + "\\calib_" + timestamp + ".dim"))
            ProductIO.writeProduct(multi_param, multi, 'BEAM-DIMAP')
            print("multilook complete: " + multi + " | took: " + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + multi)

    def sentinel1_preprocess_subset(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        subset = self.input_dir + "\\subset_" + timestamp
        if not os.path.isfile(subset + ".dim"):
            WKTReader = snappy.jpy.get_type('com.vividsolutions.jts.io.WKTReader')
            # converting shapefile to GEOJSON and WKT is easy with any free online tool
            wkt = "POLYGON((92.330290184197 20.5906091141114,89.1246637610338 21.6316051481971," \
                  "89.0330319081811 21.7802436586492,88.0086282580443 24.6678836192818,88.0857830091018 " \
                  "25.9156771178278,88.1771488779853 26.1480664053835,88.3759125970998 26.5942658997298," \
                  "88.3876586919721 26.6120432770312,88.4105534167129 26.6345128356038,89.6787084683935 " \
                  "26.2383305017275,92.348481691233 25.073636976939,92.4252199249342 25.0296592837972," \
                  "92.487261172615 24.9472465376954,92.4967290851295 24.902213855393,92.6799861774377 " \
                  "21.2972058618174,92.6799346581579 21.2853347419811,92.330290184197 20.5906091141114))"
            geom = WKTReader().read(wkt)
            parameters = HashMap()
            parameters.put('geoRegion', geom)
            subset_param = snappy.GPF.createProduct("Subset", parameters,
                                                    ProductIO.readProduct(self.input_dir + "\\multi_" + timestamp + ".dim"))
            ProductIO.writeProduct(subset_param, subset, 'BEAM-DIMAP')
            print("subset complete: " + subset + " | took: " + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + subset)

    def sentinel1_preprocess_terrain_correction(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        terr = self.input_dir + "\\terr_" + timestamp
        if not os.path.isfile(terr + ".dim"):
            parameters = HashMap()
            # parameters.put('demResamplingMethod', 'NEAREST_NEIGHBOUR')
            # parameters.put('imgResamplingMethod', 'NEAREST_NEIGHBOUR')
            # parameters.put('pixelSpacingInMeter', 10.0)
            terr_param = snappy.GPF.createProduct("Terrain-Correction", parameters,
                                                  ProductIO.readProduct(self.input_dir + "\\subset_" + timestamp + ".dim"))
            ProductIO.writeProduct(terr_param, terr, 'BEAM-DIMAP')
            print("terrain corrected: " + terr + " | took: " + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + terr)


input_dir = "path_to_project_folder\Sentinel_1"
start_date = '01Mar2019'
end_date = '10Mar2019'
query_style = 'footprint'  # 'footprint' to use a GEOJSON, 'coordinate' to use a lat-lon
footprint = 'path_to_project_folder\bd_bbox.geojson'
lat = 26.23
lon = 88.56

sar = sentinel1_download_preprocess(input_dir, start_date, end_date, query_style, footprint, lat, lon, True)  # proceed to download by setting 'True', default is 'False'
sar.sentinel1_download()

    The geojson file is created from a very generalised shapefile of Bangladesh using ArcGIS Pro. There are a lot of free online tools to convert a shapefile to GeoJSON and WKT. Notice that the code will skip the download if the file is already there but will keep the processing on, so comment out line 197 when necessary. Updated the code almost completely.
    The steps used here to process raw Sentinel-1 files are not the most generic way; note that there is no single authoritative way to do this. Since different research requires different steps to prepare the raw data, you will need to follow your own. Also published at clubgis.
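    On the shapefile-to-GeoJSON/WKT point above: if you prefer to stay offline, GeoPandas can do the conversion too. A tiny sketch (the input filename is a placeholder for any generalised boundary shapefile):

import geopandas as gpd

# Read a (hypothetical) generalised boundary shapefile.
gdf = gpd.read_file("bd_boundary.shp")

# Write GeoJSON for the 'footprint' query style used above.
gdf.to_file("bd_bbox.geojson", driver="GeoJSON")

# Or print a WKT string of the merged outline for snappy's Subset operator.
print(gdf.geometry.unary_union.wkt)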
  8. 3 points
    News release April 1, 2019, Saint-Hubert, Quebec – The Canadian Space Agency and the Canada Centre for Mapping and Earth Observation are making RADARSAT-1 synthetic aperture radar images of Earth available to researchers, industry and the public at no cost. The 36,500 images are available through the Government of Canada's Earth Observation Data Management System. The RADARSAT-1 dataset is valuable for testing and developing techniques to reveal patterns, trends and associations that researchers may have missed when RADARSAT-1 was in operation. Access to these images will allow Canadians to make comparisons over time, for example, of sea ice cover, forest growth or deforestation, seasonal changes and the effects of climate change, particularly in Canada's North. This image release initiative is part of Canada's Open Government efforts to encourage novel Big Data Analytic and Data Mining activities by users. Canada's new Space Strategy places priority on acquiring and using space-based data to support science excellence, innovation and economic growth. Quick facts The RADARSAT Constellation Mission, scheduled for launch in May 2019, builds on the legacy of RADARSAT-1 and RADARSAT-2, and on Canada's expertise and leadership in Earth observation from space. RADARSAT-1 launched in November 1995. It operated for 17 years, well over its five-year life expectancy, during which it orbited Earth 90,828 times, travelling over 2 billion kilometres. It was Canada's first Earth observation satellite. RADARSAT-1 images supported relief operations in 244 disaster events. RADARSAT-2 launched in December 2007 and is still operational today. This project represents a unique collaboration between government and industry. MDA, a Maxar company, owns and operates the satellite and ground segment. The Canadian Space Agency helped to fund the construction and launch of the satellite. It recovers this investment through the supply of RADARSAT-2 data to the Government of Canada during the lifetime of the mission. Users can download these images through the Earth Observation Data Management System of the Canada Centre for Mapping and Earth Observation, a division of Natural Resources Canada (NRCan). NRCan is responsible for the long-term archiving and distribution of the images as well as downlinking of satellite data at its ground stations. source: https://www.canada.ca/en/space-agency/news/2019/03/open-data-over-36000-historical-radarsat-1-satellite-images-of-the-earth-now-available-to-the-public.html
  9. 3 points
    ArcGIS Excalibur 1.0 is a premium web application for ArcGIS Enterprise 10.7 that provides users with tools and capabilities in a project-based environment that streamlines image analysis and structured observation management. Interested in working with imagery in a modern, web-based experience? Here's a look at some of the features ArcGIS Excalibur 1.0 has to offer:

    Search for Imagery

    ArcGIS Excalibur makes it easy to search and discover imagery available to you within your organization through a number of experiences. You can connect directly to an imagery layer, an image service URL, or even through the imagery catalog search. The imagery catalog search allows you to quickly search for imagery layers over areas of interest to discover and queue images for further use.

    Work with Imagery

    Once you have located the imagery of interest, you can easily connect to the imagery exploitation canvas, where you can utilize a wide variety of tools to begin working with your imagery. The imagery exploitation canvas allows you to view your imagery on top of a default basemap, where the imagery is automatically orthorectified and aligned with the map. The exploitation canvas also enables you to simultaneously view the same image in a more focused manner, as it was captured in its native perspective.

    Display Tools

    Optimizing imagery to get the most value out of each image pixel is a breeze with ArcGIS Excalibur display tools. The image display tools include image renderers, filters, the ability to change band combinations, and even settings like DRA and gamma. Settings to change image transparency and compression are also included.

    Exploitation Tools

    Ever need to highlight key areas of interest through markup, labeling, and measurement? Through the markup tools, you can create simple graphics on top of your imagery using text and shape elements to call attention to areas of interest through outline, fill, transparency, and much more. The measurements tool allows you to measure horizontal and vertical distances, areas, and feature locations on an image.

    Export Tools

    The exploitation results saved in an image project can be easily shared using the export tools. The create presentation tool exports your current view directly to a Microsoft PowerPoint presentation, along with the metadata of the imagery.

    Introducing an Imagery Project

    ArcGIS Excalibur also introduces the concept of an imagery project to help streamline imagery workflows by leveraging the ArcGIS platform. An ArcGIS Excalibur imagery project is a dynamic way to organize resources, tools, and workflows required to complete an image-based task. An imagery project can contain geospatial reference layers and a set of tools for focused image analysis and structured observation management workflows. Content created within imagery projects can be shared and made available to your organization to leverage in downstream analysis and shared information products.
  10. 2 points
    With Huawei basically blocked from using Google services and infrastructure, the firm has taken steps to replace Google Maps on its hardware by signing a partnership with TomTom to provide maps, navigation, and traffic data to Huawei apps. Reuters reports that Huawei is entering this partnership with TomTom as the mapping tech company is based in the Netherlands — therefore side-stepping the bans on working with US firms. TomTom will provide the Chinese smartphone manufacturer with mapping, live traffic data, and software on smartphones and tablets. TomTom spokesman Remco Meerstra confirmed to Reuters that the deal had been closed some time ago but had not been made public by the company. This comes as TomTom unveiled plans to move away from making navigation hardware and will focus more heavily on offering software services — making this a substantial step for TomTom and Huawei. While TomTom doesn’t quite match the global coverage and update speed of Google Maps, having a vital portion of it filled by a dedicated navigation and mapping firm is one step that might appease potential global Huawei smartphone buyers. There is no denying the importance of Google app access outside of China but solid replacements could potentially make a huge difference — even more so if they are recognizable by Western audiences. It’s unclear when we may see TomTom pre-installed on Huawei devices but we are sure that this could be easily added by way of an OTA software update. The bigger question remains if people are prepared to switch from Google Maps to TomTom for daily navigation. resource: https://9to5google.com/2020/01/20/huawei-tomtom/
  11. 2 points
    January 3, 2020 - Recent Landsat 8 Safehold Update

    On December 19, 2019 at approximately 12:23 UTC, Landsat 8 experienced a spacecraft constraint which triggered entry into a Safehold. The Landsat 8 Flight Operations Team recovered the satellite from the event on December 20, 2019 (DOY 354). The spacecraft resumed nominal on-orbit operations and ground station processing on December 22, 2019 (DOY 356). Data acquired between December 22, 2019 (DOY 356) and December 31, 2019 (DOY 365) exhibit some increased radiometric striping and minor geometric distortions (see image below) in addition to the normal Operational Land Imager/Thermal Infrared Sensor (OLI/TIRS) alignment offset apparent in Real-Time tier data. Acquisitions after December 31, 2019 (DOY 365) are consistent with pre-Safehold Real-Time tier data and are suitable for remote sensing use where applicable. All acquisitions after December 22, 2019 (DOY 356) will be reprocessed to meet typical Landsat data quality standards after the next TIRS Scene Select Mirror (SSM) calibration event, scheduled for January 11, 2020.

    [Image] Landsat 8 Operational Land Imager acquisition on December 22, 2019 (path 148/row 044), after the spacecraft resumed nominal on-orbit operations and ground station processing. This acquisition demonstrates the increased radiometric striping and minor geometric distortions observed in all data acquired between December 22, 2019 and December 31, 2019. All acquisitions after December 22, 2019 will be reprocessed on January 11, 2020 to achieve typical Landsat data quality standards.

    Data not acquired during the Safehold event are listed below and displayed in purple on the map.

    [Map] Landsat 8 scenes not acquired from Dec 19-22, 2019:

    Path 207 Rows 160-161
    Path 223 Rows 60-178
    Path 6 Rows 22-122
    Path 22 Rows 18-122
    Path 38 Rows 18-122
    Path 54 Rows 18-214
    Path 70 Rows 18-120
    Path 86 Rows 24-110
    Path 102 Rows 19-122
    Path 118 Rows 18-185
    Path 134 Rows 18-133
    Path 150 Rows 18-133
    Path 166 Rows 18-222
    Path 182 Rows 18-131
    Path 198 Rows 18-122
    Path 214 Rows 34-122
    Path 230 Rows 54-179
    Path 13 Rows 18-122
    Path 29 Rows 20-232
    Path 45 Rows 18-133

    After recovering from the Safehold successfully, data acquired on December 20, 2019 (DOY 354) and from most of the day on December 21, 2019 (DOY 355) were ingested into the USGS Landsat Archive and marked as "Engineering". These data are still being assessed to determine if they will be made available for download to users through all USGS Landsat data portals.

    source: https://www.usgs.gov/land-resources/nli/landsat/january-3-2020-recent-landsat-8-safehold-update
  12. 2 points
    Just found this interesting article on the Agisoft forum. source: https://www.agisoft.com/forum/index.php?topic=7851.0
  13. 2 points
    One of my favorite image hosting sites is shutting down. This is their announcement: Rest in Peace TinyPic
  14. 2 points
    Not strictly about the Excel environment, but: https://github.com/orbisgis/h2gis/wiki/4.2-LibreOffice
  15. 2 points
    This is an interesting topic from a not-so-old webpage. I was searching for use cases of blockchain in a geospatial context and found this. The context is still challenging, but very noteworthy.

    What is a blockchain and how is it relevant for geospatial applications? (By Jonas Ellehauge, awesome map tools, Norway)

    A blockchain is an immutable trustless registry of entries, hosted on an open distributed network of computers (called nodes). It is potentially safer and cheaper than traditional centralised databases, is resilient to attacks, enhances transparency and accountability and puts people in control of their own data. Blockchain technology is already being used in some geospatial applications, as explained here. As an immutable registry for transactions of digital tokens, blockchain is suitable for geospatial applications involving data that is sensitive or a public good, autonomous devices and smart contracts.

    Use Cases

    The use cases are discussed further below. I have given a few short talks about this topic at various conferences, most recently at the international FOSS4G conference in Bonn, Germany, 2016.

    Public-good Data

    Open Data is Still Centralised Data

    Over the past two decades, I have seen how 'public-good' geospatial data has generally become much easier to get hold of, having originally been very inaccessible to most people. Gradually, the software to display and process the data became cheaper or even free, but the data itself – data that people had already paid for through their taxes – remained inaccessible. Some national mapping institutions and cadastres began distributing the data via the internet, although mostly with a price tag. Only in recent years have a few countries in Europe made public map data freely accessible. In the meantime, projects like OpenStreetMap have emerged in order to meet people's need for open data. It is hardly a surprise, then, that a myriad of new apps, mock-ups and business cases emerge in a region shortly after data is made available to the public there.

    Truly Public Open Data

    One of the reasons that this data has remained inaccessible for so long is that it is collected and distributed through a centralised organisation. A small group of people manage enormous repositories of geospatial data and can restrict or grant access to it. As I see it, this is where blockchain and related technologies like IPFS can enable people to build systems where the data is inherently public, no one controls it, anyone can access it, and anyone can review the full history of contributions to the data. Would it be free of charge to use data from such a system? Who would pay for it? I guess time will tell which business model is the most sustainable in that respect. OpenStreetMap is free to use, it is immensely popular and yet people gladly contribute to it – so who pays the cost for OSM? Bear in mind that there's no such thing as 'free data'. For example, the 'free' open data in Denmark today is paid for through taxes. So, even if it would cost a little to use the blockchain-based data, that wouldn't be so different from now – just that no one would be able to restrict access to the data, plus the open nature of competing nodes and contributors will minimise the costs.

    Autonomous Devices & Apps

    Uber and Airbnb are examples of consumer applications that rely on geospatial data and processing.
    They represent a centralised approach where the middleman owns and controls the data and charges a significant fee for connecting clients and providers with each other. If such apps were replaced by distributed peer-to-peer systems, they could be cheaper and give their users full control of their data. There is already such an alternative to Uber called Arcade.City. A peer-to-peer market app like OpenBazar may also benefit from geospatial components with regards to e.g. search and logistics. Such autonomous apps may currently have to rely on third parties for their geospatial components – e.g. Google Maps, Mapbox, OpenStreetMap, etc. With access to truly publicly distributed data as described above, such apps would be even more reliable and cheaper to run. An autonomous device such as a drone or a self-driving car inherently runs an autonomous application, so these two concepts are heavily intertwined. There's no doubt that self-navigating cars and drones will be a growing market in the near future. Uber and Tesla have big ambitions regarding cars, drones are being designed for delivery of consumer products (Amazon), and drone-based emergency response (drone defibrillator) and imaging (automatic selfie drone 'Lily') applications are emerging. Again, distributed peer-to-peer apps could cut out the middleman and reliance on third parties for their navigation and other geospatial components.

    Land Ownership

    What is Property?

    After some years in the GIS software industry, I realised that a very large part of my work revolved around cadastres/parcels and other administrative borders plus technical base maps featuring roads, buildings, etc. In view of my background in physical geography I thought that was pretty boring stuff and I dreamt about creating maps and applications that involved temperatures, wind, currents, salinity, terrain models, etc., because it felt more 'real'. I gradually realised that something about administrative data was nagging me – as if it didn't actually represent reality. Lately, I have taken an interest in philosophy about human interaction, voluntary association and self-ownership. It turns out that property is a moral, philosophical concept of assets acquired through voluntary transactions or homesteading. This perspective stretches at least as far back as John Locke in the 17th century. Such justly acquired property is reality, whereas law, governance services and computer code are systems that attempt to model reality. When such systems don't fit reality, the system is wrong and should be dismissed, possibly adjusted or replaced.

    Land Ownership

    For the vast majority of people in many developing countries, there is no mapping of parcels or proof of ownership available to the actual landowners. Christiaan Lemmen, an expert on cadastres, has experience from field work to map parcels in developing countries such as Nigeria, Liberia, etc., where corruption can be a big challenge within land administration. In his experience, however, people mostly agree on who owns what in their local communities. These people often have a need for proof of identity and proof of ownership for their justly acquired land in order to generate wealth, invest in their future and prevent fraud – while they often face problems with inefficient, expensive or corrupt government services.
    Ideally, we could build inexpensive, reliable and easy-to-use blockchain-based systems that will enable people to map and register their land together with their neighbours – without involving any government officials, lawyers or other middlemen.

    Geodesic Grids

    It has been suggested to use geodesic grids of discrete cells to register land ownership on a blockchain. Such cells can be shaped, e.g. as squares, triangles, pentagons, hexagons, etc., and each cell has a unique identifier. In a traditional cadastral system, parcels are represented with flexible polygons, which allows users to register any possible shape of a parcel. Although a grid of discrete cells doesn't allow such flexible polygons, it has an advantage in this case: each digital token on the blockchain (let's call it a 'Landcoin') can represent one unique cell in the grid. Hence, whoever owns a particular Landcoin owns the corresponding piece of land. Owning such a Landcoin means possessing the private encryption key that controls it – which is how other cryptocurrencies work. In order to represent complex and high-resolution geometries, it is preferable to use a grid which is infinitely sub-divisible so that ever-smaller triangles, hexagons or squares, etc., can be tied together to represent any piece of land. A digital token can also be infinitely sub-divisible. For comparison, the smallest unit of a Bitcoin is currently a 100-millionth – aka a 'Satoshi'. If needed, the core software could be upgraded to support even smaller units.

    What is a Blockchain?

    A blockchain is an immutable trustless registry of entries, hosted on an open distributed network of computers (called nodes). It is potentially safer and cheaper than traditional centralised databases, is resilient to attacks, enhances transparency and accountability and puts people in control of their own data.

    Safer – because no one controls all the data (known as root privilege in existing databases). Each entry has its own pair of public and private encryption keys and only the holder of the private key can unlock the entry and transfer it to someone else.

    Immutable – because each block of entries (added every 1-10 minutes) carries a unique hash 'fingerprint' of the previous block. Hence, older blocks cannot be tampered with.

    Cheaper – because anyone can set up a node and get paid in digital tokens (e.g. Bitcoin or Ether) for hosting a blockchain. This ensures that competition between nodes will minimise the cost of hosting it. It also saves the costs of massive security layers that otherwise apply to servers with sensitive data – this is because of the no-root-privilege security model and, with old entries being immutable, there's little need to protect them.

    Resilient – because there is no single point of failure, there's practically nothing to attack. In order to compromise a blockchain, you'd have to hack each individual user one by one in order to get hold of their private encryption keys that give access to that user's data only. Another option is to run over 50% of the nodes, which is virtually impossible and economically impractical.

    Transparency and accountability – the fact that existing entries cannot be tampered with makes a blockchain a transparent source of truth and history for your application. The public nature of it makes it easy to hold people accountable for their activities.

    Control – the immutable and no-root-privilege character puts each user in full control of his/her own data using the private encryption keys.
    This leads to real peer-to-peer interaction without any middleman and without an administrator that can deny users access to their data.

    Trustless – because each user fully controls his/her own data, users can safely interact without knowing or trusting each other and without any trusted third parties.

    Smart Contracts and DAPPs

    A blockchain can be more than a passive registry of entries or transactions. The original Bitcoin blockchain supports limited scripting allowing for programmable transactions and smart contracts – e.g. where specified criteria must be fulfilled leading to transactions automatically taking place. Possibly the most popular alternative to Bitcoin is Ethereum, which is a multi-purpose blockchain with a so-called 'Turing complete' programming interface, which allows developers to create virtually any imaginable application on this platform. Such applications are referred to as decentralised autonomous applications (DAPPs) and are virtually impossible for third parties to stop or censor. [1]

    IPFS

    IPFS is a distributed file system and web protocol, which can complement or even replace HTTP. Instead of referring to files by their location on a host or IP address, it refers to files by their content. This means that when requested, IPFS will return the content from the nearest possible or even multiple computers rather than from a central server. That could be on the computer next to you, on your local network or somewhere in the neighbourhood.

    Jonas Ellehauge is an expert on geospatial software, GIS and web development, enthusiastic about open source, Linux and UI/UX. Ellehauge is passionate about science, philosophy, entrepreneurship, economy and communication. His background in physical geography provides extensive knowledge of spatial analyses and spatial problem solving.
  16. 2 points
    Multifunction casing: you can run 3D games and grate cheese for your hamburger. Excellent thinking, Apple, as always LOL
  17. 2 points
    We are all already familiar with GPS navigation outdoors and what wonders it does not only for our everyday life, but also for business operations. Outdoor maps, allowing for navigation via car or by foot, have long helped mankind to find even the most remote and hidden places. Increased levels of efficiency, unprecedented levels of control over operational processes, route planning, monitoring of deliveries, safety and security regulations and much more have been made possible. Some places are, however, harder to reach and navigate than others. For instance, places like big indoor areas – universities, hospitals, airports, convention centers or factories, among others. Luckily, that struggle is about to become a thing of the past. So what’s the solution for navigating through and managing complex indoor buildings? Indoor Mapping and Visualization with ArcGIS Indoors The answer is simple – indoor mapping. Indoor mapping is a revolutionary concept that visualizes an indoor venue and spatial data on a digital 2D or 3D map. Showing places, people and assets on a digital map enables solutions such as indoor positioning and navigation. These, in turn, allow for many different use cases that help companies optimize their workflows and efficiencies. Mobile Navigation and Data The idea behind this solution is the same as outdoor navigation, only instead it allows you to see routes and locate objects and people in a closed environment. As GPS signals are not available indoors, different technology solutions based on either iBeacons, WiFi or lighting are used to create indoor maps and enable positioning services. You can plan a route indoors from point A to point B with customized pins and remarks, analyze whether facilities are being used to their full potential, discover new business opportunities, evaluate user behaviors and send them real-time targeted messages based on their location, intelligently park vehicles, and the list goes on! With the help of geolocation, indoor mapping stores and provides versatile real-time data on everything that is happening indoors, including placements and conditions of assets and human movements. This allows for a common operating picture, where all stakeholders share the same level of information and insights into internal processes. Having a centralized mapping system enables effortless navigation through all the assets and keeps facility managers updated on the latest changes, which ultimately improves business efficiency. Just think how many operational insights can be received through visualizations of assets on your customized map – you can monitor and analyze the whole infrastructure and optimize the performance accordingly. How to engage your users/visitors at the right time and place? What does it take to improve security management? Are the workflow processes moving seamlessly? Answers to those and many other questions can be found in an indoor mapping solution. Interactive indoor experiences are no longer a thing of the future, they are here and now. source: https://www.esri.com/arcgis-blog/products/arcgis-indoors/mapping/what-is-indoor-mapping/
  18. 2 points
    Hi everybody! I'm an Italian architect who deals with GIS-related topics from time to time. I also like to draw my own sea maps for my chartplotter.
  19. 2 points
    SarVision was created in 2000 as a spin-off from Wageningen University (WUR) in the Netherlands. SarVision pioneers the operational application of systematic satellite monitoring and mapping systems for environmental and natural resource management. Our innovative systems provide our partners with the latest maps and information on agriculture and land use, forest cover change, fire and hydrology. Our in-house cutting-edge radar technology, which « sees » through clouds, smoke and haze, enables continuous land surface monitoring, updating data on a continuous basis (bi-weekly to yearly). SarVision contributes to numerous sustainable development efforts in tropical regions around the globe, working directly with organisations as diverse as space agencies, multilateral institutions, government agencies, local community associations, farmers, agribusiness, logging and plantation companies, nature conservation organisations, oil and gas companies, universities and insurance companies.

    Job description

    We are looking for a remote sensing expert to join our team. Together with SarVision experts, you will contribute to the development and implementation of operational services in the areas of agriculture, water, forest and land use mapping and monitoring. You will have the opportunity to apply and further develop your skills in:

    • The processing of satellite images: pre-processing tasks, image classification using in-house and external software packages;
    • GIS: quality control and validation, data analysis and presentation, integration of multiple data sources;
    • IT and programming: automation of processing tasks and processing chains from data acquisition to delivery of the final product.

    You will mainly work in a team with SarVision remote sensing experts, but also carry out operational tasks autonomously.

    Requirements

    • A Bachelor's or Master's degree with a main focus on Remote Sensing, Geoinformatics, Geography, Agriculture, Forestry or a related area of expertise;
    • Professional experience in a remote sensing company would be beneficial;
    • Ability to work in a complex, multi-task team situation;
    • Willingness and ability to learn new skills quickly;
    • Ability to work under time pressure and respect deadlines, keeping track of long-term objectives;
    • Ability to travel occasionally to developing countries;
    • Very good English language skills; Dutch and/or Spanish advantageous.

    Technical skills:

    • Remote sensing background;
    • Experience in image processing for agriculture, forest, and land cover/land use applications;
    • Knowledge of statistical analyses (sampling design, accuracy assessment);
    • Programming skills: experience/knowledge of Python, GDAL; IDL, Matlab, R, C++, Java advantageous;
    • Experience with Linux and Bash advantageous;
    • Experience with QGIS, PostGIS advantageous;
    • Radar data processing and machine learning skills advantageous.

    Duration & starting date

    We offer a fixed-term contract of 1 year, with the possibility of extension. Starting date as soon as possible.

    How to apply?

    Send a CV and motivation letter in English to Wilbert van Rooij ([email protected]) before June 25th 2019. www.sarvision.nl
  20. 2 points
    Topcon Positioning Group’s Dave Henderson offers a rundown on the company’s latest products, including the Falcon 8+ drone, Sirius Pro, MR-2 modular receiver, and B210 and B125 receiver boards, at Xponential 2019. source: https://www.gpsworld.com/topcon-showcases-falcon-8-drone-sirius-pro-and-receiver-boards-at-xponential-2019/
  21. 2 points
    As part of ArcGIS Enterprise 10.7, we (Esri) are thrilled to release a new capability that unlocks versatile data science tools and the limitless potential of Python in your Web GIS deployment. ArcGIS Notebooks provide users with a Jupyter notebook environment, hosted in your ArcGIS Enterprise portal and powered by the new ArcGIS Notebook Server. ArcGIS Notebooks are built to run big data analysis, deep learning models, and dynamic visualization tools. Notebooks are implemented using Docker containers – a virtualized operating system that provides an isolated "sandbox" style environment for each notebook author. The computational resources for each container can be configured by the organization – allowing the flexibility for notebook authors to get the computing resources they need, when they need them.

    Seamless integration with the portal

    ArcGIS Notebook Server is a new licensing role for ArcGIS Server. Because it works with the Docker container allocation technology to deliver a separate container for each notebook author, it requires specific installation steps to get up and running. Take a look at the ArcGIS Notebook Server install guide to see how it works. Once you've installed ArcGIS Notebook Server and configured it with your portal, you can create custom roles to grant notebook privileges to the members of your organization so that they can create and edit notebooks.

    Put Python to work for you

    At the core of the ArcGIS Notebook experience are Esri's powerful Python resources: ArcPy and the ArcGIS API for Python. Alongside these are hundreds of popular Python libraries, such as TensorFlow, scikit-learn, and fast.ai. It all comes together to give you a complete Python workstation for spatial analysis, data science, deep learning, and content management. The Standard license of ArcGIS Notebook Server, which comes at no additional cost for ArcGIS Enterprise customers, bundles the Python API and nearly 300 other third-party Python libraries built in. The Jupyter notebook environment has long been an essential medium for Python API users; with ArcGIS Notebooks, that environment is now available directly in the ArcGIS Enterprise portal.

    Turn analysis into action

    Location is the common thread that runs through almost any problem. What you buy, who your customers are, the impact that your business has on the natural world, and that the natural world has on your business are all problems of location. Traditional data science has many powerful tools and algorithms for solving problems. Spatial data science – GeoAI – also brings in spatial data, methods, and tools. GeoAI can help you create more effective models that more closely resemble the problems you want to solve. Because of this, spatial data science models are better suited to model the impact of the solution you create.

    Installation and getting started: Esri Jupyter Notebook

    And for those who want their own free Jupyter notebook: # install miniconda and hit conda install -y jupyter 😁
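    As a taste of what runs inside a notebook, here is a minimal sketch using the ArcGIS API for Python; the search query is illustrative, and the commented-out portal URL and credentials are placeholders:

from arcgis.gis import GIS

# Inside a hosted ArcGIS Notebook, "home" connects as the signed-in portal user.
gis = GIS("home")
# From outside the portal you would connect explicitly, e.g.:
# gis = GIS("https://portal.example.com/portal", "username", "password")

# Search the portal for feature layers and list what comes back.
items = gis.content.search("owner:me", item_type="Feature Layer", max_items=5)
for item in items:
    print(item.title)

# Draw the first hit on a Jupyter map widget.
m = gis.map("Redlands, CA")
if items:
    m.add_layer(items[0])
m  # displaying the widget is the last expression in the notebook cell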
  22. 2 points
Klau Geomatics has released Real-Time Precise Point Positioning (PPP) for aerial mapping and drone positioning that enables 3 to 5 cm initial positioning accuracy, anywhere in the world, without any base station data or network corrections. With this, you just need to fly your drone at any distance, anywhere. The system allows you to navigate with real-time cm-level positioning or geotag your mapping photos and lidar data. You don't need to think about setting up a base station, finding quality CORS data or setting up an RTK radio link. You don't need to be in range of a CORS station; you can fly autonomously, in remote areas, along long corridors, with unlimited range. It just works, giving you centimetre-level accuracy, anywhere.

Now, with this latest satellite-based positioning technology, 3 to 5 cm accuracy can be achieved anywhere in the world, with no base station. KlauPPP leverages NovAtel's industry-leading technology to achieve this quantum leap in PPP accuracy. The NovAtel PPP and Klau Geomatics hardware/software system is now the simplest, most convenient and accurate positioning system for UAVs and manned aircraft. The bundled solution enables accurate positioning in any published or custom coordinate system and datum.

This technology is very applicable to surveying, mapping, navigation and particularly the emerging drone inspection industry, which is starting to realize that absolute accuracy is essential to analyze change over time in 3D assets. A BVLOS parcel delivery drone can now travel across a country and arrive exactly on its landing pad. No range limitations, no base station requirements or radio links. Highly accurate autonomous flight.

Large-scale enterprise drone companies can deploy their fleet of operators with a simple, mechanical workflow to capture accurate, repeatable data, without the complications of the survey world: RTK radio links, network connections, or logging base station data within range of each of their many projects. Now they have a simple, consistent operation that just works, every time, every location.

"Just as Klau Geomatics led the industry from RTK and GCPs to PPK, we now lead the charge to PPP as the next technology for simple, accurate drone operations," says Rob Klau, Director of Klau Geomatics.

source: http://geomatics.com.au/
  23. 1 point
I saw similar news last month – Using Machine Learning to "Nowcast" Precipitation in High Resolution, by Google. The results seemed pretty good. Here is a visualization of predictions made over the course of roughly one day. Left: the 1-hour HRRR prediction made at the top of each hour, the limit to how often HRRR provides predictions. Center: the ground truth, i.e., what we are trying to predict. Right: the predictions made by our model. Our predictions are every 2 minutes (displayed here every 15 minutes) at roughly 10 times the spatial resolution of HRRR. Notice that we capture the general motion and general shape of the storm. The two methods seem similar.
  24. 1 point
Google announced that Dataset Search, a service that lets you search close to 25 million different publicly available data sets, is now out of beta. Dataset Search first launched in September 2018. Researchers can use these data sets, which range from pretty small ones (like how many cats there were in the Netherlands from 2010 to 2018) to large annotated audio and image sets, to check their hypotheses or train and test their machine learning models. The tool currently indexes about 6 million tables.

With this release, Dataset Search is getting a mobile version and Google is adding a few new features. The first of these is a new filter that lets you choose which type of data set you want to see (tables, images, text, etc.), which makes it easier to find the data you're looking for. In addition, the company has added more information about the data sets and the organizations that publish them. I searched 'remote sensing' and it turned up geographic information datasets.

A lot of the data in the search index comes from government agencies. In total, Google says, there are about 2 million U.S. government data sets in the index right now. But you'll also regularly see Google's own Kaggle show up, as well as a number of other public and private organizations that make public data available. As Google notes, anybody who owns an interesting data set can make it available to be indexed by using standard schema.org markup to describe the data in more detail.

Source
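For anyone curious what that schema.org markup looks like in practice, here is a minimal sketch that builds a schema.org/Dataset record as a Python dict and prints the JSON-LD script tag a page would embed; the dataset described is invented for illustration:

# Minimal sketch: emitting schema.org/Dataset JSON-LD for indexing by
# Dataset Search. The dataset described here is invented.
import json

dataset = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example land-cover samples",
    "description": "Point samples of land-cover classes (illustration only).",
    "url": "https://example.org/datasets/land-cover-samples",
    "keywords": ["remote sensing", "land cover"],
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

print('<script type="application/ld+json">')
print(json.dumps(dataset, indent=2))
print("</script>")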
  25. 1 point
Update: this is what I have done so far:
1. Disabled hardware acceleration.
2. Turned off all compatibility settings except Run This Program As Administrator (point 4 in the article above).
3. Stopped the Windows Presentation Foundation Font Cache service (point 5 in the article above).
I saw a small increase in performance, and the workspace is almost playable now, but I still notice lag when scrolling.
  26. 1 point
  27. 1 point
While many advancements have been made this last decade in the automated classification of above-surface features using remote sensing data, progress in detecting underground features has lagged. Technologies for detecting such features, including ground penetrating radar, electrical resistivity, and magnetometry, exist, but methods for feature extraction and identification mostly depend on the experience of the instrument user. One problem has been creating approaches that can deal with complex signals. Ground penetrating radar (GPR), for instance, often produces ambiguous signals that can contain a lot of noise interference relative to the feature one wants to identify.

One approach has been to fit approximation polynomials to given signals and then use the derived coefficients as inputs to a neural network model. This technique can help reduce noise and differentiate signals that follow clear patterns from larger background signals. Differentiating signals based on a minimized set of coefficients is one way to simplify and better separate data signals.[1] Another approach is to use a multilayer perceptron with a nonlinear activation function that transforms the data. This is effectively a similar technique but uses different transform functions than other neural network models. Applications of this approach include differentiating the thickness of underground structures from surrounding sediments and soil.[2]

Other methods have been developed to determine the best locations to place sources and receivers so that relevant data can be captured. In seismic research, convolutional neural networks (CNNs) have been applied to determine better positioning of sensors so that better data quality can be achieved. This has resulted in very high precision and recall rates, at over 0.99. Using a series of filtered layers, signals can be assessed for their data quality against that of manually placed instruments. The quality of a placement can also be compared to other locations to see if the overall signal capture improves. Thus, rather than focusing mainly on signal processing, this method also focuses on signal placement and capture, comparing candidate placements to optimize data capture locations.[3]

One problem in geophysical data is inversion, where data points are interpreted to be the opposite of what they are due to a reflective signal that may hide the nature of the true data. Techniques using CNNs have also been developed whereby the patterning of data signals around a given inversion can be filtered and assessed using activation functions. Multiple layers that transform and reduce data to specific signals help to identify where patterns of data suggest an inversion is likely, while checking whether this follows patterns from other data using Bayesian learning techniques.[4]

source: https://www.gislounge.com/automated-remote-sensing-of-underground-features/
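As an illustration of the polynomial-coefficient idea described above (not the cited papers' actual code), here is a minimal sketch: synthetic GPR-like traces are fitted with a low-order polynomial and the coefficient vectors are classified with a small scikit-learn multilayer perceptron. All data and parameter choices are invented for the example.

# Minimal sketch: approximate each trace with a polynomial and classify
# the coefficient vectors with an MLP. Data are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256)          # sample times along one trace

def synthetic_trace(has_target: bool) -> np.ndarray:
    """Background ringing plus, optionally, a buried-feature echo."""
    trace = np.sin(8 * np.pi * t) * np.exp(-3 * t)        # background
    if has_target:
        trace += 0.8 * np.exp(-((t - 0.6) ** 2) / 0.002)  # localized echo
    return trace + 0.05 * rng.standard_normal(t.size)     # sensor noise

labels = rng.integers(0, 2, size=400)
traces = np.stack([synthetic_trace(bool(y)) for y in labels])

# Degree-8 polynomial fit per trace; the coefficients are the features.
coeffs = np.stack([np.polyfit(t, tr, deg=8) for tr in traces])

X_train, X_test, y_train, y_test = train_test_split(
    coeffs, labels, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))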
  28. 1 point
Interesting articles: North-South displacement field – 1999 Hector Mine earthquake, California

In complement to seismological records, knowledge of the ruptured fault geometry and of co-seismic ground displacements are key data for investigating the mechanics of seismic rupture. This information can be retrieved from sub-pixel correlation of optical images. We are investigating the use of SPOT (Satellite pour l'Observation de la Terre) satellite images. The technique developed here is attractive due to the operational status of a number of optical imaging programs and the availability of archived data. However, uncertainties on the imaging system itself and on its attitude dramatically limit its potential. We overcome these limitations by applying an iterative corrective process allowing for precise image registration, which takes advantage of the availability of accurate Digital Elevation Models with global coverage (SRTM). This technique is thus a valuable complement to SAR interferometry, which provides accurate measurements kilometers away from the fault but generally fails in the near-fault zone, where the fringes get noisy and saturated. A comparison between the two methods is briefly discussed, with application to the 1992 Landers earthquake in California (Mw 7.3). Applications of this newly developed technique are presented: the horizontal co-seismic displacement fields induced by the 1999 Hector Mine earthquake in California (Mw 7.1) and by the 1999 Chi-Chi earthquake in Taiwan (Mw 7.5) have recently been retrieved using archive images. The data obtained can be downloaded (see further down).

Latest Study Cases

Sub-pixel correlation of optical images
Following is the flow chart of the technique that has been developed. It allows for precise orthorectification and coregistration of the SPOT images. More details about the optimization process are given in the next sections.

Understanding the disparities measured from optical images
Differences in geometry between the two images to be registered:
- Uncertainties on attitude parameters (roll, pitch, yaw)
- Inaccuracy of orbital parameters (position, velocity)
- Incidence angle differences + topography uncertainties (parallax effect)
- Optical and electronic biases (optical aberrations, CCD misalignment, focal length, sampling period, etc.)
» May account for disparities up to 800 m on SPOT 1, 2, 3, 4 images; 50 m for SPOT 5 (see [3]).

Ground deformations:
- Earthquakes, landslides, etc.
» Typically sub-pixel scale: ranging from 0 to 10 meters.

Temporal decorrelation:
- Changes in vegetation, rivers, changes in urban areas, etc.
» Correlation is lost: adds noise to the measurement – up to 1 m.
» Ground deformations are largely dominated by the geometrical artifacts.

Precise registration: geometrical corrections
SPOT satellites are pushbroom imaging systems ([1], [2]): all optical parts remain fixed during acquisition and the scanning is accomplished by the forward motion of the spacecraft. Each line in the image is thus acquired at a different time and subject to the various motions of the platform. The orthorectification process consists in modeling and correcting these variations to produce images free of cartographic distortion. It is then possible to accurately register images and look for their disparities using correlation techniques. Attitude variations (roll, pitch, and yaw) during the scanning process have to be integrated into the image model (see [1], [2]).
Errors in correcting the satellite look directions will result in projecting the image pixels at the wrong location on the ground: important parallax artifacts will be seen when measuring displacement between two images. Exact pixel projection on the ground is achieved through an optimization algorithm that iteratively corrects the look directions by selecting ground control points. An accurate topography model has to be used.

What parameters to optimize?
- Initial attitude values of the platform (roll, pitch, yaw),
- Constant drift of the attitude values along the image acquisition,
- Focal length (different value depending on the instrument, HRG1 – HRG2),
- Position and velocity.

How to optimize: an iterative algorithm using a set of GCPs (Ground Control Points). GCPs are generated automatically with sub-pixel accuracy: they result from a correlation between an orthorectified reference frame and the rectified image whose parameters are to be optimized.

A two-stage procedure:
- One of the images is optimized with respect to the shaded DEM (GCPs are generated from the correlation with the shaded DEM). The DEM is then considered as the ground truth. No GPS points are needed.
- The other image is then optimized using another set of GCPs resulting from the correlation with the first image (co-registration).

Measuring co-seismic deformation with InSAR, a comparison
A fringe represents a near-vertical displacement of 2.8 cm. SAR interferogram (ERS): near-vertical component of the ground displacement induced by the 1992 Landers earthquake [Massonnet et al., 1993]. There are no organized fringes in a band within 5-10 km of the fault trace: the displacement is sufficiently large that the change in range across a radar pixel exceeds one fringe per pixel, and coherence is lost. http://earth.esa.int/applications/data_util/ndis/equake/land2.htm
» SAR interferometry is not a suitable technique for measuring near-fault displacements.

The 1992 Landers earthquake revisited: profiles in offsets and elastic modeling show good agreement. From: [6]. Other applications of the technique: see [4], [5].
» Fault ruptures can be imaged with this technique.

Applying the precise rectification algorithm + sub-pixel correlation: the 1999 Hector Mine earthquake (Mw 7.1, California)
Obtaining the data (available in ENVI file format; load bands as grayscale images; bands are: N/S offsets, E/W offsets, SNR). Raw and filtered results: HectorMine.zip
Pre-earthquake image: SPOT 4, acquisition date 08-17-1998, ground resolution 10 m.
Post-earthquake image: SPOT 2, acquisition date 08-18-2000, ground resolution 10 m.
Offsets measured from correlation correspond to sub-pixel offsets in the raw images. Correlation windows: 32 x 32 pixels; 96 m between two measurements.

So far we have:
- A precise mapping of the rupture zone: the offset fields have a resolution of 96 m,
- Measurements with sub-pixel accuracy (displacements of at most 10 meters),
- Improved global georeferencing of the images with no GPS measurements,
- Improved processing time, since the GCP selection is automatic,
- Suppressed the main attitude artifacts.
The profiles do not show any long-wavelength deformations (see Dominguez et al. 2003).

We notice:
- Linear artifacts in the along-track direction due to CCD misalignments (schematic of a DIVOLI showing four CCD linear arrays),
- Some topographic artifacts: the image resolution is higher than that of the DEM,
- Several decorrelations due to rivers and clouds,
- High-frequency noise due to the noise sensitivity of the Fourier correlator (see Van Puymbroeck et al.).

Conclusion
The sub-pixel correlation technique has been improved to overcome most of its limitations:
» Precise rectification and co-registration of the images,
» No more topographic effects (depending on the DEM resolution),
» No need for GPS points – an independent and automatic algorithm,
» Better spatial resolution (see Van Puymbroeck et al.).
To be improved:
» Stripes due to CCD misalignment,
» High-frequency noise from the correlator,
» Processing of images with corrupted telemetry.
» The sub-pixel correlation technique appears to be a valuable complement to SAR interferometry for ground deformation measurements.

References:
[1] SPOT 5 geometry handbook: ftp://ftp.spot.com/outgoing/SPOT_docs/geometry_handbook/S-NT-73-12-SI.pdf
[2] SPOT User's Handbook Volume 1 – Reference Manual: ftp://ftp.spot.com/outgoing/SPOT_docs/SPOT_User's Handbook/SUHV1RM.PDF
[3] SPOT 5 Technical Summary: ftp://ftp.spot.com/outgoing/SPOT_docs/technical/spot5_tech_slides.ppt
[4] Dominguez, S., J.P. Avouac, R. Michel. Horizontal co-seismic deformation of the 1999 Chi-Chi earthquake measured from SPOT satellite images: implications for the seismic cycle along the western foothills of Central Taiwan, J. Geophys. Res., 107, 10.1029/2001JB00482, 2003.
[5] Michel, R. and J.P. Avouac. Deformation due to the 17 August Izmit earthquake measured from SPOT images, J. Geophys. Res., 107, 10.1029/2000JB000102, 2002.
[6] Van Puymbroeck, N., Michel, R., Binet, R., Avouac, J.P. and Taboury, J. Measuring earthquakes from optical satellite images, Applied Optics, 39, 20, 3486-3494, 2000.

Publications:
Leprince, S., Barbot, S., Ayoub, F., Avouac, J.P. Automatic, Precise, Ortho-rectification and Co-registration for Satellite Image Correlation, Application to Seismotectonics. To be submitted.

Conferences:
F. Levy, Y. Hsu, M. Simons, S. Leprince, J. Avouac. Distribution of coseismic slip for the 1999 Chi-Chi Taiwan earthquake: New data and implications of varying 3D fault geometry. AGU 2005 Fall Meeting, San Francisco.
M. Taylor, S. Leprince, J. Avouac. A Study of the 2002 Denali Co-seismic Displacement Using SPOT Horizontal Offsets, Field Measurements, and Aerial Photographs. AGU 2005 Fall Meeting, San Francisco.
Y. Kuo, F. Ayoub, J. Avouac, S. Leprince, Y. Chen, J.H. Shyu, Y. Kuo. Co-seismic Horizontal Ground Slips of the 1999 Chi-Chi Earthquake (Mw 7.6) Deduced From Image Comparison of Satellite SPOT and Aerial Photos. AGU 2005 Fall Meeting, San Francisco.

source: http://www.tectonics.caltech.edu/geq/spot_coseis/
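For readers who want to experiment with the idea, sub-pixel offsets between two co-registered patches can be estimated with upsampled phase correlation. The sketch below uses scikit-image's phase_cross_correlation rather than the authors' Fourier correlator, and the ground shift is synthetic:

# Minimal sketch: estimating a sub-pixel offset between two image patches
# with upsampled phase correlation (scikit-image). Conceptually related to
# the Fourier correlator discussed above, but not the authors' code.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(1)
reference = rng.random((128, 128))

true_shift = (3.27, -1.84)                    # synthetic ground motion, pixels
moving = nd_shift(reference, true_shift, order=3)

# upsample_factor=100 resolves shifts to roughly 1/100 of a pixel.
est_shift, error, _ = phase_cross_correlation(
    reference, moving, upsample_factor=100)
# est_shift is the shift that registers `moving` onto `reference`,
# i.e. approximately the negative of the applied true_shift.
print("estimated:", est_shift, " applied:", true_shift)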
  29. 1 point
Please elaborate: do you plan on using remote sensing data to make an animation? Please add the details. A simple example of these functions can be seen here: http://animove.org/wp-content/uploads/2019/04/Daniel_Palacios_animate_moveVis.html
  30. 1 point
Google says it has built a computer that is capable of solving problems that classical computers practically cannot. According to a report published in the scientific journal Nature, Google's processor, Sycamore, performed a truly random-number generation task in 200 seconds. That same task would take about 10,000 years for a state-of-the-art supercomputer to execute. The achievement marks a major breakthrough in the technology world's decades-long quest to use quantum mechanics to solve computational problems. Google CEO Sundar Pichai wrote that the company started exploring the possibility of quantum computing in 2006.

In classical computers, bits can store information as either a 0 or a 1 in binary notation. Quantum computers use quantum bits, or qubits, which can be both 0 and 1. According to Google, the Sycamore processor uses 53 qubits, which allows for a drastic increase in speed compared with classical computers. The report acknowledges that the processor's practical applications are limited. Google says Sycamore can generate truly random numbers without utilizing the pseudo-random formulas that classical computers use.

Pichai called the success of Sycamore the "hello world" moment of quantum computing. "With this breakthrough we're now one step closer to applying quantum computing to—for example—design more efficient batteries, create fertilizer using less energy, and figure out what molecules might make effective medicines," Pichai wrote.

IBM has pushed back, saying Google hasn't achieved supremacy because "ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity." On its blog, IBM further discusses its objections to the term "quantum supremacy." The authors write that the term is widely misinterpreted. "First because, as we argue above, by its strictest definition the goal has not been met," IBM's blog says. "But more fundamentally, because quantum computers will never reign 'supreme' over classical computers, but will rather work in concert with them, since each have their unique strengths."

News of Google's breakthrough has raised concerns among some people, such as presidential hopeful Andrew Yang, who believe quantum computing will render password encryption useless. Theoretical computer science professor Scott Aaronson refuted these claims on his blog, writing that the technology needed to break cryptosystems does not exist yet. The concept of quantum computers holding an advantage over classical computers dates back to the early 1980s. In 2012, John Preskill, a professor of theoretical physics at Caltech, coined the term "quantum supremacy."

source: https://www.npr.org/2019/10/23/772710977/google-claims-to-achieve-quantum-supremacy-ibm-pushes-back
  31. 1 point
Looking forward to playing Mario and The Legend of Zelda on those devices 😁
  32. 1 point
SELECT *, st_buffer(geom, 50) AS geom_buf
INTO gis_osm_pois_buf
FROM gis_osm_pois;

Add * to your query to include all your fields. Since the new geometry field comes from st_buffer, alias it as you prefer (geom/geometry/the_geom). One caveat: if the source table already has a column named geom, the alias must differ from it (as with geom_buf above), otherwise the SELECT ... INTO fails with a duplicate-column error.
  33. 1 point
I am enthusiastic about creating an extension like GIS.XL, but for LibreOffice instead of Microsoft Excel. Where should I start? http://www.gisxl.com/Features The GIS.XL add-in provides features and functions for working with spatial data directly inside the Excel environment. The add-in includes a standard interface, familiar from other GIS programs: Map and Legend. Combine Excel (tabular) data and spatial (map) data in layers.
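One possible starting point: LibreOffice Calc is scriptable in Python through the UNO bridge, which would be a natural foundation for a GIS.XL-style extension. Below is a minimal sketch of a Calc Python macro that reads lon/lat pairs from the first sheet; the column layout and function name are my own assumptions:

# Minimal sketch of a LibreOffice Calc Python macro. It runs inside
# LibreOffice, where the scripting context XSCRIPTCONTEXT is provided
# automatically. Reading lon/lat from columns A and B is an assumption.
def read_coordinates(*args):
    doc = XSCRIPTCONTEXT.getDocument()         # the current Calc document
    sheet = doc.getSheets().getByIndex(0)      # first sheet
    points = []
    row = 0
    while True:
        lon_cell = sheet.getCellByPosition(0, row)   # column A
        lat_cell = sheet.getCellByPosition(1, row)   # column B
        if lon_cell.getString() == "":
            break
        points.append((lon_cell.getValue(), lat_cell.getValue()))
        row += 1
    # A real extension would now build geometries and render a map view.
    return points

g_exportedScripts = (read_coordinates,)        # expose the macro to the UI

From there, the heavier GIS work (projections, geometry, rendering) could be delegated to existing Python libraries.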
  34. 1 point
If they do pull it off, then they would be the third titan in the mobile market, something Microsoft Mobile failed at miserably.
  35. 1 point
This year at WWDC 2019, Apple unveiled a cheese grater and called it the new Mac Pro. But seeing the 2019 Mac Pro once is enough to remember it for a long time. Specs: according to the Apple website, you can spend as much as $35,000+ on it !! 🤯😬 Source
  36. 1 point
Hello everyone, I'm Halid, from Bosnia and Herzegovina, a geodetic engineer. I don't have much experience in the GIS area, but I'll do my best to contribute, and I hope I'll also get the info I need :))
  37. 1 point
The U.S. Air Force's second new GPS III satellite, bringing higher-power, more accurate and harder-to-jam signals to the GPS constellation, has arrived in Florida for launch. On March 18, Lockheed Martin shipped the Air Force's second GPS III space vehicle (GPS III SV02) to Cape Canaveral for an expected July launch. Designed and built at Lockheed Martin's GPS III Processing Facility near Denver, the satellite traveled from Buckley Air Force Base, Colorado, to the Cape on a massive Air Force C-17 aircraft. The Air Force nicknamed GPS III SV02 "Magellan" after the Portuguese explorer Ferdinand Magellan.

GPS III is the most powerful and resilient GPS satellite ever put on orbit. Developed with an entirely new design for U.S. and allied forces, it will have three times greater accuracy and up to eight times improved anti-jamming capability over the previous GPS II satellite design block, which makes up today's GPS constellation. GPS III will also be the first GPS satellite to broadcast the new L1C civil signal. Shared by other international global navigation satellite systems, such as Galileo, the L1C signal will improve future connectivity worldwide for commercial and civilian users.

The Air Force began modernizing the GPS constellation with new technology and capabilities with the December 23, 2018 launch of its first GPS III satellite. GPS III SV01 is now receiving and responding to commands from Lockheed Martin's Launch and Checkout Center at the company's Denver facility. (Photo caption: Lockheed Martin shipped the U.S. Air Force's second GPS III to Cape Canaveral, Florida ahead of its expected July launch. Photo: Lockheed Martin.)

"After orbit raising and antenna deployments, we switched on GPS III SV01's powerful signal-generating navigation payload and on Jan. 8 began broadcasting signals," said Johnathon Caldwell, Lockheed Martin's Vice President for Navigation Systems. "Our on-orbit testing continues, but the navigation payload's capabilities have exceeded expectations and the satellite is operating completely healthy."

GPS III SV02 is the second of ten new GPS III satellites under contract and in full production at Lockheed Martin. GPS III SV03-08 are now in various stages of assembly and test. The Air Force declared the second GPS III "Available for Launch" in August and, in November, called GPS III SV02 up for its 2019 launch.

In September 2018, the Air Force selected Lockheed Martin for the GPS III Follow On (GPS IIIF) program, an estimated $7.2 billion opportunity to build up to 22 additional GPS IIIF satellites with additional capabilities. GPS IIIF builds off Lockheed Martin's existing modular GPS III design, which was designed to evolve with new technology and changing mission needs. On September 26, the Air Force awarded Lockheed Martin a $1.4 billion contract to start up the program and to contract the 11th and 12th GPS III satellites.

Once declared operational, GPS III SV01 and SV02 are expected to take their place in today's 31-satellite-strong GPS constellation, which provides positioning, navigation and timing services to more than four billion civil, commercial and military users.

source: https://www.satellitetoday.com/launch/2019/03/26/lockheed-martin-ships-second-gps-iii-satellite/
  38. 1 point
TAU-0707 series GNSS module. (Photo: Allystar)

Allystar Technology Co. Ltd. has launched its smallest multi-band multi-GNSS module, the TAU-0707 series. Within its 7.6 x 7.6 millimeter footprint, the TAU-0707 series module supports the major GNSS constellations (GPS / Galileo / GLONASS / BeiDou / QZSS / IRNSS) and all civil bands (L1, L2, L5, L6).

As the latest addition to Allystar's GNSS portfolio, the TAU-0707 series module is a concurrent multi-band multi-GNSS receiver embedded with a Cynosure III single-die standalone positioning chipset, which offers multi-frequency measurements to improve positioning accuracy and simplifies integration for third-party applications, said Shi Xian Yang, Allystar marketing manager. Moreover, Allystar also provides a built-in low-noise amplifier in the TAU-1010 series module, which gives the module improved RF sensitivity and exceptional acquisition and tracking performance even in weak-signal areas.

With more and more satellites supporting L1/L5 signals, Allystar offers two modules that fully support all civil signals on the L5 band for the standalone market. The TAU1206-0707 and TAU1205-1010 are expected to be better at multipath mitigation, mainly due to the higher chipping rate of L5 signals relative to the L1 C/A code. (Image caption: L1/L5 band module for the standalone market.)

For professional applications, the module TAU1303-0707 comes with built-in support for the standard RTCM protocol (MSM), supporting multi-band multi-system high-precision raw data output, including pseudorange, phase range, Doppler, and SNR, for any kind of third-party integration and application. (Image caption: Module with raw data output for the professional market.)

The Allystar TAU series module offers superior accuracy thanks to its onboard 26-MHz temperature-compensated crystal oscillator, and a reduced time to first fix relying on its dedicated 32-kHz real-time clock oscillator. Based on the 40-nm manufacturing process of the Cynosure III GNSS chipset, it comes with very low power consumption at less than 40 mA.

According to the company, engineering samples and a reference design of the Allystar TAU-0707 and TAU-1010 series modules will be available in April.

source: http://www.allystar.com/en/index.php?g=&m=news&a=newsinfo&id=32
  39. 1 point
Really sad news.

The WorldWind team would like to inform you that starting April 5, 2019, the NASA WorldWind project will be suspended. All the WorldWind servers providing elevation and imagery will be unavailable. While you can still download the SDKs from GitHub, there will be no technical support. If you have questions and/or concerns, please feel free to email: [email protected]

Update on March 21, 2019 – Answers to common questions about the suspension are available in the NASA WorldWind Project Suspension FAQ.

source: https://worldwind.arc.nasa.gov/news/2019-03-08-suspension-notice/
  40. 1 point
You may find such NDVI data via a satellite imagery service called LandViewer. This tool has a vast database of satellite imagery that is publicly available and updated on a regular basis. You may apply any index you need to analyze your area of interest, or create an index of your own. Besides that, there are ready-made tools for obtaining multispectral indices, flexible processing of data over an AOI, elementary clustering, a raster calculator, visualization of scenes in 3D using digital elevation models, change detection based on multi-temporal multispectral analysis, as well as ready-made animations of changes in terrain, and much more.

Here's a brief guide to the types of satellite data that can be found on LandViewer.

High-resolution satellite imagery:
SPOT 6, 7 (up to 1.5 m/pxl)
SPOT 5 (up to 2.5 m/pxl)
Pléiades 1A, 1B (up to 0.5 m/pxl)
KOMPSAT-2 (up to 1 m/pxl)
KOMPSAT-3A (up to 0.4 m/pxl)
KOMPSAT-3 (up to 0.5 m/pxl)
SuperView-1 (up to 0.5 m/pxl)
Both optical and radar data are available, with global coverage and a short revisit period that varies from 2 to 5 days.

Low & medium resolution imagery:
Landsat 4 - archive 1982-1993
Landsat 5 - archive 1984-2013
Landsat 7 - archive since 1999
MODIS - archive since 2012
Landsat 8 - archive since 2013
Sentinel-1 - archive since 2014
Sentinel-2 - archive since 2015

An example of such imagery can be seen below:
https://eos.com/landviewer/?lat=33.39447&lng=52.68974&z=11&side=R&slider-id=LV-TEM4-MTYz-MDM3-MjAx-MzM2-NExH-TjAw&slider-b=Red,Green,Blue&slider-anti&slider-pansharpening&id=LV-TEM4-MTYz-MDM3-MjAx-MzM2-NExH-TjAw&b=NIR,Red&expression=(B5-B4)%2F(B5%2BB4)&anti&pansharpening
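The NDVI expression in that link, (B5-B4)/(B5+B4), uses Landsat 8 band numbering, where band 5 is NIR and band 4 is red. For anyone who wants the same computation locally, here is a minimal sketch with numpy and rasterio; the file names are placeholders:

# Minimal sketch: NDVI = (NIR - Red) / (NIR + Red) from two single-band
# GeoTIFFs (Landsat 8: band 5 = NIR, band 4 = red). File names are
# placeholders for your own scene.
import numpy as np
import rasterio

with rasterio.open("LC08_B5.TIF") as nir_src, rasterio.open("LC08_B4.TIF") as red_src:
    nir = nir_src.read(1).astype("float64")
    red = red_src.read(1).astype("float64")
    profile = nir_src.profile

denom = nir + red
ndvi = np.where(denom > 0, (nir - red) / denom, 0.0)  # avoid divide-by-zero

profile.update(dtype="float64", count=1)
with rasterio.open("ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi, 1)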
  41. 1 point
Cool, but costly. I would rather recommend https://apps.sentinel-hub.com/eo-browser/?
  42. 1 point
Esri, the global leader in location intelligence, today announced the acquisition of indoo.rs GmbH, a world-leading provider of Indoor Positioning System (IPS) technology and an Esri partner. The indoo.rs software will become part of Esri's ArcGIS Indoors, a new mapping product that enables interactive indoor mapping of corporate facilities, retail and commercial locations, airports, hospitals, event venues, universities, and more. The acquisition will also provide users of Esri's ArcGIS platform with embedded IPS location services to support indoor mapping and analysis. The indoo.rs headquarters will also serve as a new Esri R&D center based in Vienna, Austria, focused on cutting-edge IPS capability.

The capability to accurately map, manage, navigate, and plan indoor spaces is a rapidly emerging market that promises to decrease costs, increase safety, and provide users of indoor spaces with a better workplace experience. ArcGIS Indoors does this by providing floor-aware 3D maps and focused apps to support a variety of workplace and facility users, including owner/operators, maintenance and service personnel, security staff, employees, and visitors.

"indoo.rs is a leading provider of IPS software and services, working with organizations across the globe such as international hub airports, major rail stations, and corporate headquarters, and I am excited to welcome the company to the Esri family," said Brian Cross, Esri director of professional services. "indoo.rs' technology, experience, and leadership in the IPS field will be of tremendous benefit to our customers who want to bring the power of GIS to indoor spaces."

The new Vienna-based Esri R&D center will also provide support for IPS within ArcGIS Indoors and across the ArcGIS line of products. Existing indoo.rs customers will now have access to ArcGIS, adding the most powerful GIS software to their indoor mapping uses.

"Becoming an integral part of Esri's product catalog allows us to continue the provision of our services at the highest professional level," said Bernd Gruber, founder of indoo.rs. "It also fosters new and exciting future developments as well as securing our leading-edge approach."

"We have seen the IPS market skyrocketing over the last few years," said Rainer Wolfsberger, CEO of indoo.rs, "and our enterprise customers showed a high demand for deep integration of IPS technology to release the benefits of such a solution at all levels of their organization."

The initial release of ArcGIS Indoors will include the acquired indoo.rs IPS capability to enable ArcGIS Indoors mobile apps to work with iBeacon-based IPS systems, which provide "blue dot" accuracy on mobile devices. ArcGIS Indoors supports other IPS formats, such as Apple's indoor positioning service, and will add support for other IPS providers in coming releases.

source: https://www.geospatialworld.net/news/esri-acquires-indoo-rs-and-announces-arcgis-indoors-release/
  43. 1 point
Taking a picture of a stranger has become easier, though ("Oh, I was just using the map, lady!!"). 😉
  44. 1 point
    Hi Darksabersan, the links are dead, could you reactivate and perhaps upload to a different site like Mega.nz, please? thanks
  45. 1 point
• Details geological-geophysical aspects of groundwater treatment
• Discusses regulatory legislation regarding groundwater utilization
• Serves as reference material for scientists in geology, geophysics and environmental studies
  46. 1 point
    Hi, Please check out this tool - https://www.whatiswhere.com, which can be very useful in your research. Features: * OpenStreetMap based search which allows you to apply more than 1 criteria at once * Negative conditions (e.g. you could search for areas where some type of POI does not exist) * Access to global postal code information * EXPORT RESULTS TO CSV, which can be then uploaded to your GIS * Re-use of search projects Thanks, Andrei, WhatIsWhere www.whatiswhere.com
  47. 1 point
    The Klencke Atlas is one of the world's biggest: it measures 176 x 231 cm when open. It takes its name from Joannes Klencke, who presented it to Charles II on his restoration to the British thrones in 1660. Its size and its 40 or so large wall maps from the Golden Age of Dutch mapmaking were supposed to suggest that it contained all the knowledge in the world. At another level, it was a bribe intended to spur the King into granting Klencke and his associates trading privileges and titles. Charles, who was a map enthusiast, appreciated the gift. He placed the atlas with his most precious possessions in his cabinet of curiosities, and Klencke was knighted. Later generations have benefited too. The binding has protected the wall maps which have survived for us to enjoy - unlike the vast majority of other wall maps which, exposed to light, heat and dirt when hung on walls, have crumbled away. visit : https://www.bl.uk/collection-items/klencke-atlas
  48. 1 point
I am getting some images of an oil palm estate taken from a DIY drone. The images are stitched into a mosaic. Has anyone done automated tree counting on these images? I have seen some examples using eCognition, but that was with multi-spectral images.
  49. 1 point
Oil palm tree counting is a fairly simple application. You can use the following methods:

1. Ravine extraction (Honda Kyioshi): based on the fact that the top of the canopy is the brightest point, while the ravine (the valley between two trees) looks darker. Doing this, you will be able to locate the central point of each tree; the counting is then simple work afterwards.

2. Local threshold texture matching: requires some programming. Take individual trees as samples (with different canopy sizes) and then match them over the whole image. The matching algorithm can be image correlation or histogram matching. I already did it with VB programming and it works perfectly. By doing this, you are also able to map individual trees.

All of this is based on an RGB image. For better results, mask the plantation areas before running the procedure. Of course there are several other options. If you want to run the procedure automatically, programming skill is needed.
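Here is a minimal sketch of method 1 in Python with scikit-image (not the VB program mentioned above): smooth the mosaic so each crown keeps a single bright apex, then count local maxima. The blur sigma, minimum crown spacing, and brightness threshold are assumptions to tune per image:

# Minimal sketch of method 1: each palm crown is treated as a local
# brightness maximum. min_distance (expected crown spacing in pixels),
# sigma, and threshold_abs are assumptions to tune. File name is a
# placeholder; the mosaic is assumed to be a plain RGB image.
import numpy as np
from skimage import io, color, filters
from skimage.feature import peak_local_max

rgb = io.imread("plantation_mosaic.tif")
gray = color.rgb2gray(rgb)                  # crown tops appear brightest
smooth = filters.gaussian(gray, sigma=3)    # one bright apex per crown

peaks = peak_local_max(smooth, min_distance=15, threshold_abs=0.3)
print("estimated tree count:", len(peaks))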
  50. 1 point
Look at the MaxMax website; they do NIR conversions on cameras that can fit on drones. I did a Canon compact with them, 100%. Try ImageJ (free software) and run the NDVI settings; you will get some data. Or use a blue filter on your camera and you will get usable images for NDVI. I got good data on desert palms from RGB images in eCognition, but you have to play with the analysis a bit! Having a NIR layer will make life easy for you. Good luck.