
Leaderboard


Popular Content

Showing content with the highest reputation since 09/15/2018 in all areas

  1. 4 points
    Here is an interesting review: http://www.50northspatial.org/uav-image-processing-software-photogrammetry/ 😉😊
  2. 3 points
    found this interesting tutorial : For the last couple years I have been testing out the ever-improving support for parallel query processing in PostgreSQL, particularly in conjunction with the PostGIS spatial extension. Spatial queries tend to be CPU-bound, so applying parallel processing is frequently a big win for us. Initially, the results were pretty bad. With PostgreSQL 10, it was possible to force some parallel queries by jimmying with global cost parameters, but nothing would execute in parallel out of the box. With PostgreSQL 11, we got support for parallel aggregates, and those tended to parallelize in PostGIS right out of the box. However, parallel scans still required some manual alterations to PostGIS function costs, and parallel joins were basically impossible to force no matter what knobs you turned. With PostgreSQL 12 and PostGIS 3, all that has changed. All standard query types now readily parallelize using our default costings. That means parallel execution of: Parallel sequence scans, Parallel aggregates, and Parallel joins!! TL;DR: PostgreSQL 12 and PostGIS 3 have finally cracked the parallel spatial query execution problem, and all major queries execute in parallel without extraordinary interventions. What Changed With PostgreSQL 11, most parallelization worked, but only at much higher function costs than we could apply to PostGIS functions. With higher PostGIS function costs, other parts of PostGIS stopped working, so we were stuck in a Catch-22: improve costing and break common queries, or leave things working with non-parallel behaviour. For PostgreSQL 12, the core team (in particular Tom Lane) provided us with a sophisticated new way to add spatial index functionality to our key functions. With that improvement in place, we were able to globally increase our function costs without breaking existing queries. That in turn has signalled the parallel query planning algorithms in PostgreSQL to parallelize spatial queries more aggressively. Setup In order to run these tests yourself, you will need: PostgreSQL 12 PostGIS 3.0 You’ll also need a multi-core computer to see actual performance changes. I used a 4-core desktop for my tests, so I could expect 4x improvements at best. The setup instructions show where to download the Canadian polling division data used for the testing: pd a table of ~70K polygons pts a table of ~70K points pts_10 a table of ~700K points pts_100 a table of ~7M points We will work with the default configuration parameters and just mess with the max_parallel_workers_per_gather at run-time to turn parallelism on and off for comparison purposes. When max_parallel_workers_per_gather is set to 0, parallel plans are not an option. max_parallel_workers_per_gather sets the maximum number of workers that can be started by a single Gather or Gather Merge node. Setting this value to 0 disables parallel query execution. Default 2. Before running tests, make sure you have a handle on what your parameters are set to: I frequently found I accidentally tested with max_parallel_workers set to 1, which will result in two processes working: the leader process (which does real work when it is not coordinating) and one worker. show max_worker_processes; show max_parallel_workers; show max_parallel_workers_per_gather; Aggregates Behaviour for aggregate queries is still good, as seen in PostgreSQL 11 last year. SET max_parallel_workers = 8; SET max_parallel_workers_per_gather = 4; EXPLAIN ANALYZE SELECT Sum(ST_Area(geom)) FROM pd; Boom! 
We get a 3-worker parallel plan and execution about 3x faster than the sequential plan. Scans The simplest spatial parallel scan adds a spatial function to the target list or filter clause. SET max_parallel_workers = 8; SET max_parallel_workers_per_gather = 4; EXPLAIN ANALYZE SELECT ST_Area(geom) FROM pd; Boom! We get a 3-worker parallel plan and execution about 3x faster than the sequential plan. This query did not work out-of-the-box with PostgreSQL 11. Gather (cost=1000.00..27361.20 rows=69534 width=8) Workers Planned: 3 -> Parallel Seq Scan on pd (cost=0.00..19407.80 rows=22430 width=8) Joins Starting with a simple join of all the polygons to the 100 points-per-polygon table, we get: SET max_parallel_workers_per_gather = 4; EXPLAIN SELECT * FROM pd JOIN pts_100 pts ON ST_Intersects(pd.geom, pts.geom); Right out of the box, we get a parallel plan! No amount of begging and pleading would get a parallel plan in PostgreSQL 11 Gather (cost=1000.28..837378459.28 rows=5322553884 width=2579) Workers Planned: 4 -> Nested Loop (cost=0.28..305122070.88 rows=1330638471 width=2579) -> Parallel Seq Scan on pts_100 pts (cost=0.00..75328.50 rows=1738350 width=40) -> Index Scan using pd_geom_idx on pd (cost=0.28..175.41 rows=7 width=2539) Index Cond: (geom && pts.geom) Filter: st_intersects(geom, pts.geom) The only quirk in this plan is that the nested loop join is being driven by the pts_100 table, which has 10 times the number of records as the pd table. The plan for a query against the pt_10 table also returns a parallel plan, but with pd as the driving table. EXPLAIN SELECT * FROM pd JOIN pts_10 pts ON ST_Intersects(pd.geom, pts.geom); Right out of the box, we still get a parallel plan! No amount of begging and pleading would get a parallel plan in PostgreSQL 11 Gather (cost=1000.28..85251180.90 rows=459202963 width=2579) Workers Planned: 3 -> Nested Loop (cost=0.29..39329884.60 rows=148129988 width=2579) -> Parallel Seq Scan on pd (cost=0.00..13800.30 rows=22430 width=2539) -> Index Scan using pts_10_gix on pts_10 pts (cost=0.29..1752.13 rows=70 width=40) Index Cond: (geom && pd.geom) Filter: st_intersects(pd.geom, geom) source: http://blog.cleverelephant.ca/2019/05/parallel-postgis-4.html
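    For a quick scripted comparison of the sequential vs parallel timings, something like the following works - a minimal sketch assuming psycopg2 is installed, a local database named "parallel_test" (hypothetical name), and the pd table from the post:

    # Compare sequential vs parallel timing of the aggregate query by toggling
    # max_parallel_workers_per_gather at run-time (0 disables parallel plans).
    import time
    import psycopg2

    conn = psycopg2.connect("dbname=parallel_test")  # assumed connection string
    cur = conn.cursor()

    for workers in (0, 4):
        cur.execute("SET max_parallel_workers_per_gather = %s", (workers,))
        t0 = time.time()
        cur.execute("SELECT Sum(ST_Area(geom)) FROM pd")
        total = cur.fetchone()[0]
        print("workers={}: {:.2f}s (sum={})".format(workers, time.time() - t0, total))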
  3. 3 points
    Hello everyone! This is a quick Python code which I wrote to batch download and preprocess Sentinel-1 images of a given time. Sentinel images have very good resolution, which makes it obvious that they are huge in size. Since I didn't want to waste all day preparing them for my research, I decided to write this code, which runs all night and gives a nice image-set the following morning.

    import os
    import datetime
    import gc
    import glob

    import snappy
    from snappy import ProductIO
    from sentinelsat import SentinelAPI, geojson_to_wkt, read_geojson


    class sentinel1_download_preprocess():
        def __init__(self, input_dir, date_1, date_2, query_style, footprint,
                     lat=24.84, lon=90.43, download=False):
            self.input_dir = input_dir
            self.date_start = datetime.datetime.strptime(date_1, "%d%b%Y")
            self.date_end = datetime.datetime.strptime(date_2, "%d%b%Y")
            self.query_style = query_style
            self.footprint = geojson_to_wkt(read_geojson(footprint))
            self.lat = lat
            self.lon = lon
            self.download = download

            # configurations
            self.api = SentinelAPI('scihub_username', 'scihub_passwd',
                                   'https://scihub.copernicus.eu/dhus')
            self.producttype = 'GRD'           # SLC, GRD, OCN
            self.orbitdirection = 'ASCENDING'  # ASCENDING, DESCENDING
            self.sensoroperationalmode = 'IW'  # SM, IW, EW, WV

        def sentinel1_download(self):
            if self.query_style == 'coordinate':
                download_candidate = self.api.query(
                    'POINT({0} {1})'.format(self.lon, self.lat),
                    date=(self.date_start, self.date_end),
                    producttype=self.producttype,
                    orbitdirection=self.orbitdirection,
                    sensoroperationalmode=self.sensoroperationalmode)
            elif self.query_style == 'footprint':
                download_candidate = self.api.query(
                    self.footprint,
                    date=(self.date_start, self.date_end),
                    producttype=self.producttype,
                    orbitdirection=self.orbitdirection,
                    sensoroperationalmode=self.sensoroperationalmode)
            else:
                print("Define query attribute")
                return

            # list the found products and their sizes
            title_found_sum = 0
            for key, value in download_candidate.items():
                for k, v in value.items():
                    if k == 'title':
                        title_info = v
                        title_found_sum += 1
                    elif k == 'size':
                        print("title: " + title_info + " | " + v)
            print("Total found " + str(title_found_sum) + " title of "
                  + str(self.api.get_products_size(download_candidate)) + " GB")

            os.chdir(self.input_dir)
            if self.download:
                # skip the download if zip files are already in the folder
                if not glob.glob(self.input_dir + "\\*.zip"):
                    self.api.download_all(download_candidate)
                else:
                    print("Nothing to download")
            else:
                print("Escaping download")
            # proceed to processing after download is complete
            self.sentinel1_preprocess()

        def sentinel1_preprocess(self):
            # get snappy operators
            snappy.GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()
            # HashMap key-value pairs
            HashMap = snappy.jpy.get_type('java.util.HashMap')

            for folder in glob.glob(self.input_dir + "\\*"):
                gc.enable()
                if folder.endswith(".zip"):
                    timestamp = folder.split("_")[5]
                    sentinel_image = ProductIO.readProduct(folder)
                    if self.date_start <= datetime.datetime.strptime(timestamp[:8], "%Y%m%d") <= self.date_end:
                        # add orbit file
                        self.sentinel1_preprocess_orbit_file(timestamp, sentinel_image, HashMap)
                        # remove border noise
                        self.sentinel1_preprocess_border_noise(timestamp, HashMap)
                        # remove thermal noise
                        self.sentinel1_preprocess_thermal_noise_removal(timestamp, HashMap)
                        # calibrate image to output to Sigma and dB
                        self.sentinel1_preprocess_calibration(timestamp, HashMap)
                        # TOPSAR deburst for SLC images
                        if self.producttype == 'SLC':
                            self.sentinel1_preprocess_topsar_deburst_SLC(timestamp, HashMap)
                        # multilook
                        self.sentinel1_preprocess_multilook(timestamp, HashMap)
                        # subset using a WKT of the study area
                        self.sentinel1_preprocess_subset(timestamp, HashMap)
                        # finally terrain correction, can use local data but went for the default
                        self.sentinel1_preprocess_terrain_correction(timestamp, HashMap)
                        # break  # try this if you want to check the result one by one

        def sentinel1_preprocess_orbit_file(self, timestamp, sentinel_image, HashMap):
            start_time_processing = datetime.datetime.now()
            orb = self.input_dir + "\\orb_" + timestamp
            if not os.path.isfile(orb + ".dim"):
                parameters = HashMap()
                orbit_param = snappy.GPF.createProduct("Apply-Orbit-File", parameters, sentinel_image)
                ProductIO.writeProduct(orbit_param, orb, 'BEAM-DIMAP')  # BEAM-DIMAP, GeoTIFF-BigTiff
                print("orbit file added: " + orb + " | took: "
                      + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
            else:
                print("file exists - " + orb)

        def sentinel1_preprocess_border_noise(self, timestamp, HashMap):
            start_time_processing = datetime.datetime.now()
            border = self.input_dir + "\\bordr_" + timestamp
            if not os.path.isfile(border + ".dim"):
                parameters = HashMap()
                border_param = snappy.GPF.createProduct(
                    "Remove-GRD-Border-Noise", parameters,
                    ProductIO.readProduct(self.input_dir + "\\orb_" + timestamp + ".dim"))
                ProductIO.writeProduct(border_param, border, 'BEAM-DIMAP')
                print("border noise removed: " + border + " | took: "
                      + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
            else:
                print("file exists - " + border)

        def sentinel1_preprocess_thermal_noise_removal(self, timestamp, HashMap):
            start_time_processing = datetime.datetime.now()
            thrm = self.input_dir + "\\thrm_" + timestamp
            if not os.path.isfile(thrm + ".dim"):
                parameters = HashMap()
                thrm_param = snappy.GPF.createProduct(
                    "ThermalNoiseRemoval", parameters,
                    ProductIO.readProduct(self.input_dir + "\\bordr_" + timestamp + ".dim"))
                ProductIO.writeProduct(thrm_param, thrm, 'BEAM-DIMAP')
                print("thermal noise removed: " + thrm + " | took: "
                      + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
            else:
                print("file exists - " + thrm)

        def sentinel1_preprocess_calibration(self, timestamp, HashMap):
            start_time_processing = datetime.datetime.now()
            calib = self.input_dir + "\\calib_" + timestamp
            if not os.path.isfile(calib + ".dim"):
                parameters = HashMap()
                parameters.put('outputSigmaBand', True)
                parameters.put('outputImageScaleInDb', False)
                calib_param = snappy.GPF.createProduct(
                    "Calibration", parameters,
                    ProductIO.readProduct(self.input_dir + "\\thrm_" + timestamp + ".dim"))
                ProductIO.writeProduct(calib_param, calib, 'BEAM-DIMAP')
                print("calibration complete: " + calib + " | took: "
                      + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
            else:
                print("file exists - " + calib)

        def sentinel1_preprocess_topsar_deburst_SLC(self, timestamp, HashMap):
            start_time_processing = datetime.datetime.now()
            deburst = self.input_dir + "\\dburs_" + timestamp
            if not os.path.isfile(deburst + ".dim"):
                parameters = HashMap()
                deburst_param = snappy.GPF.createProduct(
                    "TOPSAR-Deburst", parameters,
                    ProductIO.readProduct(self.input_dir + "\\calib_" + timestamp + ".dim"))
                ProductIO.writeProduct(deburst_param, deburst, 'BEAM-DIMAP')
                print("deburst complete: " + deburst + " | took: "
                      + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
            else:
                print("file exists - " + deburst)

        def sentinel1_preprocess_multilook(self, timestamp, HashMap):
            start_time_processing = datetime.datetime.now()
            multi = self.input_dir + "\\multi_" + timestamp
            if not os.path.isfile(multi + ".dim"):
                parameters = HashMap()
                multi_param = snappy.GPF.createProduct(
                    "Multilook", parameters,
                    ProductIO.readProduct(self.input_dir + "\\calib_" + timestamp + ".dim"))
                ProductIO.writeProduct(multi_param, multi, 'BEAM-DIMAP')
                print("multilook complete: " + multi + " | took: "
                      + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
            else:
                print("file exists - " + multi)

        def sentinel1_preprocess_subset(self, timestamp, HashMap):
            start_time_processing = datetime.datetime.now()
            subset = self.input_dir + "\\subset_" + timestamp
            if not os.path.isfile(subset + ".dim"):
                WKTReader = snappy.jpy.get_type('com.vividsolutions.jts.io.WKTReader')
                # converting a shapefile to GeoJSON and WKT is easy with any free online tool
                wkt = "POLYGON((92.330290184197 20.5906091141114,89.1246637610338 21.6316051481971," \
                      "89.0330319081811 21.7802436586492,88.0086282580443 24.6678836192818,88.0857830091018 " \
                      "25.9156771178278,88.1771488779853 26.1480664053835,88.3759125970998 26.5942658997298," \
                      "88.3876586919721 26.6120432770312,88.4105534167129 26.6345128356038,89.6787084683935 " \
                      "26.2383305017275,92.348481691233 25.073636976939,92.4252199249342 25.0296592837972," \
                      "92.487261172615 24.9472465376954,92.4967290851295 24.902213855393,92.6799861774377 " \
                      "21.2972058618174,92.6799346581579 21.2853347419811,92.330290184197 20.5906091141114))"
                geom = WKTReader().read(wkt)
                parameters = HashMap()
                parameters.put('geoRegion', geom)
                subset_param = snappy.GPF.createProduct(
                    "Subset", parameters,
                    ProductIO.readProduct(self.input_dir + "\\multi_" + timestamp + ".dim"))
                ProductIO.writeProduct(subset_param, subset, 'BEAM-DIMAP')
                print("subset complete: " + subset + " | took: "
                      + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
            else:
                print("file exists - " + subset)

        def sentinel1_preprocess_terrain_correction(self, timestamp, HashMap):
            start_time_processing = datetime.datetime.now()
            terr = self.input_dir + "\\terr_" + timestamp
            if not os.path.isfile(terr + ".dim"):
                parameters = HashMap()
                # parameters.put('demResamplingMethod', 'NEAREST_NEIGHBOUR')
                # parameters.put('imgResamplingMethod', 'NEAREST_NEIGHBOUR')
                # parameters.put('pixelSpacingInMeter', 10.0)
                terr_param = snappy.GPF.createProduct(
                    "Terrain-Correction", parameters,
                    ProductIO.readProduct(self.input_dir + "\\subset_" + timestamp + ".dim"))
                ProductIO.writeProduct(terr_param, terr, 'BEAM-DIMAP')
                print("terrain corrected: " + terr + " | took: "
                      + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
            else:
                print("file exists - " + terr)


    input_dir = "path_to_project_folder\\Sentinel_1"
    start_date = '01Mar2019'
    end_date = '10Mar2019'
    query_style = 'footprint'  # 'footprint' to use a GeoJSON, 'coordinate' to use a lat-lon
    footprint = 'path_to_project_folder\\bd_bbox.geojson'
    lat = 26.23
    lon = 88.56

    # set the last argument to True to download; the default is False
    sar = sentinel1_download_preprocess(input_dir, start_date, end_date, query_style,
                                        footprint, lat, lon, True)
    sar.sentinel1_download()

    The geojson file is created from a very generalised shapefile of Bangladesh using ArcGIS Pro. There are a lot of free online tools to convert a shapefile to GeoJSON and WKT. Notice that the code will skip the download if the files are already there but will keep the processing on, so comment out the self.sentinel1_preprocess() call at the end of sentinel1_download() when necessary. Updated the code almost completely.
    The processing steps for raw Sentinel-1 files used here are not the most generic way; note that there is no single authoritative workflow for this. Different research requires different steps to prepare raw data, so you will need to follow your own. Also published at clubgis.
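    If you prefer to create the GeoJSON footprint locally rather than with an online converter, here is a minimal sketch assuming geopandas is installed and a generalised boundary shapefile bd.shp (hypothetical path):

    # Convert a shapefile to GeoJSON for use as the sentinelsat footprint,
    # and print a simple WKT outline for the snappy Subset step.
    import geopandas as gpd

    gdf = gpd.read_file("path_to_project_folder/bd.shp")
    gdf.to_file("path_to_project_folder/bd_bbox.geojson", driver="GeoJSON")
    print(gdf.geometry.unary_union.convex_hull.wkt)  # coarse WKT of the outline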
  4. 3 points
    News release

    April 1, 2019, Saint-Hubert, Quebec – The Canadian Space Agency and the Canada Centre for Mapping and Earth Observation are making RADARSAT-1 synthetic aperture radar images of Earth available to researchers, industry and the public at no cost. The 36,500 images are available through the Government of Canada's Earth Observation Data Management System.

    The RADARSAT-1 dataset is valuable for testing and developing techniques to reveal patterns, trends and associations that researchers may have missed when RADARSAT-1 was in operation. Access to these images will allow Canadians to make comparisons over time, for example, of sea ice cover, forest growth or deforestation, seasonal changes and the effects of climate change, particularly in Canada's North.

    This image release initiative is part of Canada's Open Government efforts to encourage novel Big Data Analytic and Data Mining activities by users. Canada's new Space Strategy places priority on acquiring and using space-based data to support science excellence, innovation and economic growth.

    Quick facts

    • The RADARSAT Constellation Mission, scheduled for launch in May 2019, builds on the legacy of RADARSAT-1 and RADARSAT-2, and on Canada's expertise and leadership in Earth observation from space.
    • RADARSAT-1 launched in November 1995. It operated for 17 years, well over its five-year life expectancy, during which it orbited Earth 90,828 times, travelling over 2 billion kilometres. It was Canada's first Earth observation satellite. RADARSAT-1 images supported relief operations in 244 disaster events.
    • RADARSAT-2 launched in December 2007 and is still operational today. This project represents a unique collaboration between government and industry. MDA, a Maxar company, owns and operates the satellite and ground segment. The Canadian Space Agency helped to fund the construction and launch of the satellite. It recovers this investment through the supply of RADARSAT-2 data to the Government of Canada during the lifetime of the mission.
    • Users can download these images through the Earth Observation Data Management System of the Canada Centre for Mapping and Earth Observation, a division of Natural Resources Canada (NRCan). NRCan is responsible for the long-term archiving and distribution of the images as well as downlinking of satellite data at its ground stations.

    source: https://www.canada.ca/en/space-agency/news/2019/03/open-data-over-36000-historical-radarsat-1-satellite-images-of-the-earth-now-available-to-the-public.html
  5. 3 points
    ArcGIS Excalibur 1.0 is a premium web application for ArcGIS Enterprise 10.7 that provides users with tools and capabilities in a project-based environment that streamlines image analysis and structured observation management. Interested in working with imagery in a modern, web-based experience? Here's a look at some of the features ArcGIS Excalibur 1.0 has to offer:

    Search for Imagery
    ArcGIS Excalibur makes it easy to search for and discover imagery available to you within your organization through a number of experiences. You can connect directly to an imagery layer, an image service URL, or even through the imagery catalog search. The imagery catalog search allows you to quickly search for imagery layers over areas of interest to discover and queue images for further use.

    Work with Imagery
    Once you have located the imagery of interest, you can easily connect to the imagery exploitation canvas, where you can utilize a wide variety of tools to begin working with your imagery. The imagery exploitation canvas allows you to view your imagery on top of a default basemap, where the imagery is automatically orthorectified and aligned with the map. The exploitation canvas also enables you to simultaneously view the same image in a more focused manner as it was captured, in its native perspective.

    Display Tools
    Optimizing imagery to get the most value out of each image pixel is a breeze with the ArcGIS Excalibur display tools. The image display tools include image renderers, filters, the ability to change band combinations, and even settings like DRA and gamma. Settings to change image transparency and compression are also included.

    Exploitation Tools
    Ever need to highlight key areas of interest through markup, labeling, and measurement? Through the markup tools, you can create simple graphics on top of your imagery using text and shape elements to call attention to areas of interest through outline, fill, transparency, and much more. The measurement tool allows you to measure horizontal and vertical distances, areas, and feature locations on an image.

    Export Tools
    The exploitation results saved in an image project can be easily shared using the export tools. The create presentation tool exports your current view directly to a Microsoft PowerPoint presentation, along with the metadata of the imagery.

    Introducing an Imagery Project
    ArcGIS Excalibur also introduces the concept of an imagery project to help streamline imagery workflows by leveraging the ArcGIS platform. An ArcGIS Excalibur imagery project is a dynamic way to organize resources, tools, and workflows required to complete an image-based task. An imagery project can contain geospatial reference layers and a set of tools for focused image analysis and structured observation management workflows. Content created within imagery projects can be shared and made available to your organization to leverage in downstream analysis and shared information products.
  6. 3 points
    • Details geological-geophysical aspects of groundwater treatment
    • Discusses regulatory legislation regarding groundwater utilization
    • Serves as a reference for scientists in geology, geophysics and environmental studies
  7. 2 points
    Not necessarily the Excel environment, but: https://github.com/orbisgis/h2gis/wiki/4.2-LibreOffice
  8. 2 points
    This is an interesting topic from a not-so-old webpage. I was searching for use cases of blockchain in a geospatial context and found this. The context is still challenging, but very noteworthy.

    What is a blockchain and how is it relevant for geospatial applications? (By Jonas Ellehauge, awesome map tools, Norway)

    A blockchain is an immutable trustless registry of entries, hosted on an open distributed network of computers (called nodes). It is potentially safer and cheaper than traditional centralised databases, is resilient to attacks, enhances transparency and accountability and puts people in control of their own data. Blockchain technology is already being used in some geospatial applications, as explained here. As an immutable registry for transactions of digital tokens, blockchain is suitable for geospatial applications involving data that is sensitive or a public good, autonomous devices and smart contracts.

    Use Cases
    The use cases are discussed further below. I have given a few short talks about this topic at various conferences, most recently at the international FOSS4G conference in Bonn, Germany, 2016.

    Public-good Data

    Open Data is Still Centralised Data
    Over the past two decades, I have seen how 'public-good' geospatial data has generally become much easier to get hold of, having originally been very inaccessible to most people. Gradually, the software to display and process the data became cheaper or even free, but the data itself – data that people had already paid for through their taxes – remained inaccessible. Some national mapping institutions and cadastres began distributing the data via the internet, although mostly with a price tag. Only in recent years have a few countries in Europe made public map data freely accessible. In the meantime, projects like OpenStreetMap have emerged in order to meet people's need for open data. It is hardly a surprise, then, that a myriad of new apps, mock-ups and business cases emerge in a region shortly after data is made available to the public there.

    Truly Public Open Data
    One of the reasons that this data has remained inaccessible for so long is that it is collected and distributed through a centralised organisation. A small group of people manage enormous repositories of geospatial data and can restrict or grant access to it. As I see it, this is where blockchain and related technologies like IPFS can enable people to build systems where the data is inherently public, no one controls it, anyone can access it, and anyone can review the full history of contributions to the data.

    Would it be free of charge to use data from such a system? Who would pay for it? I guess time will tell which business model is the most sustainable in that respect. OpenStreetMap is free to use, it is immensely popular and yet people gladly contribute to it – so who pays the cost for OSM? Bear in mind that there's no such thing as 'free data'. For example, the 'free' open data in Denmark today is paid for through taxes. So, even if it would cost a little to use the blockchain-based data, that wouldn't be so different from now – just that no one would be able to restrict access to the data, plus the open nature of competing nodes and contributors will minimise the costs.

    Autonomous Devices & Apps
    Uber and Airbnb are examples of consumer applications that rely on geospatial data and processing. They represent a centralised approach where the middleman owns and controls the data and charges a significant fee for connecting clients and providers with each other. If such apps were replaced by distributed peer-to-peer systems, they could be cheaper and give their users full control of their data. There is already such an alternative to Uber called Arcade.City. A peer-to-peer market app like OpenBazaar may also benefit from geospatial components with regards to e.g. search and logistics. Such autonomous apps may currently have to rely on third parties for their geospatial components – e.g. Google Maps, Mapbox, OpenStreetMap, etc. With access to truly publicly distributed data as described above, such apps would be even more reliable and cheaper to run.

    An autonomous device such as a drone or a self-driving car inherently runs an autonomous application, so these two concepts are heavily intertwined. There's no doubt that self-navigating cars and drones will be a growing market in the near future. Uber and Tesla have big ambitions regarding cars, drones are being designed for delivery of consumer products (Amazon), and drone-based emergency response (drone defibrillator) and imaging (automatic selfie drone 'Lily') applications are emerging. Again, distributed peer-to-peer apps could cut out the middleman and reliance on third parties for their navigation and other geospatial components.

    Land Ownership

    What is Property?
    After some years in the GIS software industry, I realised that a very large part of my work revolved around cadastres/parcels and other administrative borders plus technical base maps featuring roads, buildings, etc. In view of my background in physical geography I thought that was pretty boring stuff and I dreamt about creating maps and applications that involved temperatures, wind, currents, salinity, terrain models, etc., because it felt more 'real'. I gradually realised that something about administrative data was nagging me – as if it didn't actually represent reality. Lately, I have taken an interest in philosophy about human interaction, voluntary association and self-ownership. It turns out that property is a moral, philosophical concept of assets acquired through voluntary transactions or homesteading. This perspective stretches at least as far back as John Locke in the 17th century. Such justly acquired property is reality, whereas law, governance services and computer code are systems that attempt to model reality. When such systems don't fit reality, the system is wrong and should be dismissed, possibly adjusted or replaced.

    Land Ownership
    For the vast majority of people in many developing countries, there is no mapping of parcels or proof of ownership available to the actual landowners. Christiaan Lemmen, an expert on cadastres, has experience from field work to map parcels in developing countries such as Nigeria, Liberia, etc., where corruption can be a big challenge within land administration. In his experience, however, people mostly agree on who owns what in their local communities. These people often have a need for proof of identity and proof of ownership for their justly acquired land in order to generate wealth, invest in their future and prevent fraud – while they often face problems with inefficient, expensive or corrupt government services. Ideally, we could build inexpensive, reliable and easy-to-use blockchain-based systems that will enable people to map and register their land together with their neighbours – without involving any government officials, lawyers or other middlemen.

    Geodesic Grids
    It has been suggested to use geodesic grids of discrete cells to register land ownership on a blockchain. Such cells can be shaped, e.g. as squares, triangles, pentagons, hexagons, etc., and each cell has a unique identifier. In a traditional cadastral system, parcels are represented with flexible polygons, which allows users to register any possible shape of a parcel. Although a grid of discrete cells doesn't allow such flexible polygons, it has an advantage in this case: each digital token on the blockchain (let's call it a 'Landcoin') can represent one unique cell in the grid. Hence, whoever owns a particular Landcoin owns the corresponding piece of land. Owning such a Landcoin means possessing the private encryption key that controls it – which is how other cryptocurrencies work. In order to represent complex and high-resolution geometries, it is preferable to use a grid which is infinitely sub-divisible so that ever-smaller triangles, hexagons or squares, etc., can be tied together to represent any piece of land. A digital token can also be infinitely sub-divisible. For comparison, the smallest unit of a Bitcoin is currently a 100-millionth – aka a 'Satoshi'. If needed, the core software could be upgraded to support even smaller units.

    What is a Blockchain?
    A blockchain is an immutable trustless registry of entries, hosted on an open distributed network of computers (called nodes). It is potentially safer and cheaper than traditional centralised databases, is resilient to attacks, enhances transparency and accountability and puts people in control of their own data.

    Safer – because no one controls all the data (known as root privilege in existing databases). Each entry has its own pair of public and private encryption keys and only the holder of the private key can unlock the entry and transfer it to someone else.
    Immutable – because each block of entries (added every 1-10 minutes) carries a unique hash 'fingerprint' of the previous block. Hence, older blocks cannot be tampered with.
    Cheaper – because anyone can set up a node and get paid in digital tokens (e.g. Bitcoin or Ether) for hosting a blockchain. This ensures that competition between nodes will minimise the cost of hosting it. It also saves the costs of massive security layers that otherwise apply to servers with sensitive data – this is because of the no-root-privilege security model and, with old entries being immutable, there's little need to protect them.
    Resilient – because there is no single point of failure, there's practically nothing to attack. In order to compromise a blockchain, you'd have to hack each individual user one by one in order to get hold of their private encryption keys that give access to that user's data only. Another option is to run over 50% of the nodes, which is virtually impossible and economically impractical.
    Transparency and accountability – the fact that existing entries cannot be tampered with makes a blockchain a transparent source of truth and history for your application. The public nature of it makes it easy to hold people accountable for their activities.
    Control – the immutable and no-root-privilege character puts each user in full control of his/her own data using the private encryption keys. This leads to real peer-to-peer interaction without any middleman and without an administrator that can deny users access to their data.
    Trustless – because each user fully controls his/her own data, users can safely interact without knowing or trusting each other and without any trusted third parties.

    Smart Contracts and DAPPs
    A blockchain can be more than a passive registry of entries or transactions. The original Bitcoin blockchain supports limited scripting allowing for programmable transactions and smart contracts – e.g. where specified criteria must be fulfilled leading to transactions automatically taking place. Possibly the most popular alternative to Bitcoin is Ethereum, which is a multi-purpose blockchain with a so-called 'Turing complete' programming interface, which allows developers to create virtually any imaginable application on this platform. Such applications are referred to as decentralised autonomous applications (DAPPs) and are virtually impossible for third parties to stop or censor. [1]

    IPFS
    IPFS is a distributed file system and web protocol, which can complement or even replace HTTP. Instead of referring to files by their location on a host or IP address, it refers to files by their content. This means that when requested, IPFS will return the content from the nearest possible or even multiple computers rather than from a central server. That could be on the computer next to you, on your local network or somewhere in the neighbourhood.

    Jonas Ellehauge is an expert on geospatial software, GIS and web development, enthusiastic about open source, Linux and UI/UX. Ellehauge is passionate about science, philosophy, entrepreneurship, economy and communication. His background in physical geography provides extensive knowledge of spatial analyses and spatial problem solving.
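    To make the geodesic-grid idea above concrete, here is an illustrative sketch (not a scheme used by any existing registry) of a quadtree cell identifier over lat/lon: each extra digit quarters the cell, so identifiers are infinitely sub-divisible and child cells share their parent's prefix, just as the article requires for a sub-divisible 'Landcoin':

    # Hierarchical cell id: digit k encodes which quadrant of the current cell
    # contains the point; longer ids mean smaller cells.
    def cell_id(lat, lon, depth=16):
        s, w, n, e = -90.0, -180.0, 90.0, 180.0
        digits = []
        for _ in range(depth):
            mid_lat = (s + n) / 2
            mid_lon = (w + e) / 2
            q = 0
            if lat >= mid_lat:
                s = mid_lat
                q += 2
            else:
                n = mid_lat
            if lon >= mid_lon:
                w = mid_lon
                q += 1
            else:
                e = mid_lon
            digits.append(str(q))
        return "".join(digits)

    print(cell_id(23.8, 90.4, 8))   # a coarse cell
    print(cell_id(23.8, 90.4, 16))  # same point, finer cell, shared prefix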
  9. 2 points
    Multifunction casing: you can run 3D games and grate cheese for your hamburger. Excellent thinking, Apple, as always LOL
  10. 2 points
    We are all already familiar with GPS navigation outdoors and the wonders it does not only for our everyday life, but also for business operations. Outdoor maps, allowing for navigation via car or by foot, have long helped mankind to find even the most remote and hidden places. Increased levels of efficiency, unprecedented levels of control over operational processes, route planning, monitoring of deliveries, safety and security regulations and much more have been made possible.

    Some places are, however, harder to reach and navigate than others: big indoor areas such as universities, hospitals, airports, convention centers or factories, among others. Luckily, that struggle is about to become a thing of the past. So what's the solution for navigating through and managing complex indoor buildings?

    Indoor Mapping and Visualization with ArcGIS Indoors
    The answer is simple – indoor mapping. Indoor mapping is a revolutionary concept that visualizes an indoor venue and spatial data on a digital 2D or 3D map. Showing places, people and assets on a digital map enables solutions such as indoor positioning and navigation. These, in turn, allow for many different use cases that help companies optimize their workflows and efficiencies.

    Mobile Navigation and Data
    The idea behind this solution is the same as outdoor navigation, except that it allows you to see routes and locate objects and people in a closed environment. As GPS signals are not available indoors, different technology solutions based on iBeacons, WiFi or lighting are used to create indoor maps and enable positioning services. You can plan a route indoors from point A to point B with customized pins and remarks, analyze whether facilities are being used to their full potential, discover new business opportunities, evaluate user behaviors and send them real-time targeted messages based on their location, intelligently park vehicles, and the list goes on!

    With the help of geolocation, indoor mapping stores and provides versatile real-time data on everything that is happening indoors, including the placement and condition of assets and human movements. This allows for a common operating picture, where all stakeholders share the same level of information and insight into internal processes. Having a centralized mapping system enables effortless navigation through all the assets and keeps facility managers updated on the latest changes, which ultimately improves business efficiency.

    Just think how many operational insights can be gained through visualizations of assets on your customized map – you can monitor and analyze the whole infrastructure and optimize performance accordingly. How do you engage your users and visitors at the right time and place? What does it take to improve security management? Are the workflow processes moving seamlessly? Answers to those and many other questions can be found in an indoor mapping solution. Interactive indoor experiences are no longer a thing of the future; they are here and now.

    source: https://www.esri.com/arcgis-blog/products/arcgis-indoors/mapping/what-is-indoor-mapping/
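    As a rough illustration of how beacon-based indoor positioning can work (a toy weighted-centroid sketch, not how ArcGIS Indoors or any particular product computes positions; the coordinates and RSSI values are made up):

    # Estimate a position from beacon coordinates and signal strength (RSSI):
    # stronger (less negative) signals pull the estimate closer to that beacon.
    def estimate_position(beacons):
        # beacons: list of (x, y, rssi_dbm) for beacons at known positions
        weights = [10 ** (rssi / 20.0) for _, _, rssi in beacons]
        total = sum(weights)
        x = sum(b[0] * w for b, w in zip(beacons, weights)) / total
        y = sum(b[1] * w for b, w in zip(beacons, weights)) / total
        return x, y

    print(estimate_position([(0, 0, -60), (10, 0, -70), (0, 10, -75)]))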
  11. 2 points
    Hi everybody! I'm an Italian architect who deals with GIS-related topics from time to time. I also like to draw my own sea maps for my chartplotter.
  12. 2 points
    SarVision was created in 2000 as a spin-off from Wageningen University (WUR) in the Netherlands. SarVision pioneers the operational application of systematic satellite monitoring and mapping systems for environmental and natural resource management. Our innovative systems provide our partners with the latest maps and information on agriculture and land use, forest cover change, fire and hydrology. Our in-house cutting-edge radar technology, which "sees" through clouds, smoke and haze, enables continuous land surface monitoring, updating data on a continuous basis (bi-weekly to yearly). SarVision contributes to numerous sustainable development efforts in tropical regions around the globe, working directly with organisations as diverse as space agencies, multilateral institutions, government agencies, local community associations, farmers, agribusiness, logging and plantation companies, nature conservation organisations, oil and gas companies, universities and insurance companies.

    Job description
    We are looking for a remote sensing expert to join our team. Together with SarVision experts, you will contribute to the development and implementation of operational services in the areas of agriculture, water, forest and land use mapping and monitoring. You will have the opportunity to apply and further develop your skills in:
    • The processing of satellite images: pre-processing tasks, image classification using in-house and external software packages;
    • GIS: quality control and validation, data analysis and presentation, integration of multiple data sources;
    • IT and programming: automation of processing tasks and processing chains, from data acquisition to delivery of the final product.
    You will mainly work in a team with SarVision remote sensing experts, but also carry out operational tasks autonomously.

    Requirements
    • A Bachelor's or Master's degree with a main focus on Remote Sensing, Geoinformatics, Geography, Agriculture, Forestry or a related area of expertise;
    • Professional experience in a remote sensing company would be beneficial;
    • Ability to work in a complex, multi-task team situation;
    • Willingness and ability to learn new skills quickly;
    • Ability to work under time pressure and respect deadlines, keeping track of long-term objectives;
    • Ability to travel occasionally to developing countries;
    • Very good English language skills; Dutch and/or Spanish advantageous.

    Technical skills:
    • Remote sensing background;
    • Experience in image processing for agriculture, forest, and land cover/land use applications;
    • Knowledge of statistical analyses (sampling design, accuracy assessment);
    • Programming skills: experience/knowledge of Python, GDAL; IDL, Matlab, R, C++, Java advantageous;
    • Experience with Linux and Bash: advantageous;
    • Experience with QGIS, PostGIS: advantageous;
    • Radar data processing and machine learning skills: advantageous.

    Duration & starting date
    We offer a fixed-term contract of 1 year, with the possibility of extension. Starting date as soon as possible.

    How to apply?
    Send a CV and motivation letter in English to Wilbert van Rooij ([email protected]) before June 25th, 2019.
    www.sarvision.nl
  13. 2 points
    Topcon Positioning Group’s Dave Henderson offers a rundown on the company’s latest products, including the Falcon 8+ drone, Sirius Pro, MR-2 modular receiver, and B210 and B125 receiver boards, at Xponential 2019. source: https://www.gpsworld.com/topcon-showcases-falcon-8-drone-sirius-pro-and-receiver-boards-at-xponential-2019/
  14. 2 points
    Klau Geomatics has released Real-Time Precise Point Positioning (PPP) for aerial mapping and drone positioning that enables 3 to 5 cm initial positioning accuracy, anywhere in the world, without any base station data or network corrections. With this, you just need to fly your drone, at any distance, anywhere. The system allows you to navigate with real-time cm-level positioning or geotag your mapping photos and lidar data.

    You don't need to think about setting up a base station, finding quality CORS data or setting up an RTK radio link. You don't need to be in range of a CORS station; you can fly autonomously, in remote areas, along long corridors, with unlimited range. It just works, giving you centimetre-level accuracy, anywhere.

    Now, with this latest satellite-based positioning technology, 3 to 5 cm accuracy can be achieved anywhere in the world, with no base station. KlauPPP leverages NovAtel's industry-leading technology to achieve this quantum leap in PPP accuracy. The NovAtel PPP and Klau Geomatics hardware/software system is now the simplest, most convenient and accurate positioning system for UAVs and manned aircraft. The bundled solution enables accurate positioning in any published or custom coordinate system and datum.

    This technology is very applicable to surveying, mapping, navigation and particularly the emerging drone inspection industry, which is starting to realize that absolute accuracy is essential to analyze change over time in 3D assets. A BVLOS parcel delivery drone can now travel across a country and arrive exactly on its landing pad. No range limitations, no base station requirements or radio links. Highly accurate autonomous flight.

    Large-scale enterprise drone companies can deploy their fleet of operators with a simple, mechanical workflow to capture accurate, repeatable data, without the complications of the survey world: RTK radio links, network connections, or logging base station data within range of each of their many projects. Now they have a simple, consistent operation that just works, every time, every location.

    "Just as Klau Geomatics led the industry from RTK and GCPs to PPK, we now lead the charge to PPP as the next technology for simple, accurate drone operations", says Rob Klau, Director of Klau Geomatics.

    source: http://geomatics.com.au/
  15. 2 points
    https://grasswiki.osgeo.org/wiki/Global_datasets

    Raster data

    Elevation data

    ASTER topography (GDEM V1)
    Improved ASTER GDEM 1 from 2009: GDEM global 30m elevation calculated from stereo-pair images collected by the Terra satellite. "This is the most complete, consistent global digital elevation data yet made available to the world." This is a very new dataset, at version 1 (treat as experimental). Accuracy will be improved in forthcoming versions (validation with SRTM, etc.; see assessment here and here).
    pre-release announcement | NASA press release | Warehouse Inventory Search Tool or Easy search tool (data download) | Tutorial: ASTER topography
    See also: ASTER GDEM 30m quality assessment

    ASTER topography (GDEM V2)
    Improved ASTER GDEM 2 from 2011: https://lpdaac.usgs.gov/products/aster_products_table/astgtm
    The ASTER GDEM covers land surfaces between 83°N and 83°S and is comprised of 22,702 tiles. Tiles that contain at least 0.01% land area are included. The ASTER GDEM is distributed as Geographic Tagged Image File Format (GeoTIFF) files with geographic coordinates (latitude, longitude). The data are posted on a 1 arc-second (approximately 30 m at the equator) grid and referenced to the 1984 World Geodetic System (WGS84) / 1996 Earth Gravitational Model (EGM96) geoid.
    Notes: this DEM can be rather well filtered and smoothed with Sun's denoising algorithm (using GDAL and the free/open source program <mdenoise>, or simply the GRASS add-on r.denoise). Experiments showed that the best smoothing of ASTER GDEM 2 is reached with these <mdenoise> parameters: threshold = 0.8, iterations = 10-20. Filtering with r.neighbors using the "average" method and a window size >= 5 is also quite useful to remove some noise from the DEM.
    See also: Validation of the ASTER Global Digital Elevation Model Version 2 over the Conterminous United States

    ACE2
    The ACE2 Global Digital Elevation Model is available at 3", 30" and 5' spatial resolutions.
    http://tethys.eaprs.cse.dmu.ac.uk/ACE2/
    Import example:

    r.in.bin -f input="00N105E_3S.ACE2" output="ACE2_00N105E" bytes=4 \
       order="native" north=15 south=0 east=120 west=105 \
       rows=18000 cols=18000

    CleanTOPO2 (DEM)
    CleanTOPO2 download: Edited SRTM30 Plus World Elevation Data
    Import in GRASS:

    r.in.gdal CleanTOPO2.tif out=cleanTOPO2.tmp -l -o
    g.region rast=cleanTOPO2.tmp -p -g
    # rescale from odd integer values to true world values
    r.rescale cleanTOPO2.tmp out=cleanTOPO2 to=-10701,8248
    r.colors cleanTOPO2 col=terrain

    [Figure: Rescaled CleanTOPO2 map]

    EGM2008 Geoid Data (Earth Gravitational Model)
    Global 2.5 Minute Geoid Undulations: download GIS Format at http://earth-info.nga.mil/GandG/wgs84/gravitymod/egm2008/egm08_gis.html
    [Figure: Geoid undulations in Trentino, Italy]
    Verification of points can be done with http://geographiclib.sourceforge.net/cgi-bin/GeoidEval

    ETOPO (DEM)
    The ETOPO datasets provide global topography and bathymetry at 1', 2', and 5' per-cell resolutions.

    ETOPO1 (DEM)
    http://www.ngdc.noaa.gov/mgg/global/
    The cell-registered version can be loaded directly into a lat/lon location. GRASS raster data is cell registered (see the GRASS raster semantics page).
    Special care must be taken with the grid-registered version. It cannot be loaded directly into a lat/lon location, as the parameters found in the .hdr file exceed the limits of polar coordinate space: they have N,S rows which go 1/2 a cell beyond 90 latitude when considered in the cell-registered convention.
    So the data needs to have those 90deg N,S rows cropped away, and while we're at it we crop away a redundant overlapping column at 180 longitude. To do this we have to first tell the GIS a little fib during import to squeeze the data into lat/lon space, then crop away the spurious rows and column, then finally reset the resulting map's bounds to its true extent.

    # Import grid registered binary float, fibbing about its true extent
    r.in.bin -f in=etopo1_bed_g.flt out=etopo1_bed_g.raw \
       n=90 s=-90 e=180 w=-180 rows=10801 cols=21601 anull=-9999

    # reduce the working region by 1 cell
    g.region rast=etopo1_bed_g.raw
    eval `g.region -g`
    g.region n=n-$nsres s=s+$nsres e=e-$ewres -p

    # save smaller raster and remove original
    r.mapcalc "etopo1_bed_g.crop = etopo1_bed_g.raw"
    g.remove etopo1_bed_g.raw

    # re-establish the correct bounds, now that they'll fit
    r.region etopo1_bed_g.crop n=89:59:30N s=89:59:30S w=179:59:30E e=179:59:30E
    g.region rast=etopo1_bed_g.crop

    # check that N,S,E,W and Res are all nice and clean:
    r.info etopo1_bed_g.crop

    # looks good, so accept the results by resetting the map name
    g.rename etopo1_bed_g.crop,etopo1_bed_g

    # set to use appropriate color rules
    r.colors etopo1_bed_g color=etopo2

    # set the 'units' metadata field (for elevation data contained within the map)
    r.support etopo1_bed_g units=meters

    For the problematic grid-registered version, the resulting r.info report should look like:

    | Rows: 10799 |
    | Columns: 21600 |
    | Total Cells: 233258400 |
    | Projection: Latitude-Longitude |
    | N: 89:59:30N S: 89:59:30S Res: 0:01 |
    | E: 179:59:30E W: 179:59:30E Res: 0:01 |
    | Range of data: min = -10898 max = 8271 |

    (the east and west bounds of the map touch 1/2 a cell west of 180 longitude)

    For the problematic grid-registered version, since the data's grid is 1/2 a cell shifted from nicely rounded 1 arc-minutes (0:01), you'll need to ensure that the mapset's region preserves that alignment after zooming or panning:

    g.region align=etopo1_bed_g -p

    (or oversample and set the region resolution to 1/2 arc-minutes (0:00:30), which will be four times as slow)

    ETOPO2 (DEM)
    See the ETOPO2 (2' global) article by M.H. Bowman in the GRASS Newsletter, 1:8-11, August 2004.
    ETOPO2v2 data download (take for example the ETOPO2v2g_f4_LSB.flt file)

    GTOPO30 (DEM)
    Data download - import with r.in.gdal.
    Note: to avoid the GTOPO30 data being read incorrectly, you can add a new line "PIXELTYPE SIGNEDINT" in the .HDR file to force interpretation of the file as signed rather than unsigned integers. Then the .DEM file can be imported. Finally, e.g. the 'terrain' color table can be assigned to the imported map with r.colors.

    Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010)
    Data download: Web and FTP - import with r.in.gdal.
    See also the related GDAL blog post.

    Tiles - import of GMTED2010 tiles in GRASS GIS:

    r.in.gdal 30N000E_20101117_gmted_mea075.tif out=gmted2010_30N000E_20101117
    r.colors gmted2010_30N000E_20101117 color=elevation
    g.region rast=gmted2010_30N000E_20101117
    r.relief input=gmted2010_30N000E_20101117 output=gmted2010_30N000E_20101117.shade
    r.shade shade=gmted2010_30N000E_20101117.shade color=gmted2010_30N000E_20101117 \
       output=gmted2010_30N000E_20101117_shaded
    d.mon wx0
    d.rast gmted2010_30N000E_20101117_shaded
    d.grid 1 color=red textcolor=red

    [Figure: GMTED2010 example - Trento, Garda Lake, Verona area (Northern Italy)]

    Full maps:

    # mean elevation global GMTED2010 map, 30 arc-sec
    wget http://edcintl.cr.usgs.gov/downloads/sciweb1/shared/topo/downloads/GMTED/Grid_ZipFiles/mn30_grd.zip
    unzip mn30_grd.zip

    Important: the GMTED2010 map exceeds the -180°..+180° range due to the GMTED2010 pixel geometry (PDF). Note that this cannot be handled in GRASS GIS < 7.4. Please update to GRASS GIS 7.4 or newer.

    GEBCO Bathymetric Chart
    The General Bathymetric Chart of the Oceans (original 1' release 2003, new 1' and 30" releases 2008)
    http://www.gebco.net/data_and_products/gridded_bathymetry_data/
    http://www.bodc.ac.uk/data/online_delivery/gebco/
    r.in.gdal can be used to import the GMT netCDF files directly, or if that doesn't work you can use GMT tools to convert to an old-style native GMT format and import that with r.in.bin.
    Example (GEBCO 2003 1' data):

    # convert to an old style GMT binary .grd using grdreformat
    $ grdreformat 3n24s47w14w.grd 3n24s47w14w_Native.grd=bs
    # then import into GRASS,
    GRASS> r.in.bin -h -s bytes=2 in=3n24s47w14w_Native.grd out=3n24s47w14w
    # and set some nice colors
    GRASS> r.colors 3n24s47w14w rules=- << EOF
      nv magenta
      0% black
      -7740 0:0:168
      0 84:176:248
      0 40:124:0
      522 68:148:24
      1407 148:228:108
      1929 232:228:108
      2028 232:228:92
      2550 228:160:32
      2724 216:116:8
      2730 grey
      2754 grey
      2760 252:252:252
      2874 252:252:252
      2883 192:192:192
      2913 192:192:192
      100% 252:252:252
    EOF

    Global Multi-Resolution Topography (GMRT DEM)
    From Columbia University's Lamont-Doherty Earth Observatory (it is reported that this is what Google Maps uses for their global bathymetry). Global ~1 arc-second (~90 m) topography using multi-beam and satellite data in the oceans combined with SRTM on land.
    Full information at: http://www.marine-geo.org/portals/gmrt/
    Accessible via GeoMapApp or Virtual Ocean software. Very convenient to download into GRASS via wget:

    export `g.region -g`
    wget "http://www.marine-geo.org/cgi-bin/getgridB?west=${w}&east=${e}&south=${s}&north=${n}&resolution=1" -O /tmp/test.grd
    r.in.gdal /tmp/test.grd output=GMRT -o
    rm /tmp/test.grd

    Note: the downloaded file contains no projection information, but is EPSG:4326 (WGS84 geographic). The file size is limited, but lower resolution (resolution=2,4,8) data can be downloaded for larger areas.
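    The same GMRT download can also be scripted from Python inside a GRASS session - a sketch assuming the standard grass.script API and the getgridB URL pattern shown above:

    # Fetch GMRT for the current GRASS region and import it (region keys
    # n/s/w/e come from g.region; the "o" flag overrides the projection check).
    import urllib.request
    import grass.script as gs

    reg = gs.region()
    url = ("http://www.marine-geo.org/cgi-bin/getgridB?west={w}&east={e}"
           "&south={s}&north={n}&resolution=1").format(**reg)
    urllib.request.urlretrieve(url, "/tmp/gmrt.grd")
    gs.run_command("r.in.gdal", input="/tmp/gmrt.grd", output="GMRT", flags="o")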
    Smith and Sandwell DEM
    Merge info here from the Marine Science wiki page.

    SRTM DEM
    Space Shuttle Radar Topography Mission - several SRTM data products are available:

    • Original data - SRTM 3 V001 arc-seconds Non-Void Filled elevation data (US: 1 arc-second, approximately 30 meters; outside the US: 3 arc-seconds, approximately 90 meters)
    • SRTM V003 3 Arc-Second Global Void Filled elevation data, with voids filled using interpolation algorithms in conjunction with other sources of elevation data (US: 1 arc-second, approximately 30 meters; outside the US: 3 arc-seconds, approximately 90 meters). SRTM V3 tiles at 3 arc-seconds resolution from: http://e4ftl01.cr.usgs.gov/SRTM/SRTMGL3.003/2000.02.11/ - or simply use r.in.srtm.region
    • SRTM V003 1 Arc-Second Global elevation data offer worldwide coverage of void-filled data at a resolution of 1 arc-second (30 meters) and provide open distribution of this high-resolution global data set. EarthExplorer can be used to search, preview, and download Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global data. The collections are located under the Digital Elevation category. FTP download: http://e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/ - or simply use r.in.srtm.region
    • Web-based 30-Meter SRTM Tile Downloader (select from map)

    Import: using r.in.gdal, r.import, r.in.srtm or r.in.srtm.region; see HOWTO import SRTM elevation data, focused on the SRTM 3 arc-seconds Non-Void Filled elevation data.

    SRTM30plus data DEM
    SRTM30plus data consists of 33 files of global topography in the same format as the SRTM30 products distributed by the USGS EROS data center. The grid resolution is 30 arc-seconds, which is roughly one kilometer (1 km). Land data are based on the 1-km averages of topography derived from the USGS SRTM30 gridded DEM data product created with data from the NASA Shuttle Radar Topography Mission. GTOPO30 data are used for high latitudes where SRTM data are not available. Ocean data are based on the Smith and Sandwell global 2-minute grid between latitudes +/- 72 degrees. Higher resolution grids have been added from the LDEO Ridge Multibeam Synthesis Project and the NGDC Coastal Relief Model. Arctic bathymetry is from the International Bathymetric Chart of the Oceans (IBCAO). All data are derived from public domain sources and these data are also in the public domain.
    The GRASS 6 script r.in.srtm described in GRASS News vol. 3 won't work with this dataset (as it was made for the original SRTM HGT files). But you can import SRTM30plus tiles into GRASS this way:

    r.in.bin -sb input=e020n40.Bathymetry.srtm output=e020n40_topex bytes=2 \
       north=40 south=-10 east=60 west=20 r=6000 c=4800
    r.colors e020n40_topex rules=etopo2

    Source: GRASS users mailing list, http://lists.osgeo.org/pipermail/grass-user/2005-August/030063.html
    Getting SRTM30plus tiles: ftp://topex.ucsd.edu/pub/srtm30_plus/srtm30/data/
    Getting SRTM30plus as one huge file: ftp://topex.ucsd.edu/pub/srtm30_plus/topo30/
    SRTMPLUS WCS server: http://svn.osgeo.org/gdal/trunk/autotest/gdrivers/data/srtmplus.wcs (read with r.external)

    SRTM Water Body Database
    SRTMSWBD V003 SRTM Water Body Database V003
    Format documentation: https://lpdaac.usgs.gov/dataset_discovery/measures/measures_products_table/srtmswbd_v003
    FTP raster data (30m res water bodies): http://e4ftl01.cr.usgs.gov/SRTM/SRTMSWBD.003/
    Import into GRASS GIS 7 (lat-long location):

    r.in.bin -sb input=N00E108.raw output=N00E108_swbd bytes=1 north=0 south=-10 east=108 west=98 r=3601 c=3601
    <<= DRAFT - TODO fix n,s,e,w - calculate from filename
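    The DRAFT note above asks for north/south/east/west to be computed from the tile name; SWBD/SRTM tiles are 1x1 degree and named after their south-west corner, so a small helper can do it - a sketch (the function name is made up):

    # Derive r.in.bin bounds from a tile name such as "N00E108".
    import re

    def tile_bounds(name):
        m = re.match(r"([NS])(\d{2})([EW])(\d{3})", name)
        lat = int(m.group(2)) * (1 if m.group(1) == "N" else -1)
        lon = int(m.group(4)) * (1 if m.group(3) == "E" else -1)
        # the name gives the south-west corner of a 1-degree tile
        return {"north": lat + 1, "south": lat, "east": lon + 1, "west": lon}

    print(tile_bounds("N00E108"))
    # {'north': 1, 'south': 0, 'east': 109, 'west': 108}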
Since it has the same columns as HWSD_Q.csv, we can reuse that file's column types:

cp HWSD_Q.csvt domSoil.csvt

Then we can load domSoil.csv:

db.in.ogr \
  input=~/grassdata/downloads/harmonized_world_soil_database/domSoil.csv \
  output=domSoil

Now at last we can connect the database to the vector file:

v.db.connect -o map=hwSoil table=domSoil driver=sqlite key=MU_GLOBAL

To create a new raster map taking the values from the table:

g.region raster=hwSoil   ## make sure we get the whole map
v.to.rast in=hwSoil out=T_SAND col=T_SAND

SoilGrids.org 250m soil taxonomy map
SoilGrids is a system for automated soil mapping based on global soil profile and environmental covariate data. SoilGrids represents a collection of updatable soil property and class maps of the world at 1 km and 250 m spatial resolution, produced using automated soil mapping based on machine learning algorithms. It aims at becoming the OpenStreetMap and/or OpenWeatherMap of soil data. SoilGrids predictions are updated on a regular basis (at least every few months). For more details about the SoilGrids system, please refer to the SoilGrids project site: https://www.soilgrids.org/#/?layer=geonode:taxnwrb_250m

URL=ftp://ftp.soilgrids.org/data/recent/TAXNWRB_250m_ll.tif
# the SoilGrids GeoTIFF data suffer from a resolution precision problem since they were produced with SAGA:
# resolution is stored as 0.002083333000000 while it should be 0.002083333333333, hence the geometry is not fully correct
# this likely originates from SoilGrids being processed in SAGA, which cuts decimals after the 10th decimal place, hence the precision problem
export NAME=`basename $URL .tif`
wget $URL

OLD OLD OLD start -- # see below for the better way to fix SoilGrids data
gdal_translate --config GDAL_CACHEMAX 2000 -a_ullr $COORDS -co "COMPRESS=DEFLATE" $NAME.tif ${NAME}_fixed.tif
gdalinfo ${NAME}_fixed.tif
grass72 -c ${NAME}_fixed.tif ~/grassdata/latlong --exec r.import input=${NAME}_fixed.tif output=${NAME}
-- OLD OLD OLD end

New fix & import method: starting with GRASS GIS 7.4.x, there is a new flag in r.in.gdal to auto-adjust such small resolution precision issues:
-a - Auto-adjustment for lat/lon. Attempt to fix small precision errors in resolution and extents.

r.in.gdal -a input=TAXNWRB_250m_ll.tif output=TAXNWRB_250m_ll

Landcover data

ESA Globcover dataset
Download: http://due.esrin.esa.int/page_globcover.php
Or via command line:

wget http://due.esrin.esa.int/files/Globcover2009_V2.3_Global_.zip
unzip Globcover2009_V2.3_Global_.zip
# rm -f Globcover2009_V2.3_Global_.zip

Note, a coloured version of the map in GeoTIFF format is also available at: http://due.esrin.esa.int/files/GLOBCOVER_L4_200901_200912_V2.3.color.tif

Unfortunately the Globcover map exceeds the -180°..+180° range, indicating a shift of the map (see also this assessment by DWD):

gdalinfo GLOBCOVER_L4_200901_200912_V2.3.tif
Driver: GTiff/GeoTIFF
Files: GLOBCOVER_L4_200901_200912_V2.3.tif
Size is 129600, 55800
Coordinate System is:
GEOGCS["WGS 84",
...
Origin = (-180.001388888888897,90.001388888888883)
...
Corner Coordinates:
Upper Left  (-180.0013889,  90.0013889) (180d 0' 5.00"W, 90d 0' 5.00"N)
Lower Left  (-180.0013889, -64.9986111) (180d 0' 5.00"W, 64d59'55.00"S)
Upper Right ( 179.9986111,  90.0013889) (179d59'55.00"E, 90d 0' 5.00"N)
Lower Right ( 179.9986111, -64.9986111) (179d59'55.00"E, 64d59'55.00"S)
Center      (  -0.0013889,  12.5013889) ( 0d 0' 5.00"W, 12d30' 5.00"N)
...

How to fix this?
Option 1: Use the -l flag of r.in.gdal to constrain the map coordinates to legal values. But the resulting pixels will no longer have the original resolution, so we will not do that.

Option 2: Shift the Globcover map slightly into the right position using gdal_translate:

# coords are shifted, fix raster map
# -a_ullr: assign/override the georeferenced bounds of the output file
# use a larger cache and compress the result
gdal_translate --config GDAL_CACHEMAX 1200 -a_ullr -180 90 180 -65 \
  -co "COMPRESS=LZW" GLOBCOVER_L4_200901_200912_V2.3.tif GLOBCOVER_L4_200901_200912_V2.3_fixed.tif

# result:
gdalinfo GLOBCOVER_L4_200901_200912_V2.3_fixed.tif
...
Origin = (-180.000000000000000,90.000000000000000)
Pixel Size = (0.002777777777778,-0.002777777777778)
...
Corner Coordinates:
Upper Left  (-180.0000000,  90.0000000) (180d 0' 0.00"W, 90d 0' 0.00"N)
Lower Left  (-180.0000000, -65.0000000) (180d 0' 0.00"W, 65d 0' 0.00"S)
Upper Right ( 180.0000000,  90.0000000) (180d 0' 0.00"E, 90d 0' 0.00"N)
Lower Right ( 180.0000000, -65.0000000) (180d 0' 0.00"E, 65d 0' 0.00"S)
Center      (   0.0000000,  12.5000000) ( 0d 0' 0.01"E, 12d30' 0.00"N)

Voilà! Now we can import the map into GRASS GIS:

r.in.gdal input=GLOBCOVER_L4_200901_200912_V2.3_fixed.tif output=esa_globcover2009

Legend conversion: the ZIP file contains an XLS table describing the classes and the RGB colors. ogr2ogr can directly convert XLS --> CSV:

ogr2ogr -f CSV Globcover2009_Legend.csv Globcover2009_Legend.xls

Applying the legend:

# suppress the table header and only consider category value and label, apply on the fly:
cat Globcover2009_Legend.csv | grep -v '^Value' | cut -d',' -f1-2 | r.category esa_globcover2009 separator=comma rules=-

# verify (0E, 0N is in the Atlantic Ocean)
r.what esa_globcover2009 coor=0,0 -f
0|0||210|Water bodies

Global Forest Change
http://earthenginepartners.appspot.com/science-2013-global-forest
Download info: http://earthenginepartners.appspot.com/science-2013-global-forest/download_v1.3.html

Imagery

AVHRR
see the AVHRR wiki page

Blue Marble imagery
NASA's Blue Marble is a 500m-8 degree per-cell worldwide visual image of the Earth from space, with the clouds removed. See the Blue Marble wiki page.

EO-1 imagery (Earth Observing-1)
"Advanced Land Imager (ALI) provides image data from ten spectral bands (band designations). The instrument operates in a pushbroom fashion, with a spatial resolution of 30 meters for the multispectral bands and 10 meters for the panchromatic band." -- http://eros.usgs.gov/products/satellite/eo1.php
On-board Atmospheric Corrections

Global Land Cover Characteristics
USGS et al. generated dataset at 1km resolution. Provides global landcover characteristics. See the Global Land Cover Characteristics wiki page.

LANDSAT imagery
Since October 1, 2008 all Landsat 7 ETM+ scenes held in the USGS EROS archive are available for download at no charge.
Download via the Glovis online search tool (req. Java)
Download via the USGS EarthExplorer interface

Import Modules
r.in.gdal - main import tool for complete multiband scenes
r.in.wms - download data covering the current map region via a WMS server
r.in.onearth - WMS frontend for NASA's OnEarth Global Landsat Mosaic

Color balancing modules
i.landsat.rgb (GRASS 6.x) | i.colors.enhance (GRASS 7.x) - color balancing/enhancement tool

See also: processing tips can be found on the LANDSAT wiki page.

ESA Sentinel imagery
All Sentinel 1 and 2 data are available for download from the Open Access Hub, via the online interactive interface or via the API.
For pre-processing, different tools are available at http://step.esa.int/main/ (Science Toolbox Exploitation Platform)
https://github.com/Fernerkundung/awesome-sentinel ("Awesome Sentinel" - a list of tools)

Miscellaneous

Data sources
Some data source links:
http://www.ruf.rice.edu/~ben/gmt.html
Geotorrent.org

Import Modules
The r.in.gdal module may be used to import data in many formats, including GMT netCDF.
The r.in.bin module may be used to import raw binary files.

MODIS imagery
see the MODIS wiki page

Natural Earth imagery
Natural Earth II: world environment map in natural color. GeoTIFF (use the r.in.gdal module). See also the 1:10 million, 1:50 million and 1:110 million scale maps from http://www.naturalearthdata.com/

Orthoimagery
Sources of free orthoimagery

Pathfinder AVHRR SST imagery
see the Pathfinder AVHRR SST wiki page

QuickBird imagery
see the QuickBird wiki page

SeaWiFS imagery
see the SeaWiFS wiki page

SPOT Vegetation imagery
SPOT Vegetation (1km) global: NDVI data sets
SPOT Vegetation (1km, global) NDVI data set server
For import, see i.in.spotvgt.

True Marble imagery
True Marble: 250m worldwide visual image of the Earth from space, with the clouds removed. GeoTIFF (use the r.in.gdal module).

Climatic data

OGC WCS - Albedo example
TODO: update this example, e.g. to http://demo.mapserver.org/cgi-bin/wcs?SERVICE=wcs&VERSION=1.0.0&REQUEST=GetCapabilities
GRASS imports OGC Web Coverage Service data. Example server (please suggest a better one!):

<WCS_GDAL>
  <ServiceURL>http://laits.gmu.edu/cgi-bin/NWGISS/NWGISS?</ServiceURL>
  <CoverageName>AUTUMN.hdf</CoverageName>
  <Timeout>90</Timeout>
  <Resample>nearest</Resample>
</WCS_GDAL>

Save this as albedo.xml. Import into a LatLong WGS84 location:

r.in.gdal albedo.xml out=albedo

Unfortunately this server sends out the map shifted by 0.5 pixel. This requires a fix to the map boundary coordinates:

r.region albedo n=90 s=-90 w=-180 e=180

Now apply a color table and look at the map:

r.colors albedo color=byr
d.mon x0
d.rast albedo

SNODAS maps
Snow Data Assimilation System data that support hydrological modeling and analysis. First download the data and untar them (once for each month, and once for each day); you should get pairs of ".dat" and ".Hdr" files.
The data files are stored in flat 16-bit binary format, so assuming that "snowdas_input.dat" is the name of the input file, at the GRASS prompt:

r.in.bin -bs bytes=2 rows=3351 cols=6935 north=52.874583333332339 \
  south=24.949583333333454 east=-66.942083333334011 west=-124.733749999998366 \
  anull=-9999 input=snowdas_input.dat output=snowdas

CHELSA climate maps
CHELSA - Climatologies at high resolution for the earth's land surface areas - is a high resolution (30 arc-sec) climate data set for the earth's land surface areas, currently under development; see http://chelsa-climate.org/
Version 1.1 has some coordinate issues originating from SAGA being used (coordinate precision issue), see http://chelsa-climate.org/known-issues/

# WARNING: dirty hack - better wait for the new release V1.2 of CHELSA!
for i in `ls /scratch/chelsa_climate/*.zip` ; do
  unzip $i
  NAME=`basename $i .zip`
  gdal_translate --config GDAL_CACHEMAX 2000 -a_ullr -180 84 180 -90 -co "COMPRESS=DEFLATE" $NAME.tif ${NAME}_fixed.tif
  rm -f $NAME.tif
done

WorldClim maps
WorldClim is a set of global climate layers (climate grids) with a spatial resolution of about one square kilometer. Besides long-term average climate layers (representing the period 1950-2000) it also includes projections of future conditions based on downscaled global climate model (GCM) data from CMIP5 (IPCC Fifth Assessment) and projections of past conditions (downscaled global climate model output).
Load into a Lat/Lon WGS84 location (EPSG:4326). The data set is provided in two formats: BIL and ESRI Grid. Import with r.in.bin or r.in.gdal. Version 1.4 has some coordinate issues:

a) BIL: the binary format is 2-byte integer, with values stored as ten times the actual value. Multiply by 0.1 using r.mapcalc to convert to data units. See http://www.worldclim.org/format.htm for more information and the MODIS help page for an example of converting raw values to data units. Note that the file header is missing a line. To fix:

# fix WorldClim's BIL; tmean example
for i in $(seq 1 12); do echo "PIXELTYPE SIGNEDINT" >> tmean$i.hdr; done

b) ESRI Grid files: note that the WorldClim ESRI Grid files suffer from a coordinate precision issue. See here for a solution.

# fix WorldClim's ESRI Grid; tmean example
export GDAL_CACHEMAX=2000
mkdir -p ~/tmp/
# fix broken WorldClim files, see https://lists.osgeo.org/pipermail/grass-user/2011-January/059358.html
# note: 60S, not 90S
for i in $(seq 1 12); do
  gdal_translate -a_ullr -180 90 180 -60 tmean_$i $HOME/tmp/tmean_${i}_fixed.tif
done
#
# import
for i in $(seq 1 12) ; do
  r.in.gdal input=$HOME/tmp/tmean_${i}_fixed.tif out=tmp --o
  g.region raster=tmp -p
  r.mapcalc "tmean_${i} = 0.1 * tmp" --o
  r.colors tmean_${i} color=celsius
done
#
# clean up
g.remove -f type=raster name=tmp
rm -f ~/tmp/tmean_?_fixed.tif ~/tmp/tmean_??_fixed.tif

Africlim maps
Africlim provides four baseline data sets for current climate, including:
CRU CL 2.0
WorldClim v1.4
TAMSAT TARCAT v2.0 (rainfall only)
CHIRPS v1.8 (rainfall only)
It furthermore provides data sets with projections of future climates based on combinations of ten general circulation models (GCMs), downscaled using five regional climate models (RCMs) and the four above-mentioned contemporary baselines, under two representative concentration pathways of the IPCC-AR5 (RCP4.5 and RCP8.5). The data layers are available as GeoTIFF files at spatial resolutions of 10', 5', 2.5', 1' and 30".
Population maps
WorldPop: http://www.worldpop.org.uk/
Gridded Population of the World: http://sedac.ciesin.columbia.edu/gpw/global.jsp
Import with r.in.gdal, assign the population color table with r.colors.

Topographic maps

Soviet topographic maps
Soviet topographic maps as geocoded GeoTIFFs

Vector data

Natural Earth
http://www.naturalearthdata.com/ - data scaled for 1:10 million, 1:50 million and 1:110 million

CDC Geographic Boundary and Public Health Maps
http://www.cdc.gov/epiinfo/maps.htm

Global Administrative Areas
GADM is a database of the location of the world's administrative areas (boundaries), available as shapefiles. http://gadm.org (extracted by country here)

World Borders Dataset
Including ISO 3166-1 country codes, available as shapefiles. http://thematicmapping.org/downloads/world_borders.php

Free GIS data from Mapping Hacks
http://mappinghacks.com/data/

GSHHS World Coastline
GSHHS is a high resolution shoreline dataset. It is derived from data in the public domain and licensed as GPL. The shorelines are constructed entirely from hierarchically arranged closed polygons. It is closely linked to the GMT project.

Availability:
Download the original data set from http://www.soest.hawaii.edu/pwessel/gshhg/index.html. Also available at http://www.ngdc.noaa.gov/mgg/shorelines/data/gshhg/latest/.
The data set, or parts of it, can be extracted with NOAA's shoreline extractor.
For GRASS 6 you can download 1:250,000 shoreline data from NOAA's site in Mapgen format, which can be imported with the v.in.mapgen module.
ESRI Shapefiles of the latest version are available at http://www.ngdc.noaa.gov/mgg/shorelines/data/gshhg/latest/. The old 1.6 version is available at ftp://ftp.ihg.uni-duisburg.de/GIS/GISData/GSHHS/.

Import: use the GRASS 6 add-on module v.in.gshhs.

OpenStreetMap
See the OpenStreetMap wiki page.

Administrative boundaries from OpenStreetMap
For a convenient download in GeoJSON and SHAPE, see https://wambachers-osm.website/boundaries/ (using the almost invisible triangle, you can pop out details of a country down to admin level 8)

SALB
Second Administrative Level Boundaries: "The SALB dataset is a global digital dataset consisting of digital maps and codes that can be downloaded on a country by country basis." http://www.who.int/whosis/database/gis/salb/salb_home.htm

VMap0
1:1 million vector data, formerly known as the Digital Chart of the World. See the two articles in GRASS Newsletter vol. 3 (June 2005).
Check the Wikipedia page on VMAP; see the links at the bottom of that article to shapefile versions of VMAP0 and VMAP1. Those look like the versions that were, several years ago, on a NIMA (predecessor to NGA, and successor to the Defense Mapping Agency that managed the Digital Chart of the World and VMAP project) website. Many GRASS users may prefer the shapefiles to the original Vector Product Format data.
VMap0 data in ESRI shape format

See also
Global datasets list by T. Hengl (with dataset download)
http://freegisdata.rtwilson.com/
The FreeGIS.org database: http://www.freegis.org/database/
http://finder.geocommons.com/
http://wiki.openstreetmap.org/wiki/Potential_Datasources
http://www.geonames.org/data-sources.html
Open Knowledge Foundation link collection

Open Weather Map
Free weather data and a forecast API suitable for any cartographic service, such as web and smartphone applications. Its ideology is inspired by OpenStreetMap and Wikipedia, which make information free and available for everybody.
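As a quick illustration of the OpenWeatherMap API mentioned above, a minimal command-line sketch; the endpoint is the standard current-weather one from the OWM docs, and the city name and YOUR_API_KEY are placeholders:

# query current weather for a city as JSON (YOUR_API_KEY is a placeholder)
curl "https://api.openweathermap.org/data/2.5/weather?q=Heidelberg&units=metric&appid=YOUR_API_KEY"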
Metadata Catalogues
Catalog Service for the Web (CSW) is an OGC standard for offering access to catalogues of geospatial information over the Internet (HTTP). CSW allows for discovering, browsing, and querying metadata about data, services, and similar resources.
A list of Metadata Catalogues / CSW services from member states of the European Union can be found here: http://inspire-geoportal.ec.europa.eu/INSPIRERegistry/
And here one can search European Metadata Catalogues online: http://inspire-geoportal.ec.europa.eu/discovery/

European datasets
Global Risk Data Platform
European Commission Opendata Portal: 5800+ datasets
E-OBS: the download page for the ENSEMBLES daily gridded observational dataset for precipitation, temperature and sea level pressure in Europe
MARS @ JRC: temperature, vapour pressure, rainfall, relative humidity, cloud cover, solar radiation, wind speed
EFAS @ JRC: a high resolution pan-European dataset for hydrologic modelling
JRC Data Portal: in this catalogue you can find an inventory of data produced by the JRC in accordance with the JRC data policy. The content is continuously updated and shall not be seen as a complete inventory of JRC data. Currently, the inventory describes only a small subset of JRC data.

National datasets
Australian Spatial Data Directory
Australian Ecological Knowledge and Observation System
Italian Geodata collection
New Zealand data from Koordinates.com
United States data from NOAA/USGS's data portal (FIXME: link?)
Greek Public Geodata (in Greek)

Various datasets worldwide
GEOSPATIAL DATA REPORT: Finding and Using GIS Data
Edenext data portal: land cover, transport networks, elevation, orthoimagery, human health and safety, species distribution, atmospheric conditions and meteorological geographical features, training program presentations and data, utility and governmental services, hydrography, soil, bio-geographical regions, population distribution and demographics
Global Data Explorer (USGS): ASTER, SRTM, GTOPO etc.
landcover
GRIPWEB's Data & Informational Portal: hazard & risk
SEDAC: agriculture, climate, conservation, framework data, governance, hazards, health, infrastructure, land use, marine and coastal, population, poverty, remote sensing, sustainability, urban, water
Prevention Web: hazard & risk
UNdata: UN database
UNDP home page
global climatic data
Consortium for Spatial Information (CGIAR-CSI) GeoSpatial Toolkits
Links to over 300 sites providing freely available geographic datasets
Free Spatial Data
Global Land Cover Characteristics Data Base Version 2.0
A Portal to High-Resolution Topography Data and Tools
HadGHCND: a gridded daily temperature dataset based upon near-surface maximum (TX) and minimum (TN) temperature observations

WMS servers
See the WMS page

River discharge data
Global Runoff Data Centre
Global River Discharge Database
CSDMS
  16. 2 points
What is a social network, anyway? To a consumer, a social network might be a place to share memes, cat photos and selfies. But to a business, a social network is a place to bolster and defend branding, share product information, interact with customers and participate in relevant conversations with the world.

Businesses have websites. So why do they need to be on social? Because social is where the customers are, and where customers go to praise or complain about companies to each other — or to find out information about products and services. Which raises the question: which social networks should businesses and enterprises invest their time and money in?

The trouble with Twitter and Facebook

Twitter isn’t an ideal place for businesses to engage with customers and others. Journalists and celebrities will tell you Twitter is the only social network that matters. But that’s because the site is used heavily mainly by journalists and celebrities. (And haters, spammers, propagandists and bots.) In fact, Twitter reported the loss of some 9 million users during the third quarter. (Which isn’t really true; they weren’t “users,” but mostly fake and bot accounts.) Twitter now has 326 million active users worldwide.

It’s also something of a bad neighborhood for business. Unlike on Facebook, porn and other unsavory content are allowed on Twitter. And the site is a notorious magnet for haters, racists, misogynists, terrorists, trolls and spammers. Worst of all, and unlike on Facebook and other social sites, you can’t delete the comments that appear under your own posts. If you post something, and the conversation is hijacked by malicious users seeking to ruin your reputation, there’s nothing you can do about it. As a business on Twitter, you’re just a target.

Facebook is even less appealing for businesses. In recent days and, it seems, increasingly over time, Facebook just keeps getting horrible press, with the company blamed for manipulation, tracking, abuse, dishonesty and incompetence. The most recent PR black eye came in the form of a scathing New York Times investigative piece alleging that Facebook officers first ignored, then concealed, the truth about Russia’s disinformation campaign on Facebook before the 2016 U.S. presidential election. Facebook Chief Operating Officer Sheryl Sandberg this week disputed the report and defended her actions in a Facebook post.

The article also alleged that Facebook hired a PR firm called Definers Public Affairs to spread misinformation on behalf of Facebook. A company blog post this week denied that Facebook tasked Definers to write fake articles and said Facebook fired the firm this week. That blog post also references an anti-Facebook organization called Freedom from Facebook, which is pushing the Federal Trade Commission to investigate a recent breach of 30 million user accounts and also calling for the breakup of the company, arguing that “the FTC should spin off Instagram, WhatsApp, and Messenger into competing networks.”

Summarizing the broad complaint of Facebook critics generally, the organization’s web page says: “Facebook and Mark Zuckerberg have amassed a scary amount of power. Facebook unilaterally decides the news that billions of people around the world see every day. It buys up or bankrupts potential competitors to protect its monopoly, killing innovation and choice. It tracks us almost everywhere we go on the web and, through our smartphones, even where we go in the real world.
It uses this intimate data hoard to figure out how to addict us and our children to its services. And then Facebook serves up everything about us to its true customers — virtually anyone willing to pay for the ability to convince us to buy, do, or believe something.”

As a result of its declining public esteem, advertisers are starting to openly question whether Facebook has lost its “compass,” according to a piece published this week by The New York Times. So if Twitter and Facebook are bad for business, what’s the alternative?

Here comes the ‘good’ social network for business

Businesses need social networks to bolster and defend branding, share product information, interact with customers and participate in relevant conversations with the world. But which one? Increasingly, the answer is: Google Maps.

Google Maps reaches a huge number of customers — more than is possible with the social networks. Google Maps has more than a billion users. And while Facebook has more active users, that company will never let you reach them all because of its algorithmic control of who sees what. In fact, an update earlier this year had a devastating effect on organic reach on Facebook, and this was combined with a huge fee increase for advertising. Facebook organic reach is now down to 1.2%. That means only 1.2% of your followers see the posts they signed up to see.

Google Maps is becoming an invaluable tool for marketing — and, increasingly, a better tool than social networks. Whenever consumers want to find a storefront business, they increasingly do so using Google Search or Google Maps. Maps content now automatically appears in search engine results, so a strong showing on Google Maps gives you both Maps and Search. And Maps results are favored when the user is physically nearby.

Google Maps is not supposed to be a social network. But after recent changes, Google Maps now does most of the things businesses need from social networks. Google this week announced a new feature that enables the public to message businesses directly through the Maps app. The feature will appear as a new “messages” button, which will be rolled out gradually to iOS and Android app users. Customers can use the messaging feature to order products, ask questions about whether something is in stock or ask other customer service-related questions. When users have a complaint, it can be handled directly and privately. Because the interaction isn’t public, spammers and haters won’t benefit from trolling.

Google is adding another social feature to Google Maps: the ability for customers and fans to “follow” business locations, which enables companies to update customers and prospective customers with offers, deals, events and other information. The feature is appearing as a new “Follow” button on businesses’ listings in Maps. Unlike Facebook, which delivers your updates to only a tiny fraction of your followers — and a slightly larger fraction if you pay — Google Maps will deliver all updates to all followers, front and center in the “For You” tab, whenever they use Google Maps. Imagine that! A “social network” that delivers all your content to all the consumers who follow you!

Businesses wishing to participate in either of these social features need to use Google’s “My Business” verification system and the app that goes with it, and of course also provide the back-end staffing and resource allocation to keep the listing responsive and up to date.
Google Maps now even facilitates social sharing — but only the sharing of information about businesses. A new group-planning feature enables users to create lists of businesses they’d like to visit, then share those lists with friends. They can then talk through the Maps app to decide which businesses they’d like to visit together.

Google Maps also offers opportunities no other social network does. For example, businesses can have the interior of their locations featured via the StreetView “Indoor Maps” program. This feature will become increasingly valuable as StreetView becomes a virtual reality experience.

Google Maps is not a social network. But with recent updates, Google Maps now gives businesses most of the beautiful benefits of social networks — without any of the ugly downsides.

source: https://www.computerworld.com/article/3321932/social-media/google-maps-is-the-new-social-network.html
  17. 2 points
Please see these articles:
https://pdfs.semanticscholar.org/430a/bee7f2653402e26902a1c6f8ad8934b03589.pdf
and, for ArcGIS:
http://www.ccis2k.org/iajit/PDF/vol.10%2Cno.3/10-4234.pdf
https://gis.e-education.psu.edu/sites/default/files/capstone/Ward_596B_20170401.pdf
  18. 2 points
Please find the articles attached.

1) Wind farms suitability location using geographical information system (GIS), based on multi-criteria decision making (MCDM) methods: The case of continental Ecuador (4.3 MB)
https://mega.nz/#!jVtWTYLa
Pass: !pBG9Fb64Dzx4zI5sRCy89_7WMOTBjBSBCmwiNa2WBhY

2) GIS-BASED METHOD FOR WIND FARM LOCATION MULTI-CRITERIA ANALYSIS (777 KB)
https://mega.nz/#!OM90VabQ
Pass: !rfcMPT9F8WF2U-IPDh5BS972Sjq4Rtbm2PhVXevg2MA

3) GIS based approach for suitable Site Selection of Wind Farms in North-East India (442 KB)
https://mega.nz/#!KMsUWAhJ
Pass: !uUmegHxayq0XNrseEZERwFGdr_dCU1CYjrhofsEYr0s

This volume is a comprehensive guide to the use of geographic information systems (GIS) for the spatial analysis of supply and demand for energy at the global and local scale. It gathers the latest research and techniques in GIS for spatial and temporal analysis of energy systems, mapping of energy from fossil fuels, optimization of renewable energy sources, optimized deployment of existing power sources, and assessment of the environmental impact of all of the above. Author Lubos Matejicek covers GIS for the assessment of a wide variety of energy sources, including fossil fuels, hydropower, wind power, solar energy, biomass energy, and nuclear power, as well as the use of batteries and accumulators. The author also utilizes case studies to illustrate advanced techniques such as multicriteria analysis, environmental modeling for prediction of energy consumption, and the use of mobile computing and multimedia tools.

Assessment of Energy Sources Using GIS - Springer (2017).pdf (34.4 MB)
https://mega.nz/#!idk0zSJQ
Pass: !86sM22doO6CVhwn1gbUnGiwa_dZn1DVfAX6zSD09Ur4
  19. 1 point
For those who were looking for a style editor like Mapbox's for Esri basemaps, here is one. This is an interactive basemap style WYSIWYG editor, readily usable with an ArcGIS Developer account.

How it works
Start by selecting an existing Esri vector basemap (e.g. World Street Map or Light Gray Canvas) and then just begin customizing the layer colors and labels from there. You can edit everything from fill and text symbols to fonts, halos, patterns, transparency, and zoom level visibility. When you are finished, the styles are saved as an item in ArcGIS Online with a unique ID. The Map Viewer or any custom application can then reference the item to display the basemap.

Design Tools
The editor makes styling easy by allowing you to style many layers at once or by allowing you to search for individual layers of interest. Here are some of the options available:
Quick Edit – select all layers and style them at once
Edit by Color – select and replace a color for one or more layers
Edit Layer Styles – search for one or more layers to style
Layer Editor – click the map or the layer list to perform detailed editing on a layer

Quick edits
Layer editor

Try it!
To start customizing a basemap, sign in to the ArcGIS for Developers website and click “New Basemap Style”. There are also new ArcGIS DevLabs for styling a vector tile basemap and displaying a styled vector basemap in your application. For more inspiration, visit this showcase of some custom styles we have created.

ArcGIS Vector Tile Style Editor
  20. 1 point
I am keen to create an extension like GIS.XL, but for LibreOffice instead of Microsoft Excel. Where should I start? http://www.gisxl.com/Features

The GIS.XL add-in provides features and functions for working with spatial data directly inside the Excel environment. The add-in includes a standard interface, familiar from other GIS programs - Map and Legend. Combine Excel (tabular) data and spatial (map) data in layers.
  21. 1 point
if they do pull it off, then they would be the third titan in the mobile market - something Microsoft Mobile failed miserably at
  22. 1 point
This is the real challenge for Huawei: how to convince their users to use this new operating system. But nothing is impossible...
  23. 1 point
Free Urban Analysis Toolbox - contains ArcPy tools for urban planners. It is under development and of course FREE. At the moment you can download it at this ADDRESS. Remember to check for the latest version before use. 2 tools are available: Land Use Entropy Index Calculator & Modified Huff Gravity Model (with added custom distance decay functions). Hope you enjoy. The developer is [email protected] (otherwise unknown).
  24. 1 point
This year at WWDC 2019, Apple unveiled a cheese grater and called it the new Mac Pro. But to see the 2019 Mac Pro once is enough to remember it for a long time. Specs - according to the Apple website, you can spend as much as $35,000+ on it !! 🤯😬 Source
  25. 1 point
The state of Alaska is beautiful and wild — no wonder it is called the “Last Frontier”. The land has more than 130 volcanoes that pose a grave threat to residents. A joint project by the National Oceanic and Atmospheric Administration (NOAA) and NASA (National Aeronautics and Space Administration) has given scientists and forecasters a platform to protect people from volcanic ash. This is one of the many case studies highlighted by the space agency on its new website, SpaceforUS, which intends to highlight how NASA has used its earth observation data to better the living conditions of people in all 50 states of America.

For the past six decades, NASA has used observations from space to understand the “Blue Planet” better. By using its fleet of space technologies, it has improved the lives of the people of America. Some 25,000 flights fly over Alaskan volcanoes, which can be even more hazardous during eruptions, when volcanoes discharge volcanic ash. A joint initiative by NOAA and NASA, the project tracks ash clouds and guides regulators and airlines.

Through this new and interactive website, SpaceforUS, NASA wishes to highlight the innumerable ways in which its earth observations have helped administrators take informed decisions in the areas of public health, disaster response and environmental protection. The site, also being termed NASA’s communication project, explores the stories behind the innovative technology, ground-breaking insights and extraordinary collaborations.

Single platform
SpaceforUS has a total of 56 stories that illustrate NASA’s science and the impact it has managed to have in all 50 states, including the District of Columbia, Puerto Rico and regions along the Atlantic, Pacific, Gulf of Mexico and the Great Lakes. On the website, readers can browse stories on animals, disasters, energy, health, land and water either by state or by topic. The website showcases the power of earth observation through state-by-state project examples — from guiding pilots around hazardous volcanic ash plumes over Alaska to supporting first responders after devastating hurricanes in North Carolina. During Hurricane Rita, NASA created high quality satellite images that identified power outages, guiding first responders for life-saving aid. On SpaceforUS, each click brings readers a story about the different ways in which people are using NASA data in their day-to-day lives.

Open Data
To all those seeking solutions to imperative global issues, NASA also provides free and open earth observation data on issues related to changing freshwater availability, food security and human health. NASA’s Applied Sciences platform provides financial assistance to projects that facilitate innovative uses of NASA Earth science data, to ensure that well-informed decisions are made to not only strengthen America’s economy but also improve the quality of life globally. In the state of Arizona, NASA Earth observations have already identified the hottest areas around the Phoenix metropolitan area, making civic planning and environmental monitoring a lot easier.

website : https://www.nasa.gov/SpaceforUS/
  26. 1 point
Hello everyone, I'm Halid, from Bosnia and Herzegovina, a geodetic engineer. I don't have much experience in the GIS area, but I'll give my best to contribute, and I hope I'll also get the info that I need :))
  27. 1 point
    Check my latest fixes (updated 15th April 2019) - http://www.mediafire.com/file/61joa3j8u4e51ii/list.txt
  28. 1 point
The picture above is not an actual picture of a black hole (it is a wallpaper 😁 ).

Early Wednesday (April 10, 2019) morning, a huge collaboration of scientists is expected to release the first images of the event horizon of a black hole, constructed from data gathered by observatories all over the globe. Combined, the telescopes created a virtual telescope as big as the Earth itself that’s powerful enough to capture enough data from the supermassive black hole at the center of our galaxy. Tomorrow, we may finally see all of that data pieced together.

Black holes, by their nature, are impossible to see with the naked eye since they are so dense that no light can escape them. Instead, any images that will be released will be the silhouette of a black hole, an outline against all of the super bright, hot gas that is thought to surround these weird celestial objects. It will be as close as we can get to a picture of a black hole’s infamous “event horizon,” the boundary of a black hole where the gravitational pull is so great that there is no escape.

The Event Horizon Telescope actually observed two black holes during one week in April 2017: Sagittarius A*, the supermassive black hole at the center of our Milky Way galaxy, and M87, the supermassive black hole at the center of the nearby galaxy Messier 87 in the Virgo cluster. Both of these objects are thought to be incredibly dense. Sagittarius A*, or SgrA*, is thought to be 4 million times more massive than our Sun and 30 times larger. But because it is so far away — a distance of about 26,000 light-years — the black hole appears to telescopes on Earth as though it is about the size of a small ball on the surface of the Moon, according to the collaboration.

To focus in on the massive but distant objects, the Event Horizon team employed telescopes in Chile, Hawaii, Arizona, the South Pole, and other locations around the globe. Each telescope measured the radiation coming from the large swaths of gas and dust that are thought to surround black holes. These clouds of gas heat up to billions of degrees, and because the material is so hot, they emit lots of radiation, which the team could then observe from Earth. All of that data was then combined in a supercomputer to make an image that looked as if it came from a single, giant telescope. “This is a picture you would have seen if you had eyes as big as the Earth and were observing in radio,” says Dimitrios Psaltis, project scientist for the Event Horizon Telescope.

Getting all of this data isn’t easy. In fact, the reason it’s taken so long to mount a project of this scale is that the telescopes gather so much information — about one petabyte, or a million gigabytes — of data each night of observing. It’s the largest amount of recording of any experiment in physics or astronomy, says Psaltis. The team had to wait for hard drive technology to evolve so that it could hold the sheer amount of data that the team was collecting. “Five years ago, you couldn’t buy enough hard drives to have a terabyte of data on a telescope,” says Psaltis.

What that enormous amount of data shows could change our understanding of black holes. These objects are so dense that it’s thought that they actually leave an imprint on the surrounding space-time, warping gravity and creating strange effects on their surroundings, which scientists are still trying to understand. A picture of a black hole could tell us more about these odd happenings at the event horizon. - The Verge

UPDATE - This is an actual black hole! At the announcement at Washington’s National Press Club:
“We now have visual evidence. We know that a black hole sits at the center of the M87 galaxy.”

How they took the image
First-ever picture of a black hole unveiled
  29. 1 point
The U.S. Air Force’s second new GPS III satellite, bringing higher-power, more accurate and harder-to-jam signals to the GPS constellation, has arrived in Florida for launch.

On March 18, Lockheed Martin shipped the Air Force’s second GPS III space vehicle (GPS III SV02) to Cape Canaveral for an expected July launch. Designed and built at Lockheed Martin’s GPS III Processing Facility near Denver, the satellite traveled from Buckley Air Force Base, Colorado, to the Cape on a massive Air Force C-17 aircraft. The Air Force nicknamed GPS III SV02 “Magellan” after the Portuguese explorer Ferdinand Magellan.

GPS III is the most powerful and resilient GPS satellite ever put on orbit. Developed with an entirely new design, for U.S. and allied forces, it will have three times greater accuracy and up to eight times improved anti-jamming capabilities over the previous GPS II satellite design block, which makes up today’s GPS constellation. GPS III will also be the first GPS satellite to broadcast the new L1C civil signal. Shared by other international global navigation satellite systems, like Galileo, the L1C signal will improve future connectivity worldwide for commercial and civilian users.

The Air Force began modernizing the GPS constellation with new technology and capabilities with the December 23, 2018 launch of its first GPS III satellite. GPS III SV01 is now receiving and responding to commands from Lockheed Martin’s Launch and Checkout Center at the company’s Denver facility.

Lockheed Martin shipped the U.S. Air Force’s second GPS III to Cape Canaveral, Florida ahead of its expected July launch. (Photo: Lockheed Martin)

“After orbit raising and antenna deployments, we switched on GPS III SV01’s powerful signal-generating navigation payload and on Jan. 8 began broadcasting signals,” said Johnathon Caldwell, Lockheed Martin’s Vice President for Navigation Systems. “Our on-orbit testing continues, but the navigation payload’s capabilities have exceeded expectations and the satellite is operating completely healthy.”

GPS III SV02 is the second of ten new GPS III satellites under contract and in full production at Lockheed Martin. GPS III SV03-08 are now in various stages of assembly and test. The Air Force declared the second GPS III “Available for Launch” in August and, in November, called GPS III SV02 up for its 2019 launch.

In September 2018, the Air Force selected Lockheed Martin for the GPS III Follow On (GPS IIIF) program, an estimated $7.2 billion opportunity to build up to 22 additional GPS IIIF satellites with additional capabilities. GPS IIIF builds off Lockheed Martin’s existing modular GPS III, which was designed to evolve with new technology and changing mission needs. On September 26, the Air Force awarded Lockheed Martin a $1.4 billion contract for support to start up the program and to contract the 11th and 12th GPS III satellites.

Once declared operational, GPS III SV01 and SV02 are expected to take their place in today’s 31-satellite-strong GPS constellation, which provides positioning, navigation and timing services to more than four billion civil, commercial and military users.

source: https://www.satellitetoday.com/launch/2019/03/26/lockheed-martin-ships-second-gps-iii-satellite/
  30. 1 point
really sad news

WorldWind team would like to inform you that starting April 5, 2019, the NASA WorldWind project will be suspended. All the WorldWind servers providing elevation and imagery will be unavailable. While you can still download the SDKs from GitHub, there will be no technical support. If you have questions and/or concerns, please feel free to email: worldw[email protected]

Update on March 21, 2019 - Answers to common questions about the suspension are available in the NASA WorldWind Project Suspension FAQ.

source : https://worldwind.arc.nasa.gov/news/2019-03-08-suspension-notice/
  31. 1 point
DARPA is seeking information on state-of-the-art technologies and methodologies for advanced mapping and surveying in support of the agency’s Subterranean (SubT) Challenge. Georeferenced data – geographic coordinates tied to a map or image – could significantly improve the speed and accuracy of warfighters in time-sensitive active combat operations and disaster-related missions in the subterranean domain. Today, the majority of underground environments are uncharted or inadequately mapped, including human-made tunnels, underground infrastructure, and natural cave networks.

Through the Request for Information, DARPA is looking for innovative technologies to collect highly accurate and reproducible ground-truth data for subterranean environments, which could potentially open up the subterranean domain without prohibitive cost and with less risk to human lives. These innovative technologies will allow for exploring and exploiting dark and dirty environments that are too dangerous for humans.

“What makes subterranean areas challenging for precision mapping and surveying – such as lack of GPS, constrained passages, dark or dust-filled air – is similar to what inhibits safe and speedy underground operations for our warfighters,” said Timothy Chung, program manager in DARPA’s Tactical Technology Office (TTO). “Building an accurate three-dimensional picture is a key enabler to rapidly and remotely exploring and searching subterranean spaces.”

DARPA is looking for commercial products, software, and services available to enable high-fidelity, 3D mapping and surveying of underground environments. Of interest are available technologies that offer high accuracy and high resolution, with the ability to provide precise and reproducible survey points without reliance on substantial infrastructure (e.g., access to global fixes underground). Additionally, relevant software should allow generated data products to be easily manipulated, annotated, and rendered into 3D mesh objects for importing into simulation and game engine environments.

DARPA may select proposers to demonstrate their technologies or methods to determine the feasibility of these capabilities for potential use in the SubT Challenge in generating and sharing 3D datasets of underground environments. Such accurately georeferenced data may aid in scoring the SubT competitors’ performance in identifying and reporting the location of artifacts placed within the course. In addition, renderings from these data may provide DARPA with additional visualization assets to showcase competition activities in real time and/or post-production.

source : https://www.darpa.mil/news-events/2019-03-07
  32. 1 point
You may find such NDVI data via a satellite imagery service called LandViewer. This tool has a vast database of satellite imagery that is publicly available and updated on a regular basis. You may set any index you need to analyze your area of interest, or create an index of your own. Besides that, there are ready-made tools for obtaining multispectral indices, flexible processing of data for an AOI, elementary clustering, a raster calculator, visualization of scenes in 3D using digital elevation models, change detection based on multi-temporal multispectral analysis, as well as ready-made animations of changes in terrain, and much more.

Here’s a brief guide to the types of satellite data that can be found on LandViewer.

High resolution satellite imagery:
SPOT 6, 7 (up to 1.5 m/pxl)
SPOT 5 (up to 2.5 m/pxl)
Pléiades 1A, 1B (up to 0.5 m/pxl)
KOMPSAT-2 (up to 1 m/pxl)
KOMPSAT-3А (up to 0.4 m/pxl)
KOMPSAT-3 (up to 0.5 m/pxl)
SuperView-1 (up to 0.5 m/pxl)
Both optical and radar data are available — with global coverage, and a short revisit period that varies from 2 to 5 days.

Low & medium resolution imagery:
Landsat 4 - archive 1982-1993
Landsat 5 - archive 1984-2013
Landsat 7 - archive since 1999
MODIS - archive since 2012
Landsat 8 - archive since 2013
Sentinel-1 - archive since 2014
Sentinel-2 - archive since 2015

An example of such imagery can be seen below:
https://eos.com/landviewer/?lat=33.39447&lng=52.68974&z=11&side=R&slider-id=LV-TEM4-MTYz-MDM3-MjAx-MzM2-NExH-TjAw&slider-b=Red,Green,Blue&slider-anti&slider-pansharpening&id=LV-TEM4-MTYz-MDM3-MjAx-MzM2-NExH-TjAw&b=NIR,Red&expression=(B5-B4)%2F(B5%2BB4)&anti&pansharpening
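The NDVI expression embedded in that link, (B5-B4)/(B5+B4) for Landsat 8, can also be computed locally; a minimal GRASS GIS sketch, with placeholder map names for the red and NIR bands:

# NDVI from Landsat 8 band 4 (red) and band 5 (NIR); float() avoids integer division
r.mapcalc "ndvi = float(l8_b5 - l8_b4) / (l8_b5 + l8_b4)"
# apply the built-in NDVI color table
r.colors map=ndvi color=ndvi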
  33. 1 point
I think it is more like asset management and navigation for indoors. This is helpful for managing personnel or assets within roofed infrastructure (i.e. a large factory, multistorey building or underground construction site, where GPS cannot be used) by utilizing IoT devices and GIS software. I heard about similar tech when I was at Esri Singapore for a few days.
  34. 1 point
see here: https://gis.stackexchange.com/questions/62624/questions-regarding-the-processing-of-mosaic-landsat-8

and here is a link that compares both methods: http://www.faqs.org/faqs/sci/Satellite-Imagery-FAQ/part3/section-14.html

some people, like me, prefer to mosaic first and then analyze; others prefer to analyze first and then mosaic. You may read up on the differences and make a decision.
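If you go the mosaic-first route, here is a minimal GDAL sketch (the scene file names are placeholders); building a virtual mosaic first keeps the original pixels untouched until you export:

# build a virtual mosaic of two adjacent scenes, then materialize it as a compressed GeoTIFF
gdalbuildvrt mosaic.vrt scene1.tif scene2.tif
gdal_translate -co COMPRESS=DEFLATE mosaic.vrt mosaic.tif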
  35. 1 point
  36. 1 point
    Hi Darksabersan, the links are dead, could you reactivate and perhaps upload to a different site like Mega.nz, please? thanks
  37. 1 point
now we can ditch Sandboxie; it seems this feature will come in build 1903, fingers crossed 😁 complete story: https://techcommunity.microsoft.com/t5/Windows-Kernel-Internals/Windows-Sandbox/ba-p/301849
  38. 1 point
Hello everyone, there is a terrain model I wasn't aware of, so I would like to share some links with you. For global projects I usually use GTOPO30 as a global DTM, but recently I discovered GMTED2010 (Global Multi-resolution Terrain Elevation Data 2010), a freely available elevation model of the world at 250m, 500m and 1km resolution.

GMTED2010 provides a new level of detail in global topographic data. Previously, the best available global DEM was GTOPO30, with a horizontal grid spacing of 30 arc-seconds. The GMTED2010 product suite contains seven new raster elevation products for each of the 30-, 15-, and 7.5-arc-second spatial resolutions and incorporates the current best available global elevation data. The new elevation products have been produced using the following aggregation methods: minimum elevation, maximum elevation, mean elevation, median elevation, standard deviation of elevation, systematic subsample, and breakline emphasis. Metadata have also been produced to identify the source and attributes of all the input elevation data used to derive the output products. Many of these products will be suitable for various regional and continental applications, such as climate modeling, continental-scale land cover mapping, extraction of drainage features for hydrologic modeling, and geometric and radiometric correction of medium and coarse resolution satellite image data.

The global aggregated vertical accuracy of GMTED2010 can be summarized in terms of the resolution and RMSE of the products with respect to a global set of control points (estimated global accuracy of 6 m RMSE) provided by NGA. At 30 arc-seconds, the GMTED2010 RMSE range is between 25 and 42 meters; at 15 arc-seconds, between 29 and 32 meters; and at 7.5 arc-seconds, between 26 and 30 meters. GMTED2010 is a major improvement in consistency and vertical accuracy over GTOPO30, which has a 66 m RMSE globally compared to the same NGA control points. In areas where new sources of higher resolution data were available, the GMTED2010 products are substantially better than the aggregated global statistics; however, large areas still exist, particularly above 60 degrees North latitude, that lack good elevation data. As new data become available, especially in areas that have poor coverage in the current model, it is hoped that new versions of GMTED2010 might be generated and thus gradually improve the global model.

Link to Document: https://pubs.er.usgs.gov/publication/ofr20111073
Link to Archive for Download
as a folder: https://edcintl.cr.usgs.gov/downloads/sciweb1/shared/topo/downloads/GMTED/
as a viewer: https://topotools.cr.usgs.gov/gmted_viewer/viewer.htm
Additionals: https://topotools.cr.usgs.gov/gmted_viewer/
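A minimal GRASS GIS import sketch, assuming you have downloaded one of the GMTED2010 rasters from the archive above as a GeoTIFF (gmted_mean30.tif is a placeholder name, not an actual product file name):

# check the georeferencing of the downloaded raster first
gdalinfo gmted_mean30.tif
# import into a lat/long (EPSG:4326) GRASS location and set the region to it
r.in.gdal input=gmted_mean30.tif output=gmted2010_mean30
g.region raster=gmted2010_mean30 -p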
  39. 1 point
maybe you can apply a statistical test to both of your indices. There is a tool in IDRISI called CORRELATE; you may try it.
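If you work in GRASS GIS rather than IDRISI, a minimal sketch of the same idea (the map names are placeholders):

# -r prints the correlation matrix (instead of the covariance matrix) for the two index maps
r.covar -r map=index_a,index_b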
  40. 1 point
do you have basic experience with any software involving these modelling/analysis steps? we can start from there. everyone will give you different scenarios based on their expertise with their own software. for me, as a GIS-er, ArcGIS with a plugin for hydro analysis like ArcHydro or HEC-GeoRAS will do the job. so tell us a little bit more detail here 😁
  41. 1 point
how about this: http://webhelp.esri.com/arcgisdesktop/9.3/body.cfm?tocVisable=1&ID=-1&TopicName=How Corridor works - it will do it in raster
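For comparison, the same idea sketched with GRASS GIS tools (the friction map and start coordinates are placeholders): a corridor surface is simply the sum of two accumulated cost surfaces.

# accumulated cost from each endpoint over a friction surface
r.cost input=friction output=cost_a start_coordinates=634000,224000
r.cost input=friction output=cost_b start_coordinates=641000,229000
# cells with a low summed cost lie on good paths between the two endpoints
r.mapcalc "corridor = cost_a + cost_b"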
  42. 1 point
Hello all, I built corridors between core forests using the Linkage Mapper (https://circuitscape.org/linkagemapper/), which is a tool that can be used in ArcGIS. But the final corridor image is a raster file with values based on the equation:

normalised least-cost corridor value = cost-weighted distance from core forest A + cost-weighted distance from core forest B - cost-weighted distance accumulated moving along the ideal path (least-cost path) connecting core forests A and B

I wanted to limit the width of corridors using some simple logic, but since the values in the corridor raster are based on the above equation, I find it hard to limit the width. Could someone explain this equation and how it is formulated? Thank you, Maya
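For reference, the equation in the post can be written compactly as

NLCC(x) = cwd_A(x) + cwd_B(x) - lcd(A, B)

where cwd_A(x) and cwd_B(x) are the cost-weighted distances from cell x to cores A and B, and lcd(A, B) is the accumulated cost along the single least-cost path between them. Cells lying exactly on that path get NLCC = 0, and values grow as cells deviate from it. So one common way to limit corridor width (this is a general reading of how such normalised corridor rasters are handled, not a statement about Linkage Mapper's own options) is to keep only cells with NLCC <= w; the cutoff w is in cost-weighted distance units rather than map units, and larger w gives wider corridors.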
  43. 1 point
How to round a raster's pixel values up/down to 2 decimal places? related - Round Raster to next higher or lower int
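One common trick, sketched here for GRASS GIS r.mapcalc (map names are placeholders): shift the decimal point, round to the nearest integer, and shift back.

# round to 2 decimals: scale by 100, round, rescale (dividing by 100.0 keeps it floating point)
r.mapcalc "out_2dec = round(in_map * 100) / 100.0"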
  44. 1 point
It's an extension of ArcGIS Online as far as I understood, just like Dashboard and Site. But publishing data to Urban probably has some tricks we need to follow.
  45. 1 point
Hi, please check out this tool - https://www.whatiswhere.com - which can be very useful in your research.

Features:
* OpenStreetMap-based search which allows you to apply more than 1 criterion at once
* Negative conditions (e.g. you could search for areas where some type of POI does not exist)
* Access to global postal code information
* EXPORT RESULTS TO CSV, which can then be uploaded to your GIS
* Re-use of search projects

Thanks, Andrei, WhatIsWhere
www.whatiswhere.com
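If your GIS is GRASS, a minimal sketch for loading such an exported CSV as a point layer; the file name and column layout are assumptions, so adjust x= and y= to whichever columns actually hold longitude and latitude:

# skip the header line; columns 2 and 3 are assumed to hold lon and lat
v.in.ascii input=whatiswhere_results.csv output=poi separator=comma skip=1 x=2 y=3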
  46. 1 point
just bought it. I don't know why, but it loads faster than the fixed version; I don't know if that is because there is no license manager involved, or maybe other factors
  47. 1 point
I need to work on spatial interpolation of observed AWS precipitation data using multivariate regression, introducing variables such as slope, aspect, etc. Is there any option in ArcGIS?
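Not ArcGIS, but for comparison: in GRASS GIS this kind of raster-based multiple regression is one module call. A minimal sketch with placeholder map names (the AWS point observations would first need to be rasterized or interpolated to a raster):

# multiple regression of precipitation against terrain covariates (placeholder names)
r.regression.multi mapx=slope,aspect,elevation mapy=precip residuals=precip_res estimates=precip_est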
  48. 1 point
a new global WebMap prototype, “OSMLanduse.org”, has been launched by the GIScience Research Group Heidelberg. The map provides worldwide landuse/landcover information on the basis of OpenStreetMap (OSM) data. This is based on our earlier work on testing the suitability of OpenStreetMap for deriving landuse and landcover (LULC) information.

LULC data is highly relevant for many research questions and practical planning activities. Up to now there exist well known data sets generated from remote sensing imagery, such as CORINE, Urban Atlas or GlobeLand30, which are available for different areas and time slots, and offer different LULC classifications. Yet it is an interesting question if, and to what degree, OpenStreetMap can complement, add to, or refine these sources. Up to now this certainly differs between world regions, and the new map helps to explore and better understand this kind of data in OSM and how it is evolving in different areas. Therefore the aim is to evaluate the overall possibility and suitability of OpenStreetMap (OSM) data for these specific purposes, identify ways for improvement, and provide all this information globally to the interested communities in an automated way.

In order to do so, the data from OpenStreetMap, stored in key-value pairs, was initially categorized similarly to classification level 2 of the CORINE Landcover classes. This category mapping and further pre-processing of OSM data will be refined in ongoing work. A first basic WebMapping application has been set up with the use of free and open-source software, including PostgreSQL/PostGIS, Geoserver and MapProxy. At the moment the website provides basic WebGIS elements, such as a legend, search function and feature info. Higher zoom levels (>7) are updated on a minutely basis.

In the near future we want to implement other useful features and explore ways to measure the quality of the data. Right now GIScience HD is working on a function which provides statistical information about the landuse/landcover in a certain area (bbox) defined by the user. In particular, the mapping between common land use classifications such as CORINE, Urban Atlas etc. and the OpenStreetMap categorization of objects is being refined, and the preprocessing for different scales improved.

site : http://osmlanduse.org/
  49. 1 point
I hope that some of these books will be useful for this topic...

Quadcopters and Drones: A Beginner's Guide to Successfully Flying and Choosing the Right Drone by Mark D Smith
http://avxhome.se/ebooks/1514708426.html

How to Build a Quadcopter Drone: Everything you need to know about building your own Quadcopter Drone incorporated with pictures as a complete step-by-step guide. by Scott Russon
http://avxhome.se/ebooks/B01COSMYMC.html

Drone Masterclass: Your Complete Guide to DJI Drones
MP4 | Video: AVC 1280x720 | Audio: AAC 44KHz 2ch | Duration: 2 Hours | 1.85 GB | Genre: eLearning | Language: English
http://avxhome.se/ebooks/naag9136.html

John Baichtal, "Building Your Own Drones: A Beginners' Guide to Drones, UAVs, and ROVs"
http://avxhome.se/ebooks/078975598X.html

Drones and Unmanned Aerial Systems: Legal and Social Implications for Security and Surveillance
http://avxhome.se/ebooks/Politics_Sociology/Drones-Unmanned-Aerial-Systems-Implications.html

Drones For Benefits: Guide about different types of drones with details by Nauman Ashraf
http://avxhome.se/ebooks/B01BPFUXTQ.html

Drones The Complete Manual 1st Edition
http://avxhome.se/ebooks/DronesTheCompleteManual1stEdition.html

Drones: Learn Aerial Photography and Videography Basics
http://avxhome.se/ebooks/naag9492.html

Aerial Photography and Videography Using Drones - Learn by Video
Duration: Over 1 hour | Video: h264, yuv420p, 1920x1080 30fps | Audio: aac, 44100 Hz, 2ch | 1.04 GB | Genre: eLearning | Language: English
http://avxhome.se/ebooks/c2u2016555.html

Drones: Personal Guide to Drones - Camera, Airframe, Radio & Power by Harry Jones
http://avxhome.se/ebooks/1523710810.html

Drones: Personal Guide to Drones - Camera, Airframe, Radio & Power by Harry Jones
http://avxhome.se/ebooks/engeneering_technology/electronics/3829239.html

Aerial Photography and Videography Using Drones by Eric Cheng
http://avxhome.se/ebooks/0134122771.html

Skillshare - Introduction to Aerial Videography: Creative Direction for Drone Filming
http://avxhome.se/ebooks/Photo_related/IntroductionAerialVideography.html

Grégoire Chamayou, "A Theory of the Drone"
http://avxhome.se/ebooks/history_military/1595589759.html

The Editors of Make, "DIY Drone and Quadcopter Projects: A Collection of Drone-Based Essays, Tutorials, and Projects"
http://avxhome.se/ebooks/1680451294.html

Drone Building Document by Bakanou Dragon
http://avxhome.se/ebooks/B01C69WOCO.html

Make: Getting Started with Drones: Build and Customize Your Own Quadcopter by Terry Kilby
http://avxhome.se/ebooks/engeneering_technology/electronics/3708860.html

The Drones Book 2nd Edition
http://avxhome.se/ebooks/TheDronesBook2ndEdition.html

UAV Drone Photos to Google Street Contributor
http://avxhome.se/ebooks/naag8234.html

Building Multicopter Video Drones by Ty Audronis
http://avxhome.se/ebooks/engeneering_technology/electronics/34277063427706.html

By 2026, Air Traffic Control for Drone Traffic in the USA as Drones Surpasses One Billion: Powered by Qmax-17.0 M. Lawrence Think Tank Solution Cortex by M. LAWRENCE
http://avxhome.se/ebooks/B01DJB80UI.html

Getting Started with Hobby Quadcopters and Drones: Learn about, buy and fly these amazing aerial vehicles by Craig S Issod
http://avxhome.se/ebooks/engeneering_technology/electronics/3763899.html

Drones: Step by step to build and fly a racing quadcopter.
English | 2015 | mp4 | H264 1280x720 | AAC 2 ch | 5 hrs | pdf | 3.1 GB
http://avxhome.se/ebooks/personality/Dronestepytep.html

Udemy - UAV Drones: Precision Agriculture
http://avxhome.se/ebooks/eLearning/uav-drones-precision-agriculture.html

John Glover, "Drone University"
http://avxhome.se/ebooks/0692316035.html

Cinematic Drone Video Post-Production
.MP4, AVC, 1000 kbps, 1280x720 | English, AAC, 64 kbps, 2 Ch | 1.4 hours | 1.29 GB | Instructor: Charles Yeager
http://avxhome.se/ebooks/cinematic-drone-video-post-production.html

Ultimate guide to starting your own aerial filming business
MP4 | Video: AVC 1280x720 | Audio: AAC 44KHz 2ch | Duration: 9 Hours | 5.22 GB | Genre: eLearning | Language: English
http://avxhome.se/ebooks/eLearning/naag7993.html
  50. 1 point
    Spectral Python (SPy) is a pure Python module for processing hyperspectral image data. SPy has functions for reading, displaying, manipulating, and classifying hyperspectral imagery. SPy can be used interactively from the Python command prompt or via Python scripts. SPy is free, open source software distributed under the GNU General Public License. To see some examples of how SPy can be used, you may want to jump straight to the documentation sections on Displaying Data or Spectral Algorithms. You can download SPy from the SPy Project Page hosted by Sourceforge.net. Install SPy http://spectralpython.sourceforge.net/installation.html User Guide http://spectralpython.sourceforge.net/user_guide.html