All Activity

  1. Yesterday
  2. badar

  3. SpatialBand

  4. Last week
  5. Sorry for no update, here is the latest!

    Check my latest fixes (updated 24th June 2019) - http://www.mediafire.com/file/61joa3j8u4e51ii/list.txt

  6. This is an interesting topic from a not-so-old webpage. I was searching for use cases of blockchain in a geospatial context and found this. The context is still challenging, but very noteworthy.

    What is a blockchain and how is it relevant for geospatial applications? (By Jonas Ellehauge, awesome map tools, Norway)

    A blockchain is an immutable trustless registry of entries, hosted on an open distributed network of computers (called nodes). It is potentially safer and cheaper than traditional centralised databases, is resilient to attacks, enhances transparency and accountability, and puts people in control of their own data. Blockchain technology is already being used in some geospatial applications, as explained here. As an immutable registry for transactions of digital tokens, blockchain is suitable for geospatial applications involving data that is sensitive or a public good, autonomous devices and smart contracts.

    Use Cases

    The use cases are discussed further below. I have given a few short talks about this topic at various conferences, most recently at the international FOSS4G conference in Bonn, Germany, 2016.

    Public-good Data

    Open Data is Still Centralised Data

    Over the past two decades, I have seen how 'public-good' geospatial data has generally become much easier to get hold of, having originally been very inaccessible to most people. Gradually, the software to display and process the data became cheaper or even free, but the data itself – data that people had already paid for through their taxes – remained inaccessible. Some national mapping institutions and cadastres began distributing the data via the internet, although mostly with a price tag. Only in recent years have a few countries in Europe made public map data freely accessible. In the meantime, projects like OpenStreetMap have emerged in order to meet people's need for open data. It is hardly a surprise, then, that a myriad of new apps, mock-ups and business cases emerge in a region shortly after data is made available to the public there.

    Truly Public Open Data

    One of the reasons that this data has remained inaccessible for so long is that it is collected and distributed through a centralised organisation. A small group of people manage enormous repositories of geospatial data and can restrict or grant access to it. As I see it, this is where blockchain and related technologies like IPFS can enable people to build systems where the data is inherently public, no one controls it, anyone can access it, and anyone can review the full history of contributions to the data. Would it be free of charge to use data from such a system? Who would pay for it? I guess time will tell which business model is the most sustainable in that respect. OpenStreetMap is free to use, it is immensely popular and yet people gladly contribute to it – so who pays the cost for OSM? Bear in mind that there's no such thing as 'free data'. For example, the 'free' open data in Denmark today is paid for through taxes. So, even if it would cost a little to use the blockchain-based data, that wouldn't be so different from now – just that no one would be able to restrict access to the data, plus the open nature of competing nodes and contributors will minimise the costs.

    Autonomous Devices & Apps

    Uber and Airbnb are examples of consumer applications that rely on geospatial data and processing. They represent a centralised approach where the middleman owns and controls the data and charges a significant fee for connecting clients and providers with each other. If such apps were replaced by distributed peer-to-peer systems, they could be cheaper and give their users full control of their data. There is already such an alternative to Uber called Arcade.City. A peer-to-peer market app like OpenBazaar may also benefit from geospatial components with regard to e.g. search and logistics. Such autonomous apps may currently have to rely on third parties for their geospatial components – e.g. Google Maps, Mapbox, OpenStreetMap, etc. With access to truly publicly distributed data as described above, such apps would be even more reliable and cheaper to run.

    An autonomous device such as a drone or a self-driving car inherently runs an autonomous application, so these two concepts are heavily intertwined. There's no doubt that self-navigating cars and drones will be a growing market in the near future. Uber and Tesla have big ambitions regarding cars, drones are being designed for delivery of consumer products (Amazon), and drone-based emergency response (drone defibrillator) and imaging (automatic selfie drone 'Lily') applications are emerging. Again, distributed peer-to-peer apps could cut out the middleman and reliance on third parties for their navigation and other geospatial components.

    Land Ownership

    What is Property?

    After some years in the GIS software industry, I realised that a very large part of my work revolved around cadastres/parcels and other administrative borders plus technical base maps featuring roads, buildings, etc. In view of my background in physical geography I thought that was pretty boring stuff and I dreamt about creating maps and applications that involved temperatures, wind, currents, salinity, terrain models, etc., because it felt more 'real'. I gradually realised that something about administrative data was nagging me – as if it didn't actually represent reality. Lately, I have taken an interest in philosophy about human interaction, voluntary association and self-ownership. It turns out that property is a moral, philosophical concept of assets acquired through voluntary transactions or homesteading. This perspective stretches at least as far back as John Locke in the 17th century. Such justly acquired property is reality, whereas law, governance services and computer code are systems that attempt to model reality. When such systems don't fit reality, the system is wrong and should be dismissed, possibly adjusted or replaced.

    Land Ownership

    For the vast majority of people in many developing countries, there is no mapping of parcels or proof of ownership available to the actual landowners. Christiaan Lemmen, an expert on cadastres, has experience from field work to map parcels in developing countries such as Nigeria, Liberia, etc., where corruption can be a big challenge within land administration. In his experience, however, people mostly agree on who owns what in their local communities. These people often have a need for proof of identity and proof of ownership for their justly acquired land in order to generate wealth, invest in their future and prevent fraud – while they often face problems with inefficient, expensive or corrupt government services. Ideally, we could build inexpensive, reliable and easy-to-use blockchain-based systems that will enable people to map and register their land together with their neighbours – without involving any government officials, lawyers or other middlemen.

    Geodesic Grids

    It has been suggested to use geodesic grids of discrete cells to register land ownership on a blockchain. Such cells can be shaped, e.g. as squares, triangles, pentagons, hexagons, etc., and each cell has a unique identifier. In a traditional cadastral system, parcels are represented with flexible polygons, which allows users to register any possible shape of a parcel. Although a grid of discrete cells doesn't allow such flexible polygons, it has an advantage in this case: each digital token on the blockchain (let's call it a 'Landcoin') can represent one unique cell in the grid. Hence, whoever owns a particular Landcoin owns the corresponding piece of land. Owning such a Landcoin means possessing the private encryption key that controls it – which is how other cryptocurrencies work. In order to represent complex and high-resolution geometries, it is preferable to use a grid which is infinitely sub-divisible so that ever-smaller triangles, hexagons or squares, etc., can be tied together to represent any piece of land. A digital token can also be infinitely sub-divisible. For comparison, the smallest unit of a Bitcoin is currently a 100-millionth – aka a 'Satoshi'. If needed, the core software could be upgraded to support even smaller units. (A toy sketch of such sub-divisible cell identifiers follows below this post.)

    What is a Blockchain?

    A blockchain is an immutable trustless registry of entries, hosted on an open distributed network of computers (called nodes). It is potentially safer and cheaper than traditional centralised databases, is resilient to attacks, enhances transparency and accountability and puts people in control of their own data.

    Safer – because no one controls all the data (known as root privilege in existing databases). Each entry has its own pair of public and private encryption keys and only the holder of the private key can unlock the entry and transfer it to someone else.

    Immutable – because each block of entries (added every 1-10 minutes) carries a unique hash 'fingerprint' of the previous block. Hence, older blocks cannot be tampered with.

    Cheaper – because anyone can set up a node and get paid in digital tokens (e.g. Bitcoin or Ether) for hosting a blockchain. This ensures that competition between nodes will minimise the cost of hosting it. It also saves the costs of massive security layers that otherwise apply to servers with sensitive data – this is because of the no-root-privilege security model and, with old entries being immutable, there's little need to protect them.

    Resilient – because there is no single point of failure, there's practically nothing to attack. In order to compromise a blockchain, you'd have to hack each individual user one by one in order to get hold of their private encryption keys, which give access to that user's data only. Another option is to run over 50% of the nodes, which is virtually impossible and economically impractical.

    Transparency and accountability – the fact that existing entries cannot be tampered with makes a blockchain a transparent source of truth and history for your application. The public nature of it makes it easy to hold people accountable for their activities.

    Control – the immutable and no-root-privilege character puts each user in full control of his/her own data using the private encryption keys. This leads to real peer-to-peer interaction without any middleman and without an administrator that can deny users access to their data.

    Trustless – because each user fully controls his/her own data, users can safely interact without knowing or trusting each other and without any trusted third parties.

    Smart Contracts and DAPPs

    A blockchain can be more than a passive registry of entries or transactions. The original Bitcoin blockchain supports limited scripting allowing for programmable transactions and smart contracts – e.g. where specified criteria must be fulfilled leading to transactions automatically taking place. Possibly the most popular alternative to Bitcoin is Ethereum, which is a multi-purpose blockchain with a so-called 'Turing complete' programming interface, which allows developers to create virtually any imaginable application on this platform. Such applications are referred to as decentralised autonomous applications (DAPPs) and are virtually impossible for third parties to stop or censor. [1]

    IPFS

    IPFS is a distributed file system and web protocol, which can complement or even replace HTTP. Instead of referring to files by their location on a host or IP address, it refers to files by their content. This means that when requested, IPFS will return the content from the nearest possible or even multiple computers rather than from a central server. That could be on the computer next to you, on your local network or somewhere in the neighbourhood.

    Jonas Ellehauge is an expert on geospatial software, GIS and web development, enthusiastic about open source, Linux and UI/UX. Ellehauge is passionate about science, philosophy, entrepreneurship, economy and communication. His background in physical geography provides extensive knowledge of spatial analyses and spatial problem solving.
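
    As a purely illustrative footnote to the 'Geodesic Grids' section above: the sketch below uses quadtree-style square cells as a stand-in for a sub-divisible geodesic grid, and a plain Python dictionary as a stand-in for the on-chain 'Landcoin' registry. The cell-id scheme, function names and keys are invented for the example and are not from the article.

# Toy illustration only: quadtree-style cell IDs as a stand-in for a
# sub-divisible geodesic grid, and a dict as a stand-in for a blockchain
# registry of "Landcoin" ownership (cell id -> owner public key).

def cell_id(lon, lat, depth):
    """Return a hierarchical cell id; each extra digit halves the cell size."""
    west, south, east, north = -180.0, -90.0, 180.0, 90.0
    digits = []
    for _ in range(depth):
        mid_lon = (west + east) / 2
        mid_lat = (south + north) / 2
        quadrant = 0
        if lon >= mid_lon:
            quadrant += 1
            west = mid_lon
        else:
            east = mid_lon
        if lat >= mid_lat:
            quadrant += 2
            south = mid_lat
        else:
            north = mid_lat
        digits.append(str(quadrant))
    return "".join(digits)

# "Registry": who controls which cell. A real system would record transfers
# of the corresponding token on a blockchain instead of in a dict.
registry = {}

def register(cell, owner_pubkey):
    if cell in registry:
        raise ValueError("cell already owned: " + cell)
    registry[cell] = owner_pubkey

def transfer(cell, from_pubkey, to_pubkey):
    if registry.get(cell) != from_pubkey:
        raise ValueError("only the current owner can transfer " + cell)
    registry[cell] = to_pubkey

if __name__ == "__main__":
    # A parcel is approximated by tying several small cells together.
    parcel = [cell_id(90.43, 24.84, 16), cell_id(90.44, 24.84, 16)]
    for c in parcel:
        register(c, "alice_pubkey")
    transfer(parcel[0], "alice_pubkey", "bob_pubkey")
    print(registry)

    In a real design each cell token would be controlled by a private key, exactly as the article describes for other cryptocurrencies; the dictionary here only mimics the ownership mapping.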
  7. Earlier
  8. Please reactivate my account. I have not been around for a long time, but I would appreciate being let back in. Thank you.
  9. Wow, this is great news! We always had bottlenecking problems even with the "best" configuration and schema for a given use case; maybe this is one step closer to minimizing that.
  10. evrglo

  11. Please reactivate my account. I have some work related to GIS. Thanks.
  12. From their official site: NOAA’s flagship weather model — the Global Forecast System (GFS) — is undergoing a significant upgrade today to include a new dynamical core called the Finite-Volume Cubed-Sphere (FV3). This upgrade will drive global numerical weather prediction into the future with improved forecasts of severe weather, winter storms, and tropical cyclone intensity and track.

    NOAA research scientists originally developed the FV3 as a tool to predict long-range weather patterns at time frames ranging from multiple decades to interannual, seasonal and subseasonal. In recent years, creators of the FV3 at NOAA’s Geophysical Fluid Dynamics Laboratory expanded it to also become the engine for NOAA’s next-generation operational GFS.

    “In the past few years, NOAA has made several significant technological leaps into the future – from new satellites in orbit to this latest weather model upgrade,” said Secretary of Commerce Wilbur Ross. “Through the use of this advanced model, the dedicated scientists, forecasters, and staff at NOAA will remain ever-alert for any threat to American lives and property.”

    The FV3-based GFS brings together the superior dynamics of global climate modeling with the day-to-day reliability and speed of operational numerical weather prediction. Additional enhancements to the science that produces rain and snow in the GFS also contribute to the improved forecasting capability of this upgrade.

    “The significant enhancements to the GFS, along with creating NOAA’s new Earth Prediction Innovation Center, are positioning the U.S. to reclaim international leadership in the global earth-system modeling community,” said Neil Jacobs, Ph.D., acting NOAA administrator.

    The GFS upgrade underwent rigorous testing led by NOAA’s National Centers for Environmental Prediction (NCEP) Environmental Modeling Center and NCEP Central Operations that included more than 100 scientists, modelers, programmers and technicians from around the country. With real-time evaluations for a year alongside the previous version of the GFS, NOAA carefully documented the strengths of each. When tested against historic weather dating back an additional three years, the upgraded FV3-based GFS performed better across a wide range of weather phenomena. The scientific and performance evaluation shows that the upgraded FV3-based GFS provides results equal to or better than the current global model in many measures. This upgrade establishes the foundation for further advancements in the future as we improve observation quality control, data assimilation, and the model physics.

    “We are excited about the advancements enabled by the new GFS dynamical core and its prospects for the future,” said Louis W. Uccellini, Ph.D., director, NOAA’s National Weather Service. “Switching out the dynamical core will have significant impact on our ability to make more accurate 1-2 day forecasts and increase the level of accuracy for our 3-7 day forecasts. However, our job doesn't end there — we also have to improve the physics as well as the data assimilation system used to ingest data and initialize the model.”

    Uccellini explained that NOAA’s work with the National Center for Atmospheric Research to build a common infrastructure between the operational and research communities will help advance the FV3-based GFS beyond changing the core. “This new dynamical core and our work with NCAR will accelerate the transition of research advances into operations to produce even more accurate forecasts in the future,” added Uccellini.

    Operating a new and sophisticated weather model requires robust computing capacity. In January 2018, NOAA augmented its weather and climate supercomputing systems to increase performance by nearly 50 percent and added 60 percent more storage capacity to collect and process weather, water and climate observations. This increased capacity enabled the parallel testing of the FV3-based GFS throughout the year. The retiring version of the model will no longer be used in operations but will continue to run in parallel through September 2019 to provide model users with data access and additional time to compare performance.

    source: https://www.noaa.gov/media-release/noaa-upgrades-us-global-weather-forecast-model
  13. Meteorology revolves as much around good weather models as it does good weather data, and the core US model is about to receive a long-overdue refresh. NOAA has upgraded its Global Forecast System with a long-in-testing dynamical core, the Finite-Volume Cubed-Sphere (aka FV3). It's the first time the model has been replaced in roughly 40 years, and it promises higher-resolution forecasts, lower computational overhead and more realistic water vapor physics.

    The results promise to be tangible. NOAA believes there will be a "significant impact" on one- and two-day forecasts, and improved overall accuracy for forecasts up to a week ahead. It also hopes for further improvements to both the physics and the system that ingests data and invokes the weather model. This is on top of previous upgrades to NOAA supercomputers that should provide more capacity. The old model will still hang around through September, although not as a backup -- it's strictly there for data access and performance comparisons. FV3 had been chosen years ago to replace the old core, and it's been in parallel testing for over a year.

    Not everyone is completely satisfied with the new model. Ars Technica pointed out that the weather community is concerned about surface temperatures that have skewed low, for instance. It should be more accurate overall, though, and that could be crucial for tracking hurricanes, blizzards and other serious weather patterns that can evolve by the hour.

    source: https://www.engadget.com/2019/06/12/us-weather-forecast-model-update
  14. Consider adding these functionalities: changing the inline font to monospace, or adding GitHub Gist support, would help a lot, since it would let feedback point directly at the code.
  15. ivn888

  16. Hello everyone! This is a quick Python script I wrote to batch-download and preprocess Sentinel-1 images for a given time window. Sentinel images have very good resolution, which also means they are huge in size. Since I didn't want to waste all day preparing them for my research, I decided to write this code, which runs all night and gives a nice image set the following morning.

import os
import datetime
import gc
import glob

import snappy
from snappy import ProductIO
from sentinelsat import SentinelAPI, geojson_to_wkt, read_geojson


class sentinel1_download_preprocess():
    def __init__(self, input_dir, date_1, date_2, query_style, footprint,
                 lat=24.84, lon=90.43, download=False):
        self.input_dir = input_dir
        self.date_start = datetime.datetime.strptime(date_1, "%d%b%Y")
        self.date_end = datetime.datetime.strptime(date_2, "%d%b%Y")
        self.query_style = query_style
        self.footprint = geojson_to_wkt(read_geojson(footprint))
        self.lat = lat
        self.lon = lon
        self.download = download

        # configurations
        self.api = SentinelAPI('scihub_username', 'scihub_passwd',
                               'https://scihub.copernicus.eu/dhus')
        self.producttype = 'GRD'                # SLC, GRD, OCN
        self.orbitdirection = 'ASCENDING'       # ASCENDING, DESCENDING
        self.sensoroperationalmode = 'IW'       # SM, IW, EW, WV

    def sentinel1_download(self):
        global download_candidate
        if self.query_style == 'coordinate':
            download_candidate = self.api.query('POINT({0} {1})'.format(self.lon, self.lat),
                                                date=(self.date_start, self.date_end),
                                                producttype=self.producttype,
                                                orbitdirection=self.orbitdirection,
                                                sensoroperationalmode=self.sensoroperationalmode)
        elif self.query_style == 'footprint':
            download_candidate = self.api.query(self.footprint,
                                                date=(self.date_start, self.date_end),
                                                producttype=self.producttype,
                                                orbitdirection=self.orbitdirection,
                                                sensoroperationalmode=self.sensoroperationalmode)
        else:
            print("Define query attribute")

        title_found_sum = 0
        for key, value in download_candidate.items():
            for k, v in value.items():
                if k == 'title':
                    title_info = v
                    title_found_sum += 1
                elif k == 'size':
                    print("title: " + title_info + " | " + v)
        print("Total found " + str(title_found_sum) + " title of "
              + str(self.api.get_products_size(download_candidate)) + " GB")

        os.chdir(self.input_dir)
        if self.download:
            if glob.glob(self.input_dir + "*.zip") not in [value for value in download_candidate.items()]:
                self.api.download_all(download_candidate)
                print("Nothing to download")
        else:
            print("Escaping download")

        # proceed to processing after download is complete
        self.sentinel1_preprocess()

    def sentinel1_preprocess(self):
        # get snappy operators
        snappy.GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()
        # HashMap key-value pairs
        HashMap = snappy.jpy.get_type('java.util.HashMap')

        for folder in glob.glob(self.input_dir + "\\*"):
            gc.enable()
            if folder.endswith(".zip"):
                timestamp = folder.split("_")[5]
                sentinel_image = ProductIO.readProduct(folder)
                if self.date_start <= datetime.datetime.strptime(timestamp[:8], "%Y%m%d") <= self.date_end:
                    # add orbit file
                    self.sentinel1_preprocess_orbit_file(timestamp, sentinel_image, HashMap)
                    # remove border noise
                    self.sentinel1_preprocess_border_noise(timestamp, HashMap)
                    # remove thermal noise
                    self.sentinel1_preprocess_thermal_noise_removal(timestamp, HashMap)
                    # calibrate image to output to Sigma and dB
                    self.sentinel1_preprocess_calibration(timestamp, HashMap)
                    # TOPSAR Deburst for SLC images
                    if self.producttype == 'SLC':
                        self.sentinel1_preprocess_topsar_deburst_SLC(timestamp, HashMap)
                    # multilook
                    self.sentinel1_preprocess_multilook(timestamp, HashMap)
                    # subset using a WKT of the study area
                    self.sentinel1_preprocess_subset(timestamp, HashMap)
                    # finally terrain correction, can use local data but went for the default
                    self.sentinel1_preprocess_terrain_correction(timestamp, HashMap)
                    # break  # try this if you want to check the result one by one

    def sentinel1_preprocess_orbit_file(self, timestamp, sentinel_image, HashMap):
        start_time_processing = datetime.datetime.now()
        orb = self.input_dir + "\\orb_" + timestamp
        if not os.path.isfile(orb + ".dim"):
            parameters = HashMap()
            orbit_param = snappy.GPF.createProduct("Apply-Orbit-File", parameters, sentinel_image)
            ProductIO.writeProduct(orbit_param, orb, 'BEAM-DIMAP')  # BEAM-DIMAP, GeoTIFF-BigTiff
            print("orbit file added: " + orb + " | took: "
                  + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + orb)

    def sentinel1_preprocess_border_noise(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        border = self.input_dir + "\\bordr_" + timestamp
        if not os.path.isfile(border + ".dim"):
            parameters = HashMap()
            border_param = snappy.GPF.createProduct("Remove-GRD-Border-Noise", parameters,
                                                    ProductIO.readProduct(self.input_dir + "\\orb_" + timestamp + ".dim"))
            ProductIO.writeProduct(border_param, border, 'BEAM-DIMAP')
            print("border noise removed: " + border + " | took: "
                  + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + border)

    def sentinel1_preprocess_thermal_noise_removal(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        thrm = self.input_dir + "\\thrm_" + timestamp
        if not os.path.isfile(thrm + ".dim"):
            parameters = HashMap()
            thrm_param = snappy.GPF.createProduct("ThermalNoiseRemoval", parameters,
                                                  ProductIO.readProduct(self.input_dir + "\\bordr_" + timestamp + ".dim"))
            ProductIO.writeProduct(thrm_param, thrm, 'BEAM-DIMAP')
            print("thermal noise removed: " + thrm + " | took: "
                  + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + thrm)

    def sentinel1_preprocess_calibration(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        calib = self.input_dir + "\\calib_" + timestamp
        if not os.path.isfile(calib + ".dim"):
            parameters = HashMap()
            parameters.put('outputSigmaBand', True)
            parameters.put('outputImageScaleInDb', False)
            calib_param = snappy.GPF.createProduct("Calibration", parameters,
                                                   ProductIO.readProduct(self.input_dir + "\\thrm_" + timestamp + ".dim"))
            ProductIO.writeProduct(calib_param, calib, 'BEAM-DIMAP')
            print("calibration complete: " + calib + " | took: "
                  + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + calib)

    def sentinel1_preprocess_topsar_deburst_SLC(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        deburst = self.input_dir + "\\dburs_" + timestamp
        if not os.path.isfile(deburst + ".dim"):
            parameters = HashMap()
            parameters.put('outputSigmaBand', True)
            parameters.put('outputImageScaleInDb', False)
            deburst_param = snappy.GPF.createProduct("TOPSAR-Deburst", parameters,
                                                     ProductIO.readProduct(self.input_dir + "\\calib_" + timestamp + ".dim"))
            ProductIO.writeProduct(deburst_param, deburst, 'BEAM-DIMAP')
            print("deburst complete: " + deburst + " | took: "
                  + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + deburst)

    def sentinel1_preprocess_multilook(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        multi = self.input_dir + "\\multi_" + timestamp
        if not os.path.isfile(multi + ".dim"):
            parameters = HashMap()
            parameters.put('outputSigmaBand', True)
            parameters.put('outputImageScaleInDb', False)
            multi_param = snappy.GPF.createProduct("Multilook", parameters,
                                                   ProductIO.readProduct(self.input_dir + "\\calib_" + timestamp + ".dim"))
            ProductIO.writeProduct(multi_param, multi, 'BEAM-DIMAP')
            print("multilook complete: " + multi + " | took: "
                  + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + multi)

    def sentinel1_preprocess_subset(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        subset = self.input_dir + "\\subset_" + timestamp
        if not os.path.isfile(subset + ".dim"):
            WKTReader = snappy.jpy.get_type('com.vividsolutions.jts.io.WKTReader')
            # converting a shapefile to GeoJSON and WKT is easy with any free online tool
            wkt = "POLYGON((92.330290184197 20.5906091141114,89.1246637610338 21.6316051481971," \
                  "89.0330319081811 21.7802436586492,88.0086282580443 24.6678836192818,88.0857830091018 " \
                  "25.9156771178278,88.1771488779853 26.1480664053835,88.3759125970998 26.5942658997298," \
                  "88.3876586919721 26.6120432770312,88.4105534167129 26.6345128356038,89.6787084683935 " \
                  "26.2383305017275,92.348481691233 25.073636976939,92.4252199249342 25.0296592837972," \
                  "92.487261172615 24.9472465376954,92.4967290851295 24.902213855393,92.6799861774377 " \
                  "21.2972058618174,92.6799346581579 21.2853347419811,92.330290184197 20.5906091141114))"
            geom = WKTReader().read(wkt)
            parameters = HashMap()
            parameters.put('geoRegion', geom)
            subset_param = snappy.GPF.createProduct("Subset", parameters,
                                                    ProductIO.readProduct(self.input_dir + "\\multi_" + timestamp + ".dim"))
            ProductIO.writeProduct(subset_param, subset, 'BEAM-DIMAP')
            print("subset complete: " + subset + " | took: "
                  + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + subset)

    def sentinel1_preprocess_terrain_correction(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        terr = self.input_dir + "\\terr_" + timestamp
        if not os.path.isfile(terr + ".dim"):
            parameters = HashMap()
            # parameters.put('demResamplingMethod', 'NEAREST_NEIGHBOUR')
            # parameters.put('imgResamplingMethod', 'NEAREST_NEIGHBOUR')
            # parameters.put('pixelSpacingInMeter', 10.0)
            terr_param = snappy.GPF.createProduct("Terrain-Correction", parameters,
                                                  ProductIO.readProduct(self.input_dir + "\\subset_" + timestamp + ".dim"))
            ProductIO.writeProduct(terr_param, terr, 'BEAM-DIMAP')
            print("terrain corrected: " + terr + " | took: "
                  + str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + terr)


input_dir = r"path_to_project_folder\Sentinel_1"
start_date = '01Mar2019'
end_date = '10Mar2019'
query_style = 'footprint'  # 'footprint' to use a GEOJSON, 'coordinate' to use a lat-lon
footprint = r'path_to_project_folder\bd_bbox.geojson'
lat = 26.23
lon = 88.56

# proceed to download by setting 'True', default is 'False'
sar = sentinel1_download_preprocess(input_dir, start_date, end_date, query_style, footprint, lat, lon, True)
sar.sentinel1_download()

    The geojson file is created from a very generalised shapefile of Bangladesh using ArcGIS Pro. There are a lot of free online tools to convert a shapefile to GeoJSON and WKT (a small Python alternative follows below this post). Notice that the code will skip the download if the files are already there but will keep the processing on, so comment out line 197 when necessary. Updated the code almost completely.

    The processing steps used here for raw Sentinel-1 files are not the most generic ones; note that there is no single authoritative way to do this. Since different research requires different steps to prepare the raw data, you will need to follow your own. Also published at clubgis.
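
    As an alternative to the online converters mentioned above, here is a small hedged sketch that does the shapefile-to-GeoJSON/WKT conversion locally with geopandas and shapely (neither is used in the original script); the file names are placeholders.

# Hypothetical helper (not part of the original script): convert a generalised
# shapefile to GeoJSON for sentinelsat and to a WKT string for the snappy Subset step.
import geopandas as gpd

aoi = gpd.read_file("bd_boundary.shp")            # placeholder input shapefile
aoi = aoi.to_crs(epsg=4326)                       # Sentinel footprints are in WGS84
aoi.to_file("bd_bbox.geojson", driver="GeoJSON")  # usable as the 'footprint' argument

# A single WKT string for the 'geoRegion' parameter of the Subset operator;
# unary_union merges all features into one geometry first.
wkt = aoi.geometry.unary_union.wkt
print(wkt[:80], "...")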
  17. Free Urban Analysis Toolbox contains ArcPy tools for urban planners. It is still under development and, of course, FREE. At the moment you can download it at this ADDRESS. Remember to check for the latest version before use. Two tools are available: a Land Use Entropy Index Calculator and a Modified Huff Gravity Model (with custom distance decay functions added). Hope you enjoy it. The developer is [email protected], who is otherwise unknown. (A toy illustration of both ideas follows below this post.)
  18. Where can I find some GIS & RS jobs working from home? Thanks
  19. Nice share, bro!! Love the improvements to PostgreSQL.
  20. Forgot I had this forum; will be back. Please resurrect this lurker ✌️
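
    For readers unfamiliar with the two tools named above, here is a minimal, ArcPy-free Python sketch of the underlying ideas: a normalised Shannon entropy index for land-use mix (a common formulation) and a Huff gravity model with a pluggable distance-decay function. This is an illustration under my own assumptions, not the toolbox's actual implementation; all numbers are made up.

# Toy land-use entropy and Huff gravity model. Illustration only; this is not
# the Urban Analysis Toolbox code.
import math

def land_use_entropy(proportions):
    """Normalised Shannon entropy of land-use shares (0 = single use, 1 = even mix)."""
    k = len(proportions)
    if k < 2:
        return 0.0
    return -sum(p * math.log(p) for p in proportions if p > 0) / math.log(k)

def power_decay(distance_km, beta=2.0):
    """One possible distance-decay function; the toolbox allows custom ones."""
    return distance_km ** -beta

def huff_probabilities(attractiveness, distances_km, decay=power_decay):
    """Probability that a consumer at one origin chooses each destination."""
    utilities = [a * decay(d) for a, d in zip(attractiveness, distances_km)]
    total = sum(utilities)
    return [u / total for u in utilities]

if __name__ == "__main__":
    # Hypothetical zone with four land-use shares, and three shopping centres
    # described by floor area (attractiveness) and distance from the origin zone.
    print("entropy:", round(land_use_entropy([0.4, 0.3, 0.2, 0.1]), 3))
    for i, p in enumerate(huff_probabilities([50_000, 20_000, 35_000], [3.0, 1.5, 4.0])):
        print(f"centre {i}: {p:.2%}")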
  21. andemik

  22. Multifunction casing: you can run 3D games and grate cheese for your hamburger. Excellent thinking from Apple, as always. LOL
  23. We are all already familiar with GPS navigation outdoors and what wonders it does not only for our everyday life, but also for business operations. Outdoor maps, allowing for navigation via car or by foot, have long helped mankind to find even the most remote and hidden places. Increased levels of efficiency, unprecedented levels of control over operational processes, route planning, monitoring of deliveries, safety and security regulations and much more have been made possible. Some places are, however, harder to reach and navigate than others – for instance, big indoor areas such as universities, hospitals, airports, convention centers or factories. Luckily, that struggle is about to become a thing of the past. So what’s the solution for navigating through and managing complex indoor buildings?

    Indoor Mapping and Visualization with ArcGIS Indoors

    The answer is simple – indoor mapping. Indoor mapping is a revolutionary concept that visualizes an indoor venue and spatial data on a digital 2D or 3D map. Showing places, people and assets on a digital map enables solutions such as indoor positioning and navigation. These, in turn, allow for many different use cases that help companies optimize their workflows and efficiencies.

    Mobile Navigation and Data

    The idea behind this solution is the same as outdoor navigation, only instead it allows you to see routes and locate objects and people in a closed environment. As GPS signals are not available indoors, different technology solutions based on either iBeacons, WiFi or lighting are used to create indoor maps and enable positioning services (a toy positioning sketch follows below this post). You can plan a route indoors from point A to point B with customized pins and remarks, analyze whether facilities are being used to their full potential, discover new business opportunities, evaluate user behaviors and send them real-time targeted messages based on their location, intelligently park vehicles, and the list goes on!

    With the help of geolocation, indoor mapping stores and provides versatile real-time data on everything that is happening indoors, including placements and conditions of assets and human movements. This allows for a common operating picture, where all stakeholders share the same level of information and insights into internal processes. Having a centralized mapping system enables effortless navigation through all the assets and keeps facility managers updated on the latest changes, which ultimately improves business efficiency. Just think how many operational insights can be gained through visualizations of assets on your customized map – you can monitor and analyze the whole infrastructure and optimize performance accordingly. How do you engage your users/visitors at the right time and place? What does it take to improve security management? Are the workflow processes moving seamlessly? Answers to those and many other questions can be found in an indoor mapping solution. Interactive indoor experiences are no longer a thing of the future; they are here and now.

    source: https://www.esri.com/arcgis-blog/products/arcgis-indoors/mapping/what-is-indoor-mapping/
  24. This year at WWDC 2019, Apple unveiled a cheese grater and called it the new Mac Pro. But seeing the 2019 Mac Pro once is enough to remember it for a long time. Specs: according to the Apple website, you can spend as much as $35,000+ on it!! 🤯😬 Source
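
    The post notes that GPS is unavailable indoors and that positioning is instead derived from beacon or WiFi signals. As a purely illustrative sketch (not from the article, and not how ArcGIS Indoors is implemented), here is a tiny least-squares trilateration example in Python; the beacon coordinates and ranges are made up, and real deployments estimate ranges from noisy RSSI readings.

# Toy indoor positioning sketch: estimate a position on a floor plan from
# distances to three known beacons via linearised least squares.
import numpy as np

beacons = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 20.0]])  # metres, floor-plan coordinates
distances = np.array([18.0, 20.0, 12.0])                    # estimated ranges to each beacon

# Subtract the first beacon's range equation from the others to get a linear system A x = b.
A = 2 * (beacons[1:] - beacons[0])
b = (distances[0] ** 2 - distances[1:] ** 2
     + np.sum(beacons[1:] ** 2, axis=1) - np.sum(beacons[0] ** 2))
position, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position (m):", position)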
  25. Found this interesting tutorial (a small timing sketch follows below this post):

    For the last couple of years I have been testing out the ever-improving support for parallel query processing in PostgreSQL, particularly in conjunction with the PostGIS spatial extension. Spatial queries tend to be CPU-bound, so applying parallel processing is frequently a big win for us.

    Initially, the results were pretty bad. With PostgreSQL 10, it was possible to force some parallel queries by jimmying with global cost parameters, but nothing would execute in parallel out of the box. With PostgreSQL 11, we got support for parallel aggregates, and those tended to parallelize in PostGIS right out of the box. However, parallel scans still required some manual alterations to PostGIS function costs, and parallel joins were basically impossible to force no matter what knobs you turned.

    With PostgreSQL 12 and PostGIS 3, all that has changed. All standard query types now readily parallelize using our default costings. That means parallel execution of parallel sequence scans, parallel aggregates, and parallel joins!!

    TL;DR: PostgreSQL 12 and PostGIS 3 have finally cracked the parallel spatial query execution problem, and all major queries execute in parallel without extraordinary interventions.

    What Changed

    With PostgreSQL 11, most parallelization worked, but only at much higher function costs than we could apply to PostGIS functions. With higher PostGIS function costs, other parts of PostGIS stopped working, so we were stuck in a Catch-22: improve costing and break common queries, or leave things working with non-parallel behaviour. For PostgreSQL 12, the core team (in particular Tom Lane) provided us with a sophisticated new way to add spatial index functionality to our key functions. With that improvement in place, we were able to globally increase our function costs without breaking existing queries. That in turn has signalled the parallel query planning algorithms in PostgreSQL to parallelize spatial queries more aggressively.

    Setup

    In order to run these tests yourself, you will need PostgreSQL 12 and PostGIS 3.0. You’ll also need a multi-core computer to see actual performance changes. I used a 4-core desktop for my tests, so I could expect 4x improvements at best.

    The setup instructions show where to download the Canadian polling division data used for the testing:
    pd – a table of ~70K polygons
    pts – a table of ~70K points
    pts_10 – a table of ~700K points
    pts_100 – a table of ~7M points

    We will work with the default configuration parameters and just mess with max_parallel_workers_per_gather at run-time to turn parallelism on and off for comparison purposes. When max_parallel_workers_per_gather is set to 0, parallel plans are not an option. max_parallel_workers_per_gather sets the maximum number of workers that can be started by a single Gather or Gather Merge node. Setting this value to 0 disables parallel query execution. Default 2.

    Before running tests, make sure you have a handle on what your parameters are set to: I frequently found I had accidentally tested with max_parallel_workers set to 1, which will result in two processes working: the leader process (which does real work when it is not coordinating) and one worker.

    show max_worker_processes;
    show max_parallel_workers;
    show max_parallel_workers_per_gather;

    Aggregates

    Behaviour for aggregate queries is still good, as seen in PostgreSQL 11 last year.

    SET max_parallel_workers = 8;
    SET max_parallel_workers_per_gather = 4;
    EXPLAIN ANALYZE SELECT Sum(ST_Area(geom)) FROM pd;

    Boom! We get a 3-worker parallel plan and execution about 3x faster than the sequential plan.

    Scans

    The simplest spatial parallel scan adds a spatial function to the target list or filter clause.

    SET max_parallel_workers = 8;
    SET max_parallel_workers_per_gather = 4;
    EXPLAIN ANALYZE SELECT ST_Area(geom) FROM pd;

    Boom! We get a 3-worker parallel plan and execution about 3x faster than the sequential plan. This query did not work out-of-the-box with PostgreSQL 11.

    Gather (cost=1000.00..27361.20 rows=69534 width=8)
      Workers Planned: 3
      ->  Parallel Seq Scan on pd (cost=0.00..19407.80 rows=22430 width=8)

    Joins

    Starting with a simple join of all the polygons to the 100 points-per-polygon table, we get:

    SET max_parallel_workers_per_gather = 4;
    EXPLAIN SELECT * FROM pd JOIN pts_100 pts ON ST_Intersects(pd.geom, pts.geom);

    Right out of the box, we get a parallel plan! No amount of begging and pleading would get a parallel plan in PostgreSQL 11.

    Gather (cost=1000.28..837378459.28 rows=5322553884 width=2579)
      Workers Planned: 4
      ->  Nested Loop (cost=0.28..305122070.88 rows=1330638471 width=2579)
            ->  Parallel Seq Scan on pts_100 pts (cost=0.00..75328.50 rows=1738350 width=40)
            ->  Index Scan using pd_geom_idx on pd (cost=0.28..175.41 rows=7 width=2539)
                  Index Cond: (geom && pts.geom)
                  Filter: st_intersects(geom, pts.geom)

    The only quirk in this plan is that the nested loop join is being driven by the pts_100 table, which has 10 times the number of records as the pd table. The plan for a query against the pts_10 table also returns a parallel plan, but with pd as the driving table.

    EXPLAIN SELECT * FROM pd JOIN pts_10 pts ON ST_Intersects(pd.geom, pts.geom);

    Right out of the box, we still get a parallel plan! No amount of begging and pleading would get a parallel plan in PostgreSQL 11.

    Gather (cost=1000.28..85251180.90 rows=459202963 width=2579)
      Workers Planned: 3
      ->  Nested Loop (cost=0.29..39329884.60 rows=148129988 width=2579)
            ->  Parallel Seq Scan on pd (cost=0.00..13800.30 rows=22430 width=2539)
            ->  Index Scan using pts_10_gix on pts_10 pts (cost=0.29..1752.13 rows=70 width=40)
                  Index Cond: (geom && pd.geom)
                  Filter: st_intersects(pd.geom, geom)

    source: http://blog.cleverelephant.ca/2019/05/parallel-postgis-4.html
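
    To reproduce the on/off comparison described above outside psql, here is a small hedged Python sketch using psycopg2; it is not part of the original tutorial, the connection string is a placeholder, and it assumes the pd table from the tutorial's setup data is loaded.

# Hedged sketch: time the same spatial aggregate with parallelism disabled (0 workers)
# and enabled, by changing max_parallel_workers_per_gather at run time.
import time
import psycopg2

conn = psycopg2.connect("dbname=parallel_test")  # placeholder connection string
conn.autocommit = True
cur = conn.cursor()

query = "SELECT Sum(ST_Area(geom)) FROM pd;"

for workers in (0, 4):  # 0 disables parallel plans entirely
    # workers comes from our own tuple, so plain string formatting is safe here
    cur.execute(f"SET max_parallel_workers_per_gather = {workers};")
    start = time.perf_counter()
    cur.execute(query)
    total = cur.fetchone()[0]
    print(f"workers={workers}  sum={total:.1f}  elapsed={time.perf_counter() - start:.2f}s")

cur.close()
conn.close()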
  26. Welcome, and enjoy your stay here.
  27. Hi guys. My name is Joelson. Thanks for all the posts in this forum.