
Leaderboard

Popular Content

Showing content with the highest reputation since 03/21/2019 in all areas

  1. geemap is a Python package for interactive mapping with Google Earth Engine (GEE), a cloud computing platform with a multi-petabyte catalog of satellite imagery and geospatial datasets. During the past few years, GEE has become very popular in the geospatial community and has empowered numerous environmental applications at local, regional, and global scales. GEE provides both JavaScript and Python APIs for making computational requests to the Earth Engine servers. Compared with the comprehensive documentation and interactive IDE (i.e., the GEE JavaScript Code Editor) of the GEE JavaScript API, the GEE Python API lacks good documentation and functionality for visualizing results interactively. The geemap Python package was created to fill this gap. It is built upon ipyleaflet and ipywidgets, enabling GEE users to analyze and visualize Earth Engine datasets interactively in Jupyter notebooks. geemap is intended for students and researchers who would like to utilize the Python ecosystem of diverse libraries and tools to explore Google Earth Engine. It is also designed for existing GEE users who would like to transition from the GEE JavaScript API to the Python API. The automated JavaScript-to-Python conversion module of the geemap package can greatly reduce the time needed to convert existing GEE JavaScripts to Python scripts and Jupyter notebooks. For video tutorials and notebook examples, please visit https://github.com/giswqs/geemap/tree/master/examples. For complete documentation on geemap modules and methods, please visit https://geemap.readthedocs.io/en/latest/source/geemap.html.

Features

Below is a partial list of features available in the geemap package. Please check the examples page for notebook examples, GIF animations, and video tutorials.

- Automated conversion from Earth Engine JavaScripts to Python scripts and Jupyter notebooks.
- Displaying Earth Engine data layers for interactive mapping.
- Supporting Earth Engine JavaScript API-styled functions in Python, such as Map.addLayer(), Map.setCenter(), Map.centerObject(), and Map.setOptions().
- Creating split-panel maps with Earth Engine data.
- Retrieving Earth Engine data interactively using the Inspector Tool.
- Interactive plotting of Earth Engine data by simply clicking on the map.
- Converting data between GeoJSON and Earth Engine formats.
- Using drawing tools to interact with Earth Engine data.
- Using shapefiles with Earth Engine without having to upload data to one's GEE account.
- Exporting an Earth Engine FeatureCollection to other formats (e.g., shp, csv, json, kml, kmz) using only one line of code.
- Exporting an Earth Engine Image or ImageCollection as GeoTIFF.
- Extracting pixels from an Earth Engine Image into a 3D numpy array.
- Calculating zonal statistics by group (e.g., calculating the land cover composition of each state/country).
- Adding a customized legend for Earth Engine data.
- Converting Earth Engine JavaScripts to Python code directly within a Jupyter notebook.
- Adding animated text to GIF images generated from Earth Engine data.
- Adding colorbars and images to GIF animations generated from Earth Engine data.
- Creating Landsat timelapse animations with animated text using Earth Engine.
- Searching places and datasets from the Earth Engine Data Catalog.
- Using the timeseries inspector to visualize landscape changes over time.
- Exporting Earth Engine maps as HTML files and PNG images.
- Searching Earth Engine API documentation within Jupyter notebooks.

Installation

To use geemap, you must first sign up for a Google Earth Engine account. geemap is available on PyPI. To install geemap, run this command in your terminal:

pip install geemap

geemap is also available on conda-forge.
If you have Anaconda or Miniconda installed on your computer, you can create a conda Python environment in which to install geemap:

conda create -n gee python
conda activate gee
conda install -c conda-forge geemap

If you have installed geemap before and want to upgrade to the latest version, run the following command in your terminal:

pip install -U geemap

If you use conda, you can update geemap to the latest version by running:

conda update -c conda-forge geemap

Usage

Important note: A key difference between ipyleaflet and folium is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front end and the back end, enabling the map to capture user input, while folium is meant for displaying static data only (source). Note that Google Colab currently does not support ipyleaflet (source). Therefore, if you are using geemap with Google Colab, you should use import geemap.eefolium. If you are using geemap with Binder or a local Jupyter notebook server, you can use import geemap, which provides more functionality for capturing user input (e.g., mouse clicking and moving).

Youtube tutorial videos | GitHub page of geemap | Documentation

While working on a small project I found this. This is a quite new library, so some features shown in the tutorials may not work as intended, but overall it is a very good package. The tools make the code much cleaner and more readable. Searching EE docs from the notebook is not yet implemented. Check out the YouTube channel, it's great.
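The automated JavaScript-to-Python conversion mentioned above is one of geemap's headline features. As a rough illustration of the kind of mechanical rewriting involved, here is a minimal toy sketch in plain Python; the function name and rules below are my own for illustration, and geemap's real conversion module handles far more cases:

```python
import re

def js_snippet_to_py(js: str) -> str:
    """Naively rewrite a few common Earth Engine JavaScript idioms as Python.

    Purely illustrative: a real converter must also handle functions,
    object literals, loops, and whole scripts, not just these token swaps.
    """
    py = re.sub(r"\bvar\s+", "", js)            # drop 'var' declarations
    py = py.replace("//", "#")                  # JS line comments -> Python
    py = py.replace("true", "True").replace("false", "False")
    py = py.replace(";", "")                    # drop statement terminators
    return py

print(js_snippet_to_py("var image = ee.Image('USGS/SRTMGL1_003'); // load SRTM"))
```

Running the snippet turns the JavaScript line into `image = ee.Image('USGS/SRTMGL1_003') # load SRTM`, which hints at why automating this for thousands of lines of existing GEE scripts saves so much time.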
    5 points
  2. The World Resources Institute and Google announced 10 m resolution global land cover data called Dynamic World, powered by Google Earth Engine and AI Platform. Dynamic World is a 10 m near-real-time LULC dataset that includes nine classes and is available for the Sentinel-2 L1C collection from 2015 until today.
    4 points
  3. At this time, USGS Landsat 9 Collection 2 Level-1 and Level-2 data will be made available for download from EarthExplorer, Machine to Machine (M2M), and LandsatLook. Initially, USGS will provide only full-bundle downloads; single-band downloads, browse images, and Landsat 9 Collection 2 U.S. Analysis Ready Data will follow shortly thereafter. Commercial cloud data distribution will take 3-5 days to reach full capacity. The recently deployed Landsat 9 satellite passed its post-launch assessment review and is now operational. This milestone marks the beginning of the satellite’s mission to extend Landsat's unparalleled, 50-year record of imaging Earth’s land surfaces, surface waters, and coastal regions from space. Landsat 9 launched September 27, 2021, from Vandenberg Space Force Base in California. The satellite carries two science instruments, the Operational Land Imager 2 (OLI-2) and the Thermal Infrared Sensor 2 (TIRS-2). OLI-2 captures observations of the Earth’s surface in visible, near-infrared, and shortwave-infrared bands, and TIRS-2 measures thermal infrared radiation, or heat, emitted from the Earth’s surface. Landsat 9 improvements include higher radiometric resolution for OLI-2 (14-bit quantization, increased from 12 bits for Landsat 8), enabling the sensor to detect more subtle differences, especially over darker areas such as water or dense forests. With this higher radiometric resolution, Landsat 9 can differentiate 16,384 shades of a given wavelength. In comparison, Landsat 8 provides 12-bit data and 4,096 shades, and Landsat 7 detects only 256 shades with its 8-bit resolution. In addition to the OLI-2 improvement, TIRS-2 has significantly reduced stray light compared to the Landsat 8 TIRS, which enables improved atmospheric correction and more accurate surface temperature measurements. All commissioning and calibration activities show Landsat 9 performing just as well, if not better, than Landsat 8.
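The shade counts quoted above follow directly from the bit depth of each sensor's quantization: an n-bit sensor records 2^n distinct digital levels. A quick sanity check in Python:

```python
def shades(bits: int) -> int:
    """Number of distinguishable digital levels for n-bit quantization."""
    return 2 ** bits

# Landsat 7 (8-bit), Landsat 8 (12-bit), Landsat 9 (14-bit)
for sensor, bits in [("Landsat 7", 8), ("Landsat 8", 12), ("Landsat 9", 14)]:
    print(f"{sensor}: {bits}-bit -> {shades(bits)} shades")
```

This reproduces the 256, 4,096, and 16,384 figures in the announcement: each extra bit of quantization doubles the number of distinguishable radiance levels.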
In addition to routine calibration methods (i.e., on-board calibration sources, lunar observations, pseudo invariant calibration sites (PICS), and direct field in situ measurements), an underfly of Landsat 9 with Landsat 8 in mid-November 2021 provided cross-calibration between the two satellites’ onboard instruments, ensuring data consistency across the Landsat Collection 2 archive. Working in tandem with Landsat 8, Landsat 9 will provide major improvements to the nation’s land imaging, sustainable resource management, and climate science capabilities. Landsat’s imagery provides a landscape-level view of the land surface, surface waters (inland lakes and rivers) and coastal zones, and the changes that occur from both natural processes and human-induced activity. “Landsat 9 is distinctive among Earth observation missions because it carries the honor to extend the 50-year Landsat observational record into the next 50 years,” said Chris Crawford, USGS Landsat 9 Project Scientist. “Partnered in orbit with Landsat 8, Landsat 9 will ensure continued eight-day global land and near-shore revisit.” Since October 31, 2021, Landsat 9 has collected over 57,000 images of the planet and will collect approximately 750 images of Earth each day. These images will be processed, archived, and distributed from the USGS Earth Resources Observation and Science (EROS) Center in Sioux Falls, South Dakota. Since 2008, the USGS Landsat Archive has provided more than 100 million images to data users around the world, free of charge. Landsat 9 is a joint mission between the USGS and NASA and is the latest in the Landsat series of remote sensing satellites. The Landsat Program has been providing global coverage of landscape change since 1972. Landsat’s unique long-term data record provides the basis for a critical understanding of environmental and climate changes occurring in the United States and around the world.
Data Availability Learn more about Landsat 9 data access Visit the Landsat 9 webpages to learn more about the latest mission: USGS Landsat 9 NASA Landsat 9
    4 points
  4. Here is an interesting review: http://www.50northspatial.org/uav-image-processing-software-photogrammetry/ 😉😊
    4 points
  5. The open-source model will serve as the basis for future forest, crop and climate change-monitoring AI. NASA estimates that its Earth science missions will generate around a quarter million terabytes of data in 2024 alone. In order for climate scientists and the research community to efficiently dig through these reams of raw satellite data, IBM, Hugging Face and NASA have collaborated to build an open-source geospatial foundation model that will serve as the basis for a new class of climate and Earth science AIs that can track deforestation, predict crop yields and track greenhouse gas emissions. For this project, IBM leveraged its recently released Watsonx.ai to serve as the foundation model, using a year’s worth of NASA’s Harmonized Landsat Sentinel-2 (HLS) satellite data. That data is collected by the ESA’s pair of Sentinel-2 satellites, which are built to acquire high-resolution optical imagery over land and coastal regions in 13 spectral bands. For its part, Hugging Face is hosting the model on its open-source AI platform. According to IBM, by fine-tuning the model on “labeled data for flood and burn scar mapping,” the team was able to improve the model's performance 15 percent over the current state of the art using half as much data. "The essential role of open-source technologies to accelerate critical areas of discovery such as climate change has never been clearer,” Sriram Raghavan, VP of IBM Research AI, said in a press release. “By combining IBM’s foundation model efforts aimed at creating flexible, reusable AI systems with NASA’s repository of Earth-satellite data, and making it available on the leading open-source AI platform, Hugging Face, we can leverage the power of collaboration to implement faster and more impactful solutions that will improve our planet.” source: engadget
    3 points
  6. Our objective is to provide the scientific and civil communities with a state-of-the-art global digital elevation model (DEM) derived from a combination of Shuttle Radar Topography Mission (SRTM) processing improvements, elevation control, void-filling and merging with data unavailable at the time of the original SRTM production:

- NASA SRTM DEMs created with processing improvements at full resolution
- NASA's Ice, Cloud, and land Elevation Satellite (ICESat)/Geoscience Laser Altimeter System (GLAS) surface elevation measurements
- DEM cells derived from stereo optical methods using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data from the Terra satellite
- Global DEM (GDEM) ASTER products developed for NASA and the Ministry of Economy, Trade and Industry of Japan by Sensor Information Laboratory Corp
- National Elevation Data for the US and Mexico produced by the USGS
- Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) developed by the USGS and the National Geospatial-Intelligence Agency (NGA)
- Canadian Digital Elevation Data produced by Natural Resources Canada

We propose a significant modernization of the publicly and freely available DEM data. Accurate surface elevation information is a critical component in scientific research and commercial and military applications. The current SRTM DEM product is the most intensely downloaded dataset in NASA history. However, the original Memorandum of Understanding (MOU) between NASA and NGA has a number of restrictions and limitations; the original full-resolution, one-arcsecond data are currently only available over the US, and the error, backscatter and coherence layers were not released to the public.
With the recent expiration of the MOU, we propose to reprocess the original SRTM raw radar data using improved algorithms and incorporating ancillary data that were unavailable during the original SRTM processing, and to produce and publicly release a void-free global one-arcsecond (~30m) DEM and error map, with the spacing supported by the full-resolution SRTM data. We will reprocess the entire SRTM dataset from raw sensor measurements with validated improvements to the original processing algorithms. We will incorporate GLAS data to remove artifacts at the optimal step in the SRTM processing chain. We will merge the improved SRTM strip DEMs, refined ASTER and GDEM V2 DEMs, and GLAS data using the SRTM mosaic software to create a seamless, void-filled NASADEM. In addition, we will provide several new data layers not publicly available from the original SRTM processing: interferometric coherence, radar backscatter, radar incidence angle to enable radiometric correction, and a radar backscatter image mosaic to be used as a layer for global classification of land cover and land use. This work leverages an FY12 $1M investment from NASA to make several improvements to the original algorithms. We validated our results with the original SRTM products and ancillary elevation information at a few study sites. Our approach will merge the reprocessed SRTM data with the DEM void-filling strategy developed during NASA's Making Earth System Data Records for Use in Research Environments (MEaSUREs) 2006 project, "The Definitive Merged Global Digital Topographic Data Set" of Co-Investigator Kobrick. NASADEM is a significant improvement over the available three-arcsecond SRTM DEM primarily because it will provide a global DEM and associated products at one-arcsecond spacing. 
ASTER GDEM is available at one-arcsecond spacing but has a true spatial resolution generally inferior to SRTM one-arcsecond data and has much greater noise problems that are particularly severe in tropical (cloudy) areas. At one arc-second, NASADEM will be superior to GDEM across almost all SRTM coverage areas, but will integrate GDEM and other data to extend the coverage. Meanwhile, DEMs from the Deutsches Zentrum für Luft- und Raumfahrt TanDEM-X mission are being developed as part of a public-private partnership. However, these data must be purchased and are not redistributable. NASADEM will be the finest-resolution, global, freely available DEM product for the foreseeable future. data page: https://lpdaac.usgs.gov/products/nasadem_hgtv001/ news links: https://earthdata.nasa.gov/esds/competitive-programs/measures/nasadem
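The "one-arcsecond (~30 m)" spacing quoted for NASADEM can be checked with simple spherical geometry, using a mean Earth radius of 6371 km (an approximation; east-west spacing additionally shrinks by the cosine of latitude):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, spherical approximation

# Ground distance subtended by one arc-second of latitude
one_arcsec_rad = math.radians(1 / 3600)
spacing_m = EARTH_RADIUS_M * one_arcsec_rad
print(f"1 arc-second ≈ {spacing_m:.1f} m")
```

The result is just under 31 m at the equator, consistent with the ~30 m figure used throughout the proposal (and with the ~90 m of the older three-arcsecond SRTM product).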
    3 points
  7. It was just announced that June was the 3rd hottest on record, Johns Hopkins put the number of COVID-19 cases at 13-million, and over 300,000 sq km of protected areas were created last month. These are all indicators of the planet’s vitality, but traditionally you’d need to bookmark three different websites to keep track of these and other metrics. In partnership with Microsoft, National Geographic, and the United Nations Sustainable Development Solutions Network, Esri is gathering these and other topics into the ArcGIS Living Atlas Indicators of the Planet (Beta). Leveraging the near real-time information already contributed to Living Atlas by organizations such as NOAA, UN Environment Programme, and US Geological Survey, ArcGIS Living Atlas Indicators of the Planet draws upon authoritative sources for the latest updates on 18 topics, with more being developed. In addition to the summary statistics provided by the GeoCards, there are a series of maps and resources to better understand each issue and learn how to integrate timely data into decision making, along with stories on progress towards building a sustainable planet. ArcGIS Living Atlas Indicators of the Planet was developed using ArcGIS Experience Builder and is in its Beta release while additional capabilities are being implemented. This Experience Builder template can be customized for your own topics of interest. All of the underlying layers, maps, and apps are available from this Content Group. link: https://experience.arcgis.com/experience/003f05cc447b46dc8818640c38b69b83
    3 points
  8. A Long March-2D carrier rocket, carrying the Gaofen-9 04 satellite, is launched from the Jiuquan Satellite Launch Center in northwest China, Aug. 6, 2020. China successfully launched a new optical remote-sensing satellite from the Jiuquan Satellite Launch Center at 12:01 p.m. Thursday (Beijing Time). (Photo by Wang Jiangbo/Xinhua) JIUQUAN, Aug. 6 (Xinhua) -- China successfully launched a new optical remote-sensing satellite from the Jiuquan Satellite Launch Center in northwest China at 12:01 p.m. Thursday (Beijing Time). The satellite, Gaofen-9 04, was sent into orbit by a Long March-2D carrier rocket. It has a resolution up to the sub-meter level. The satellite will be mainly used for land surveys, city planning, land right confirmation, road network design, crop yield estimation and disaster prevention and mitigation. It will also provide information for the development of the Belt and Road Initiative. The same carrier rocket also sent the Gravity & Atmosphere Scientific Satellite (Q-SAT) into space. The Q-SAT satellite, developed by Tsinghua University, will help with the satellite system design approach and orbital atmospheric density measurement, among others. Thursday's launch was the 342nd mission of the Long March rocket series. source: http://www.xinhuanet.com/english/2020-08/06/c_139269788.htm
    3 points
  9. A new set of 10 ArcGIS Pro lessons empowers GIS practitioners, instructors, and students with essential skills to find, acquire, format, and analyze public domain spatial data to make decisions. Described in this video, this set was created for 3 reasons: (1) to provide a set of analytical lessons that can be immediately used, (2) to update the original 10 lessons created by my colleague Jill Clark and me to provide a practical component to our Esri Press book The GIS Guide to Public Domain Data, and (3) to demonstrate how ArcGIS Desktop (ArcMap) lessons can be converted to Pro and to reflect upon that process. The activities can be found here. This essay is mirrored on the Esri GeoNet education blog and the reflections are below and in this video.

Summary of Lessons:

- Can be used in full, in part, or modified to suit your own needs.
- 10 lessons. 64 work packages. A “work package” is a set of tasks focused on solving a specific problem.
- 370 guided steps. 29 to 42 hours of hands-on immersion. Over 600 pages of content.
- 100 skills are fostered, covering GIS tools and methods, working with data, and communication.
- 40 data sources are used, covering 85 different data layers.
- Themes covered: climate, business, population, fire, floods, hurricanes, land use, sustainability, ecotourism, invasive species, oil spills, volcanoes, earthquakes, agriculture.
- Areas covered: the globe, and also Brazil, New Zealand, the Great Lakes of the USA, Canada, the Gulf of Mexico, Iceland, the Caribbean Sea, Kenya, Orange County California, Nebraska, Colorado, and Texas USA.
- Aimed at university-level graduate students and university or community college undergraduates. Some GIS experience is very helpful, though not absolutely required. Still, my advice is not to use these lessons for students’ first exposure to GIS, but rather in an intermediate or advanced setting.
How to access the lessons: The ideal way to work through the lessons is in a Learn Path, which bundles the readings of the book’s chapters, selected blog essays, and the hands-on activities. The Learn Path is split into 3 parts, as follows:

Solving Problems with GIS and public domain geospatial data 1 of 3: Learn how to find, evaluate, and analyze data to solve location-based problems through this set of 10 chapters and short essay readings, and 10 hands-on lessons: https://learn.arcgis.com/en/paths/the-gis-guide-to-public-domain-data-learn-path/

Solving Problems with GIS and public domain geospatial data 2 of 3: https://learn.arcgis.com/en/paths/the-gis-guide-to-public-domain-data-learn-path-2/

Solving Problems with GIS and public domain geospatial data 3 of 3: https://learn.arcgis.com/en/paths/the-gis-guide-to-public-domain-data-learn-path-3/

The Learn Paths allow the content to be worked through in sequence, as shown below. You can also access the lessons from this gallery in ArcGIS Online, shown below. If you would like to modify the lessons for your own use, feel free! This is why the lessons have been provided in a zipped bundle as PDF files here and as MS Word DOCX files here. This video provides an overview. source: https://spatialreserves.wordpress.com/2020/05/14/10-new-arcgis-pro-lesson-activities-learn-paths-and-migration-reflections/
    3 points
  10. The satellites from Planet can now capture imagery at 50 cm; they changed their orbit in order to achieve a better GSD.

SKYSAT IMAGERY NOW AVAILABLE

Bring agility to your organization with the latest advancements in high-resolution SkySat imagery, available today. Make targeted decisions in ever-changing operational contexts with improved 50 cm spatial resolution and more transparency in the ordering process with the new Tasking Dashboard.
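The reason an orbit change improves GSD: for a fixed optical system (focal length and pixel pitch), ground sample distance scales roughly linearly with altitude, so lowering the orbit shrinks the ground footprint of each pixel. A minimal sketch with made-up numbers (these are illustrative assumptions, not Planet's published figures):

```python
def scaled_gsd(gsd_m: float, old_alt_km: float, new_alt_km: float) -> float:
    """GSD of the same sensor after an altitude change.

    For a fixed focal length and pixel pitch, GSD is proportional to
    altitude: gsd_new = gsd_old * (h_new / h_old).
    """
    return gsd_m * (new_alt_km / old_alt_km)

# Hypothetical example: a 0.72 m GSD sensor lowered from 500 km to 450 km
print(f"{scaled_gsd(0.72, 500.0, 450.0):.3f} m")
```

In this hypothetical, a 10% lower orbit yields a 10% finer GSD; real resolution gains also depend on optics, atmosphere, and on-board processing.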
    3 points
  11. An interesting application of WebGIS to plot a dinosaur database; you can also search for what your location looked like in the past on the interactive globe map. Welcome to the internet's largest dinosaur database. Check out a random dinosaur, search for one below, or look at our interactive globe of ancient Earth! Whether you are a kid, student, or teacher, you'll find a rich set of dinosaur names, pictures, and facts here. This site is built with PaleoDB, a scientific database assembled by hundreds of paleontologists over the past two decades. Check out this interactive WebGIS app: https://dinosaurpictures.org/ancient-earth#170 official link: https://dinosaurpictures.org/
    3 points
  12. link: https://press.anu.edu.au/publications/new-releases
    3 points
  13. Interesting video on How-Tos: WebOpenDroneMap is a friendly Graphical User Interface (GUI) for OpenDroneMap. It enhances the capabilities of OpenDroneMap by providing an easy tool for processing drone imagery, with buttons, process status bars, and a new way to store images. WebODM allows users to work by project, so the user can create different projects and process the related images. As a whole, WebODM on Windows is an implementation of PostgreSQL, Node, Django and OpenDroneMap on Docker. The software installation requires 6 GB of disk space plus Docker. It seems huge, but it is the only way to process drone imagery on Windows using just open-source software. We definitely see a huge potential in WebODM for image processing, so we have made this tutorial for the installation and will post more tutorials on applying WebODM to drone images. For this tutorial you need Docker Toolbox installed on your computer. You can follow this tutorial to get Docker on your PC: https://www.hatarilabs.com/ih-en/tutorial-installing-docker You can visit the WebODM site on GitHub: https://github.com/OpenDroneMap/WebODM Videos The tutorial was split into three short videos. Part 1 https://www.youtube.com/watch?v=AsMSoWAToxE Part 2 https://www.youtube.com/watch?v=8GKx3fz0qgE Part 3 https://www.youtube.com/watch?v=eCZFzaXyMmA
    3 points
  14. 7th International Conference on Computer Science and Information Technology (CoSIT 2020), January 25-26, 2020, Zurich, Switzerland. https://cosit2020.org/ Scope & Topics: The 7th International Conference on Computer Science and Information Technology (CoSIT 2020) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Computer Science, Engineering and Information Technology. The Conference looks for significant contributions to all major fields of Computer Science and Information Technology in theoretical and practical aspects. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field. Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences, including: Geographical Information Systems / Global Navigation Satellite Systems (GIS/GNSS). Paper Submission: Authors are invited to submit papers through the conference submission system. Here's where you can reach us: [email protected] or [email protected]
    3 points
  15. The first thing to do before mapping is to set up the camera parameters. Before setting the camera parameters, it is recommended to reset all parameters on the camera first. To set camera parameters manually, the camera needs to be in manual mode.

Image quality: Extra fine.

Shutter speed: to remove blur from a photo, the shutter speed should be set to a higher value; 1200-1600 is recommended. A higher shutter speed reduces image quality, so if there is blur in the image, increase the shutter speed.

ISO: the lower the ISO, the higher the image quality. An ISO between 160-300 is recommended. If there is no blur but image quality is low, reduce the ISO.

Focus: it is recommended to set the focus manually on the ground before a flight. Point the camera at a distant object and slightly increase the focus; you will see on the camera screen that image sharpness changes with the value. Set the image sharpness at its highest (slide the slider close to the infinity point on the screen and you will see how image sharpness changes as you slide).

White balance: recommended to set to auto.

On a surveying mission, sidelap, overlap, and buffer have to be set higher to get a better quality surveying result. First set the RESOLUTION you would like to get for your surveying project. When you change the resolution, it changes the flight altitude and also affects the coverage of a single flight.

Overlap: 70%. This will increase the number of photos taken along each flight line, so the camera should be capable of capturing fast enough.

Sidelap: 70% recommended. Flying with higher sidelap between each line of the flight is a way to get more matches in the imagery, but it also reduces the coverage of a single flight.

Buffer: 12%. The buffer enlarges the flight plan to get more images at the borders. It will improve the quality of the map.

source: https://dronee.aero/blogs/dronee-pilot-blog/few-things-to-set-correctly-to-get-high-quality-surveying-results
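The resolution-altitude link described above is the standard ground-sample-distance relation: GSD = altitude × sensor_width / (focal_length × image_width). A small sketch of that trade-off; the camera numbers below (roughly a 1-inch sensor) are assumptions for illustration, not values from the article:

```python
def gsd_m(altitude_m: float, sensor_width_mm: float,
          focal_length_mm: float, image_width_px: float) -> float:
    """Ground sample distance (m/pixel) for a nadir-pointing camera."""
    return altitude_m * sensor_width_mm / (focal_length_mm * image_width_px)

def altitude_for_gsd(target_gsd_m: float, sensor_width_mm: float,
                     focal_length_mm: float, image_width_px: float) -> float:
    """Flight altitude (m) needed to achieve a target GSD."""
    return target_gsd_m * focal_length_mm * image_width_px / sensor_width_mm

# Assumed example camera: 13.2 mm sensor width, 8.8 mm lens, 5472 px image width
print(f"GSD at 100 m altitude: {gsd_m(100, 13.2, 8.8, 5472) * 100:.2f} cm/px")
print(f"Altitude for 3 cm GSD: {altitude_for_gsd(0.03, 13.2, 8.8, 5472):.1f} m")
```

This is why choosing a finer resolution in the mission planner pushes the flight altitude down, which in turn shrinks the footprint of each photo and reduces the area covered in a single flight.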
    3 points
  16. The GeoforGood Summit 2019 drew to a close on 19 Sep 2019, and as a first-time attendee, I was amazed to see the number of new developments announced at the summit. The summit — the first of its kind — combined the user summit and the developer summit into one to let users benefit from the knowledge of new tools and developers understand the needs of the users. Since my primary focus was on large-scale geospatial modeling, I attended the workshops and breakout sessions related to Google Earth Engine only. With that, let's look at 3 exciting new developments to hit Earth Engine.

Updated documentation on machine learning

Documentation, really? Yes! As an amateur Earth Engine user myself, my number one complaint about the tool has been the abysmal quality of its documentation, spread between its app developers site, Google Earth's blog, and their Stack Exchange answers. So any update to the documentation is welcome. I am glad that the documentation has been updated to help the ever-exploding user base of geospatial data scientists interested in implementing machine learning and deep learning models. The documentation comes with its own example Colab notebooks. The example notebooks include supervised classification, unsupervised classification, a dense neural network, a convolutional neural network, and deep learning on Google Cloud. I found these notebooks incredibly useful for getting started, as there are quite a few non-trivial data type conversions (int to float32 and so on) in the process flow.

Earth Engine and AI Platform integration

Nick Clinton and Chris Brown jointly announced the much overdue Earth Engine + Google AI Platform integration. Until now, users were essentially limited to running small jobs on Google Colab's virtual machine (VM) and hoping that the connection with the VM didn't time out (it usually lasts for about 4 hours). Other limitations included the lack of any task monitoring or queuing capabilities. Not anymore!
The new ee.Model() package lets users communicate with a Google Cloud server that they can spin up based on their own needs. Needless to say, this is a HUGE improvement over the previous primitive deep learning support provided on the VM. Although it was free, one could simply not train, validate, predict with, and deploy any model larger than a few layers. It had to be done separately on the Google AI Platform once the .TFRecord objects were created in their Google bucket. With this cloud integration, that task has been simplified tremendously by letting users run and test their models right from the Colab environment. The ee.Model() class comes with some useful functions, such as ee.Model.fromAIPlatformPredictor(), to make predictions on Earth Engine data directly from your model sitting on Google Cloud. Lastly, since your model now sits in the AI Platform, you can cheat and use your own models trained offline to predict on Earth Engine data and make maps of their output. Note that your model must be saved using the tf.contrib.saved_model format if you wish to do so; the popular Keras method model.save('model.h5') is not compatible with ee.Model(). Moving forward, it seems like the team plans to stick to the Colab Python IDE for all deep learning applications. However, it's not a death blow for the beloved JavaScript Code Editor. At the summit, I saw that participants still preferred the JavaScript Code Editor for their non-neural machine learning work (like support vector machines, random forests, etc.). Being a Python lover myself, I too go to the Code Editor for quick visualizations and for Earth Engine Apps! I did not get to try out the new ee.Model() package at the summit, but Nick Clinton demonstrated a notebook where a simple working example has been hosted to help us learn the function calls.
Some kinks still remain in the development — like limiting a convolution kernel to only 144 pixels wide during prediction because of "the way Earth Engine communicates with Cloud Platform" — but he assured us that they will be fixed soon. Overall, I am excited about the integration because Earth Engine is now a real alternative for my geospatial computing work. And with the Earth Engine team promising more new functions in the ee.Model() class, I wonder if companies and labs around the world will start migrating their modeling work to Earth Engine.

Cooler visualizations!

Matt Hancher and Tyler Erickson displayed some new functionality related to visualizations, and I found that it made it vastly simpler to create animated visuals. With the ee.ImageCollection.getVideoThumbURL() function, you can create your own animated GIFs within a few seconds! I tried it on a bunch of datasets, and the speed of creating the GIFs was truly impressive. Say goodbye to exporting each iteration of a video to your drive, because these GIFs appear right in the console using the print() command! Shown above is an example of global temperature forecast over time from the 'NOAA/GFS0P25' dataset. The code for making the GIF can be found here. The animation is based on the example shown in the original blog post by Michael DeWitt, and I referred to this gif-making tutorial on the developers page to make it. I did not get to cover all the new features and functionality introduced at the summit. For that, be on the lookout for event highlights on Google Earth's blog. Meanwhile, you can check out the session resources from the summit for presentations and notebooks on topics that you are interested in. Presentation and resources. Published in Medium.
    3 points
  17. found this interesting tutorial : For the last couple of years I have been testing out the ever-improving support for parallel query processing in PostgreSQL, particularly in conjunction with the PostGIS spatial extension. Spatial queries tend to be CPU-bound, so applying parallel processing is frequently a big win for us. Initially, the results were pretty bad. With PostgreSQL 10, it was possible to force some parallel queries by jimmying with global cost parameters, but nothing would execute in parallel out of the box. With PostgreSQL 11, we got support for parallel aggregates, and those tended to parallelize in PostGIS right out of the box. However, parallel scans still required some manual alterations to PostGIS function costs, and parallel joins were basically impossible to force no matter what knobs you turned. With PostgreSQL 12 and PostGIS 3, all that has changed. All standard query types now readily parallelize using our default costings: parallel sequence scans, parallel aggregates, and parallel joins!

TL;DR: PostgreSQL 12 and PostGIS 3 have finally cracked the parallel spatial query execution problem, and all major queries execute in parallel without extraordinary interventions.

What Changed

With PostgreSQL 11, most parallelization worked, but only at much higher function costs than we could apply to PostGIS functions. With higher PostGIS function costs, other parts of PostGIS stopped working, so we were stuck in a Catch-22: improve costing and break common queries, or leave things working with non-parallel behaviour. For PostgreSQL 12, the core team (in particular Tom Lane) provided us with a sophisticated new way to add spatial index functionality to our key functions. With that improvement in place, we were able to globally increase our function costs without breaking existing queries. That in turn has signalled the parallel query planning algorithms in PostgreSQL to parallelize spatial queries more aggressively. 
Setup

In order to run these tests yourself, you will need:

PostgreSQL 12
PostGIS 3.0

You'll also need a multi-core computer to see actual performance changes. I used a 4-core desktop for my tests, so I could expect 4x improvements at best.

The setup instructions show where to download the Canadian polling division data used for the testing:

pd: a table of ~70K polygons
pts: a table of ~70K points
pts_10: a table of ~700K points
pts_100: a table of ~7M points

We will work with the default configuration parameters and just mess with max_parallel_workers_per_gather at run-time to turn parallelism on and off for comparison purposes. When max_parallel_workers_per_gather is set to 0, parallel plans are not an option.

max_parallel_workers_per_gather sets the maximum number of workers that can be started by a single Gather or Gather Merge node. Setting this value to 0 disables parallel query execution. Default 2.

Before running tests, make sure you have a handle on what your parameters are set to: I frequently found I had accidentally tested with max_parallel_workers set to 1, which results in two processes working: the leader process (which does real work when it is not coordinating) and one worker.

show max_worker_processes;
show max_parallel_workers;
show max_parallel_workers_per_gather;

Aggregates

Behaviour for aggregate queries is still good, as seen with PostgreSQL 11 last year.

SET max_parallel_workers = 8;
SET max_parallel_workers_per_gather = 4;

EXPLAIN ANALYZE
  SELECT Sum(ST_Area(geom))
    FROM pd;

Boom! We get a 3-worker parallel plan and execution about 3x faster than the sequential plan.

Scans

The simplest spatial parallel scan adds a spatial function to the target list or filter clause.

SET max_parallel_workers = 8;
SET max_parallel_workers_per_gather = 4;

EXPLAIN ANALYZE
  SELECT ST_Area(geom)
    FROM pd;

Boom! We get a 3-worker parallel plan and execution about 3x faster than the sequential plan. This query did not work out-of-the-box with PostgreSQL 11. 
Gather (cost=1000.00..27361.20 rows=69534 width=8)
  Workers Planned: 3
  -> Parallel Seq Scan on pd (cost=0.00..19407.80 rows=22430 width=8)

Joins

Starting with a simple join of all the polygons to the 100 points-per-polygon table, we get:

SET max_parallel_workers_per_gather = 4;

EXPLAIN
  SELECT *
    FROM pd
    JOIN pts_100 pts
    ON ST_Intersects(pd.geom, pts.geom);

Right out of the box, we get a parallel plan! No amount of begging and pleading would get a parallel plan in PostgreSQL 11.

Gather (cost=1000.28..837378459.28 rows=5322553884 width=2579)
  Workers Planned: 4
  -> Nested Loop (cost=0.28..305122070.88 rows=1330638471 width=2579)
     -> Parallel Seq Scan on pts_100 pts (cost=0.00..75328.50 rows=1738350 width=40)
     -> Index Scan using pd_geom_idx on pd (cost=0.28..175.41 rows=7 width=2539)
        Index Cond: (geom && pts.geom)
        Filter: st_intersects(geom, pts.geom)

The only quirk in this plan is that the nested loop join is being driven by the pts_100 table, which has 100 times the number of records of the pd table. The plan for a query against the pts_10 table also returns a parallel plan, but with pd as the driving table.

EXPLAIN
  SELECT *
    FROM pd
    JOIN pts_10 pts
    ON ST_Intersects(pd.geom, pts.geom);

Right out of the box, we still get a parallel plan! No amount of begging and pleading would get a parallel plan in PostgreSQL 11.

Gather (cost=1000.28..85251180.90 rows=459202963 width=2579)
  Workers Planned: 3
  -> Nested Loop (cost=0.29..39329884.60 rows=148129988 width=2579)
     -> Parallel Seq Scan on pd (cost=0.00..13800.30 rows=22430 width=2539)
     -> Index Scan using pts_10_gix on pts_10 pts (cost=0.29..1752.13 rows=70 width=40)
        Index Cond: (geom && pd.geom)
        Filter: st_intersects(pd.geom, geom)

source: http://blog.cleverelephant.ca/2019/05/parallel-postgis-4.html
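The leader-plus-workers pattern behind these Gather plans can be sketched outside the database with Python's multiprocessing module. This is purely an illustrative analogy (the function names are mine, not PostgreSQL or PostGIS internals): the leader splits the scan into chunks, workers compute partial aggregates, and the leader combines them, which is why a 4-worker plan tops out near a 4x speedup.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # each worker computes a partial aggregate over its chunk,
    # like a Parallel Seq Scan feeding a partial Sum()
    return sum(chunk)

def parallel_sum(values, workers=4):
    values = list(values)
    step = max(1, len(values) // workers)
    chunks = [values[i:i + step] for i in range(0, len(values), step)]
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)  # scatter chunks to workers
    return sum(partials)  # the leader's Gather / Finalize Aggregate step

if __name__ == "__main__":
    # same answer as a sequential sum, computed by 4 workers
    print(parallel_sum(range(1000), workers=4))
```

Just as in the EXPLAIN output above, the combine step is cheap; the win comes from parallelizing the per-row work.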
    3 points
  18. Hello everyone! This is a quick Python script I wrote to batch download and preprocess Sentinel-1 images for a given time range. Sentinel images have very good resolution, which also means they are huge in size. Since I didn't want to waste all day preparing them for my research, I decided to write this code, which runs all night and gives a nice image set the following morning.

import os
import datetime
import gc
import glob

import snappy
from sentinelsat import SentinelAPI, geojson_to_wkt, read_geojson
from snappy import ProductIO


class sentinel1_download_preprocess():
    def __init__(self, input_dir, date_1, date_2, query_style, footprint,
                 lat=24.84, lon=90.43, download=False):
        self.input_dir = input_dir
        self.date_start = datetime.datetime.strptime(date_1, "%d%b%Y")
        self.date_end = datetime.datetime.strptime(date_2, "%d%b%Y")
        self.query_style = query_style
        self.footprint = geojson_to_wkt(read_geojson(footprint))
        self.lat = lat
        self.lon = lon
        self.download = download

        # configurations
        self.api = SentinelAPI('scihub_username', 'scihub_passwd',
                               'https://scihub.copernicus.eu/dhus')
        self.producttype = 'GRD'  # SLC, GRD, OCN
        self.orbitdirection = 'ASCENDING'  # ASCENDING, DESCENDING
        self.sensoroperationalmode = 'IW'  # SM, IW, EW, WV

    def sentinel1_download(self):
        global download_candidate
        if self.query_style == 'coordinate':
            download_candidate = self.api.query(
                'POINT({0} {1})'.format(self.lon, self.lat),
                date=(self.date_start, self.date_end),
                producttype=self.producttype,
                orbitdirection=self.orbitdirection,
                sensoroperationalmode=self.sensoroperationalmode)
        elif self.query_style == 'footprint':
            download_candidate = self.api.query(
                self.footprint,
                date=(self.date_start, self.date_end),
                producttype=self.producttype,
                orbitdirection=self.orbitdirection,
                sensoroperationalmode=self.sensoroperationalmode)
        else:
            print("Define query attribute")

        title_found_sum = 0
        for key, value in download_candidate.items():
            for k, v in value.items():
                if k == 'title':
                    title_info = v
                    title_found_sum += 1
                elif k == 'size':
                    print("title: " + title_info + " | " + v)
        print("Total found " + str(title_found_sum) + " titles of " +
              str(self.api.get_products_size(download_candidate)) + " GB")

        os.chdir(self.input_dir)
        if self.download:
            # skip the download if the zip files are already in the input directory
            if not glob.glob(self.input_dir + "*.zip"):
                self.api.download_all(download_candidate)
            else:
                print("Nothing to download")
        else:
            print("Escaping download")
        # proceed to processing after download is complete
        self.sentinel1_preprocess()

    def sentinel1_preprocess(self):
        # get snappy operators
        snappy.GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()
        # HashMap key-value pairs
        HashMap = snappy.jpy.get_type('java.util.HashMap')

        for folder in glob.glob(self.input_dir + "\\*"):
            gc.enable()
            if folder.endswith(".zip"):
                timestamp = folder.split("_")[5]
                sentinel_image = ProductIO.readProduct(folder)
                if self.date_start <= datetime.datetime.strptime(timestamp[:8], "%Y%m%d") <= self.date_end:
                    # add orbit file
                    self.sentinel1_preprocess_orbit_file(timestamp, sentinel_image, HashMap)
                    # remove border noise
                    self.sentinel1_preprocess_border_noise(timestamp, HashMap)
                    # remove thermal noise
                    self.sentinel1_preprocess_thermal_noise_removal(timestamp, HashMap)
                    # calibrate image to output to Sigma and dB
                    self.sentinel1_preprocess_calibration(timestamp, HashMap)
                    # TOPSAR deburst for SLC images
                    if self.producttype == 'SLC':
                        self.sentinel1_preprocess_topsar_deburst_SLC(timestamp, HashMap)
                    # multilook
                    self.sentinel1_preprocess_multilook(timestamp, HashMap)
                    # subset using a WKT of the study area
                    self.sentinel1_preprocess_subset(timestamp, HashMap)
                    # finally terrain correction, can use local data but went for the default
                    self.sentinel1_preprocess_terrain_correction(timestamp, HashMap)
                    # break  # try this if you want to check the result one by one

    def sentinel1_preprocess_orbit_file(self, timestamp, sentinel_image, HashMap):
        start_time_processing = datetime.datetime.now()
        orb = self.input_dir + "\\orb_" + timestamp
        if not os.path.isfile(orb + ".dim"):
            parameters = HashMap()
            orbit_param = snappy.GPF.createProduct("Apply-Orbit-File", parameters, sentinel_image)
            ProductIO.writeProduct(orbit_param, orb, 'BEAM-DIMAP')  # BEAM-DIMAP, GeoTIFF-BigTiff
            print("orbit file added: " + orb + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + orb)

    def sentinel1_preprocess_border_noise(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        border = self.input_dir + "\\bordr_" + timestamp
        if not os.path.isfile(border + ".dim"):
            parameters = HashMap()
            border_param = snappy.GPF.createProduct(
                "Remove-GRD-Border-Noise", parameters,
                ProductIO.readProduct(self.input_dir + "\\orb_" + timestamp + ".dim"))
            ProductIO.writeProduct(border_param, border, 'BEAM-DIMAP')
            print("border noise removed: " + border + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + border)

    def sentinel1_preprocess_thermal_noise_removal(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        thrm = self.input_dir + "\\thrm_" + timestamp
        if not os.path.isfile(thrm + ".dim"):
            parameters = HashMap()
            thrm_param = snappy.GPF.createProduct(
                "ThermalNoiseRemoval", parameters,
                ProductIO.readProduct(self.input_dir + "\\bordr_" + timestamp + ".dim"))
            ProductIO.writeProduct(thrm_param, thrm, 'BEAM-DIMAP')
            print("thermal noise removed: " + thrm + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + thrm)

    def sentinel1_preprocess_calibration(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        calib = self.input_dir + "\\calib_" + timestamp
        if not os.path.isfile(calib + ".dim"):
            parameters = HashMap()
            parameters.put('outputSigmaBand', True)
            parameters.put('outputImageScaleInDb', False)
            calib_param = snappy.GPF.createProduct(
                "Calibration", parameters,
                ProductIO.readProduct(self.input_dir + "\\thrm_" + timestamp + ".dim"))
            ProductIO.writeProduct(calib_param, calib, 'BEAM-DIMAP')
            print("calibration complete: " + calib + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + calib)

    def sentinel1_preprocess_topsar_deburst_SLC(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        deburst = self.input_dir + "\\dburs_" + timestamp
        if not os.path.isfile(deburst + ".dim"):
            parameters = HashMap()
            deburst_param = snappy.GPF.createProduct(
                "TOPSAR-Deburst", parameters,
                ProductIO.readProduct(self.input_dir + "\\calib_" + timestamp + ".dim"))
            ProductIO.writeProduct(deburst_param, deburst, 'BEAM-DIMAP')
            print("deburst complete: " + deburst + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + deburst)

    def sentinel1_preprocess_multilook(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        multi = self.input_dir + "\\multi_" + timestamp
        if not os.path.isfile(multi + ".dim"):
            parameters = HashMap()
            multi_param = snappy.GPF.createProduct(
                "Multilook", parameters,
                ProductIO.readProduct(self.input_dir + "\\calib_" + timestamp + ".dim"))
            ProductIO.writeProduct(multi_param, multi, 'BEAM-DIMAP')
            print("multilook complete: " + multi + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + multi)

    def sentinel1_preprocess_subset(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        subset = self.input_dir + "\\subset_" + timestamp
        if not os.path.isfile(subset + ".dim"):
            WKTReader = snappy.jpy.get_type('com.vividsolutions.jts.io.WKTReader')
            # converting a shapefile to GeoJSON and WKT is easy with any free online tool
            wkt = "POLYGON((92.330290184197 20.5906091141114,89.1246637610338 21.6316051481971," \
                  "89.0330319081811 21.7802436586492,88.0086282580443 24.6678836192818,88.0857830091018 " \
                  "25.9156771178278,88.1771488779853 26.1480664053835,88.3759125970998 26.5942658997298," \
                  "88.3876586919721 26.6120432770312,88.4105534167129 26.6345128356038,89.6787084683935 " \
                  "26.2383305017275,92.348481691233 25.073636976939,92.4252199249342 25.0296592837972," \
                  "92.487261172615 24.9472465376954,92.4967290851295 24.902213855393,92.6799861774377 " \
                  "21.2972058618174,92.6799346581579 21.2853347419811,92.330290184197 20.5906091141114))"
            geom = WKTReader().read(wkt)
            parameters = HashMap()
            parameters.put('geoRegion', geom)
            subset_param = snappy.GPF.createProduct(
                "Subset", parameters,
                ProductIO.readProduct(self.input_dir + "\\multi_" + timestamp + ".dim"))
            ProductIO.writeProduct(subset_param, subset, 'BEAM-DIMAP')
            print("subset complete: " + subset + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + subset)

    def sentinel1_preprocess_terrain_correction(self, timestamp, HashMap):
        start_time_processing = datetime.datetime.now()
        terr = self.input_dir + "\\terr_" + timestamp
        if not os.path.isfile(terr + ".dim"):
            parameters = HashMap()
            # parameters.put('demResamplingMethod', 'NEAREST_NEIGHBOUR')
            # parameters.put('imgResamplingMethod', 'NEAREST_NEIGHBOUR')
            # parameters.put('pixelSpacingInMeter', 10.0)
            terr_param = snappy.GPF.createProduct(
                "Terrain-Correction", parameters,
                ProductIO.readProduct(self.input_dir + "\\subset_" + timestamp + ".dim"))
            ProductIO.writeProduct(terr_param, terr, 'BEAM-DIMAP')
            print("terrain corrected: " + terr + " | took: " +
                  str(datetime.datetime.now() - start_time_processing).split('.', 2)[0])
        else:
            print("file exists - " + terr)


input_dir = "path_to_project_folder\\Sentinel_1"
start_date = '01Mar2019'
end_date = '10Mar2019'
query_style = 'footprint'  # 'footprint' to use a GeoJSON, 'coordinate' to use a lat-lon
footprint = 'path_to_project_folder\\bd_bbox.geojson'
lat = 26.23
lon = 88.56

# proceed to download by setting the last argument to True, default is False
sar = sentinel1_download_preprocess(input_dir, start_date, end_date, query_style,
                                    footprint, lat, lon, True)
sar.sentinel1_download()

The GeoJSON file is created from a very generalised shapefile of Bangladesh using ArcGIS Pro; there are plenty of free online tools to convert a shapefile to GeoJSON and WKT. Notice that the code will skip the download if the files are already there but will keep the processing on, so comment out the self.sentinel1_preprocess() call at the end of sentinel1_download when necessary. Updated the code almost completely. The preprocessing steps for raw Sentinel-1 files used here are not the only way to do it; there is no single authoritative chain, and since different research requires different steps to prepare raw data, you will need to follow your own. Also published at clubgis.
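The batch loop keys everything off the acquisition timestamp embedded in the Sentinel-1 product name (the folder.split("_")[5] and strptime calls). As a minimal stdlib sketch, using a hypothetical but conventionally named GRD product, the same parsing looks like this:

```python
import datetime

def acquisition_date(product_name):
    # 'product_name' follows the standard Sentinel-1 naming convention;
    # field 4 holds the acquisition start time. The script above uses
    # index 5 because it splits the full path, and the "Sentinel_1"
    # folder name contributes one extra underscore.
    timestamp = product_name.split("_")[4]  # e.g. '20190305T000325'
    return datetime.datetime.strptime(timestamp[:8], "%Y%m%d")

# hypothetical product file name for illustration
name = "S1A_IW_GRDH_1SDV_20190305T000325_20190305T000350_026121_02EA96_5F71.zip"
print(acquisition_date(name).date())  # 2019-03-05
```

Splitting on the path rather than the basename is fragile if a directory name contains underscores, which is worth keeping in mind when adapting the script.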
    3 points
  19. News release April 1, 2019, Saint-Hubert, Quebec – The Canadian Space Agency and the Canada Centre for Mapping and Earth Observation are making RADARSAT-1 synthetic aperture radar images of Earth available to researchers, industry and the public at no cost. The 36,500 images are available through the Government of Canada's Earth Observation Data Management System. The RADARSAT-1 dataset is valuable for testing and developing techniques to reveal patterns, trends and associations that researchers may have missed when RADARSAT-1 was in operation. Access to these images will allow Canadians to make comparisons over time, for example, of sea ice cover, forest growth or deforestation, seasonal changes and the effects of climate change, particularly in Canada's North. This image release initiative is part of Canada's Open Government efforts to encourage novel Big Data Analytic and Data Mining activities by users. Canada's new Space Strategy places priority on acquiring and using space-based data to support science excellence, innovation and economic growth. Quick facts The RADARSAT Constellation Mission, scheduled for launch in May 2019, builds on the legacy of RADARSAT-1 and RADARSAT-2, and on Canada's expertise and leadership in Earth observation from space. RADARSAT-1 launched in November 1995. It operated for 17 years, well over its five-year life expectancy, during which it orbited Earth 90,828 times, travelling over 2 billion kilometres. It was Canada's first Earth observation satellite. RADARSAT-1 images supported relief operations in 244 disaster events. RADARSAT-2 launched in December 2007 and is still operational today. This project represents a unique collaboration between government and industry. MDA, a Maxar company, owns and operates the satellite and ground segment. The Canadian Space Agency helped to fund the construction and launch of the satellite. 
It recovers this investment through the supply of RADARSAT-2 data to the Government of Canada during the lifetime of the mission. Users can download these images through the Earth Observation Data Management System of the Canada Centre for Mapping and Earth Observation, a division of Natural Resources Canada (NRCan). NRCan is responsible for the long-term archiving and distribution of the images as well as downlinking of satellite data at its ground stations. source: https://www.canada.ca/en/space-agency/news/2019/03/open-data-over-36000-historical-radarsat-1-satellite-images-of-the-earth-now-available-to-the-public.html
    3 points
  24. ArcGIS Excalibur 1.0 is a premium web application for ArcGIS Enterprise 10.7 that provides users with tools and capabilities in a project-based environment that streamlines image analysis and structured observation management. Interested in working with imagery in a modern, web-based experience? Here's a look at some of the features ArcGIS Excalibur 1.0 has to offer: Search for Imagery ArcGIS Excalibur makes it easy to search and discover imagery available to you within your organization through a number of experiences. You can connect directly to an imagery layer, an image service URL, or even through the imagery catalog search. The imagery catalog search allows you to quickly search for imagery layers over areas of interest to discover and queue images for further use. Work with imagery Once you have located the imagery of interest, you can easily connect to the imagery exploitation canvas, where you can utilize a wide variety of tools to begin working with your imagery. The imagery exploitation canvas allows you to view your imagery on top of a default basemap, where the imagery is automatically orthorectified and aligned with the map. The exploitation canvas also enables you to simultaneously view the same image in a more focused manner, as it was captured in its native perspective. Display Tools Optimizing imagery to get the most value out of each image pixel is a breeze with ArcGIS Excalibur display tools. The image display tools include image renderers, filters, the ability to change band combinations, and even apply settings like DRA and gamma. Settings to change image transparency and compression are also included. Exploitation Tools Ever need to highlight key areas of interest through mark-up, labeling, and measurement? Through the mark-up tools, you can create simple graphics on top of your imagery using text and shape elements to call attention to areas of interest through outline, fill, transparency, and much more. 
The measurements tool allows you to measure horizontal and vertical distances, areas, and feature locations on an image. Export Tools The exploitation results saved in an image project can be easily shared using the export tools. The create presentation tool exports your current view directly to a Microsoft PowerPoint presentation, along with the metadata of the imagery. Introducing an Imagery Project ArcGIS Excalibur also introduces the concept of an imagery project to help streamline imagery workflows by leveraging the ArcGIS platform. An ArcGIS Excalibur imagery project is a dynamic way to organize resources, tools, and workflows required to complete an image-based task. An imagery project can contain geospatial reference layers and a set of tools for a focused image analysis and structured observation management workflows. Content created within imagery projects can be shared and made available to your organization to leverage in downstream analysis and shared information products.
    3 points
  21. Details the geological and geophysical aspects of groundwater treatment. Discusses regulatory legislation regarding groundwater utilization. Serves as reference material for scientists in geology, geophysics and environmental studies.
    3 points
  22. Qualcomm Technologies and Xiaomi have verified meter-level positioning in the Xiaomi 12T Pro powered by the Snapdragon 8+ Gen 1 mobile platform, in Germany. Accuracy verification tests, including driving tests, were conducted by Qualcomm Technologies, Xiaomi, and Trimble in various scenarios such as open-sky rural roads and urban highways. The companies’ solutions demonstrated meter-level positioning variance at a 95% confidence level. This level of accuracy in a commercial smartphone is enabled through Qualcomm meter-level positioning for mobile in combination with Trimble RTX correction services. When integrated with Snapdragon mobile platforms, Trimble RTX enhances the phone’s positioning capabilities. Meter-level positioning accuracy can improve smartphone user experience in several scenarios, including mapping, driving, and other mobile applications. It enables greater accuracy when using ridesharing applications to identify pick-up locations for both driver and rider, fitness applications to track users’ movements, and in-vehicle real-time navigation applications for increased lane-level accuracy with greater map details and more accurate directions.
    2 points
  23. The images above, released by the James Webb Space Telescope (JWST) team, aren't officially 'first light' images from the new telescope, but in a way, it feels like they are. These stunning views provide the first indications of just how powerful JWST will be, and just how much infrared astronomy is about to improve. The images were released following the completion of the long process of fully focusing the telescope's mirror segments. Engineers are saying JWST's optical performance is "better than the most optimistic predictions," and astronomers are beside themselves with excitement. The astronomers and engineers seem genuinely astounded by how good JWST's resolution is turning out to be. The first official images from JWST will be released on July 12. https://scitechdaily.com/comparing-the-incredible-webb-space-telescope-images-to-other-infrared-observatories/
    2 points
  24. The all-virtual Esri User Conference 2021 just dropped the curtain after a four-day event. Here's what's new. Everything new explained by Jack Dangermond. ArcGIS Image is software for remote sensing in the cloud. ArcGIS Velocity gets real-time data visualization maps. ArcGIS Enterprise installation using Kubernetes. More experiments with field survey. 😑 More integrated BIM for ArcGIS. Maps SDK for game developers. Cool presentation though! What's new in ArcGIS Online. What's new in ArcGIS Pro. Moreover, ArcGIS Desktop will be supported until 2026, and Pro 2.6 is due Q2 next year. AI, or GeoAI, will be more ubiquitous, as will 3D mapping and sensor-based real-time data processing. I am hoping that ArcGIS Online credit costs come down and become easier to purchase.
    2 points
  25. Six years ago, we compared ArcGIS vs QGIS. The response was incredible and we thank you for that. But since then, the game has changed. Yet, the players are still the same. The Omen of Open Source GIS is back with QGIS 3. It’s up against the Pioneer of Proprietary GIS, ArcGIS Pro. Buckle up. Because today, you’re going to witness a head-to-head battle between the juggernauts of GIS software. Pick your poison. Table of Contents 1. 3D 2. Interface 3. Coordinate Systems 4. Catalog 5. Editing 6. Vector Analysis 7. Remote Sensing 8. Speed 9. Tables 10. Statistics 11. Raster Analysis 12. Networks 13. ETL 14. Scripting 15. Labeling 16. Map Automation 17. Animation 18. Map Types 19. Topology 20. Interoperability 21. Geocoding 22. Symbology 23. LiDAR 24. Map Elements 25. Metadata 26. Database 27. Web Maps 28. Errors 29. Cost 30. Extras 31. Imagery 32. File Structure 33. Community 34. Emerging Tech 35. Documentation https://gisgeography.com/arcgis-pro-vs-qgis-3/
    2 points
  26. NGS has developed a new beta tool for obtaining geodetic information about a passive mark in their database. This column will highlight some features (available as of Oct. 5, 2020) that may be of interest to GNSS users. It provides all of the information about a station in a more user-friendly format. The box titled "Passive Mark Lookup Tool" is an example of the webtool. The tool provides a lot of information, so I have separated its output into several boxes titled "Passive Mark Lookup Tool — A through D." I will highlight several attributes that I believe will be very useful to users, especially users of leveling-derived and GNSS-derived orthometric heights. I've highlighted several attributes in the box titled "Passive Mark Lookup Tool — A" that are important to users, such as published coordinates, their datum and source, the Geoid18 value, GNSS Useable, and the date of last recovery. All of these values are available on an NGS datasheet but, in my opinion, this presents the information in a more user-friendly format. One calculation that the user can easily compute for marks that have been leveled to and occupied by GNSS equipment is the difference between the published leveling-derived orthometric height and the computed GNSS-derived orthometric height. This may indicate that the mark has moved since the last time it was leveled to, or that its height coordinate has been readjusted since the creation of the published geoid model. The table below provides the calculation using the data from the box titled "Passive Mark Lookup Tool — A." The calculation [H_GNSS = h_GNSS - N_Geoid18; Difference = H_GNSS - H_NAVD88] has been described in several of my previous columns. In this example, the difference between the GNSS-derived orthometric height and the published NAVD 88 height is 6.1 cm. 
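The height relationship in that calculation is simple enough to write out in a few lines of Python. The numbers below are hypothetical, not the values for any particular mark; only the formula H = h - N (orthometric height = ellipsoidal height - geoid undulation) comes from the column:

```python
def gnss_orthometric_height(h_gnss, n_geoid18):
    # H_GNSS = h_GNSS - N_Geoid18
    return h_gnss - n_geoid18

# hypothetical values, in meters
h_gnss = 10.000      # GNSS-derived ellipsoidal height
n_geoid18 = -24.500  # Geoid18 geoid undulation
h_navd88 = 34.439    # published leveling-derived NAVD 88 height

h_computed = gnss_orthometric_height(h_gnss, n_geoid18)  # 34.500
difference = h_computed - h_navd88                       # 0.061 m = 6.1 cm
print(round(difference * 100, 1), "cm")
```

A difference of several centimeters, as in the column's example, is what prompts the "did the station move?" question below.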
NGS is looking for comments on this beta webtool, so if users would like this computation added to the tool, they should send a comment to NGS using the link provided on the site. (This is a beta product. NGS is interested in your feedback concerning its function and usability, as well as how users would like to interact with NGS datasheet information in the future. Email us at [email protected].) So, the user should ask the question: did the station move since the last time it was leveled? Another attribute that would be nice to have in this tool is whether the station was used to create the hybrid geoid model. As of Oct. 5, 2020, users have to go to the Geoid18 webpage to get that information. The Excel file and shapefiles indicate whether a station was used to create the Geoid18 model. In the case of this example, KK1531, CHAMBERS, the mark was not used in the creation of Geoid18, so NGS felt that the station may have moved and/or its GPS on Bench Mark residual was large relative to its neighbors. See NGS's technical report on Geoid18 for more information on the creation of Geoid18. The GPS on Bench Mark residual analysis was described in several of my previous columns (see "The differences between Geoid18 values and NAD 83, NAVD 88 values" and "NGS 2018 GPS on BMs program in support of NAPGD2022 — Part 6" for examples). The webtool provides a map depicting the location of the station, photos (if available), and previously published, superseded values for the mark. See the box titled "Passive Mark Lookup Tool — B." https://www.gpsworld.com/wp-content/uploads/2020/10/zilkoski-beta-tool-column-image-2.jpg source: https://www.gpsworld.com/ngs-releases-beta-tool-for-obtaining-geodetic-information/
    2 points
  27. Earth is known as the "Blue Planet" due to the vast bodies of water that cover its surface. With over 70% of our planet's surface covered by water, the ocean depths offer basins with an abundance of features, such as underwater plateaus, valleys, mountains and trenches. The average depth of the oceans and seas surrounding the continents is around 3,500 meters, and parts deeper than 200 meters are called the "deep sea". This visualization reveals Earth's rich bathymetry by featuring the ETOPO1 1-Arc-Minute Global Relief Model. ETOPO1 integrates land topography and ocean bathymetry and provides complete global coverage from -90° to 90° in latitude and -180° to 180° in longitude. The visualization simulates an incremental drop of 10 meters in the water level on Earth's surface. As time progresses and the oceans drain, it becomes evident that underwater mountain ranges are bigger and trenches are deeper than those on dry land. While water drains quickly close to the continents, it drains slowly in our planet's deepest trenches. These trenches start to become apparent below 5,000 meters, once the majority of the oceans have been drained. In the Atlantic Ocean, two trenches stand out: in the southern hemisphere, the South Sandwich Trench between South America and Antarctica, and in the northern hemisphere, the Puerto Rico Trench in the eastern Caribbean, the ocean's deepest part. The majority of the world's deepest trenches, though, are located in the Pacific Ocean. In the southern hemisphere, the Peru-Chile (or Atacama) Trench lies off the coast of Peru, and the Tonga Trench sits in the south-west Pacific between New Zealand and Tonga. In the northern hemisphere, the Philippine Trench is located east of the Philippines, and in the northwest Pacific we can see a range of trenches starting from the north, such as the Kuril-Kamchatka, and moving south all the way to the Mariana Trench, which drains last. 
It is worth recalling that the altitude values of ETOPO1 range between 8,333 meters (topography) and -10,833 meters (bathymetry). This range reflects the limitations of the visualization, since Challenger Deep, Earth's deepest point located in the Mariana Trench, has been measured at a maximum depth of 10,910 meters, and Mount Everest, the highest peak above mean sea level, stands at 8,848 meters. In this visualization, the ETOPO1 relief model, vertically exaggerated by 60x, uses a gray-brown divergent colormap to separate bathymetry from topography. The bathymetry is mapped to brownish hues (tan/shallow to brown/deep) and the dry land to grays (dark gray/low to white/high). A natural consequence of this mapping is that areas of the highest altitude are mapped to whitish hues, as they are almost always covered in snow. Furthermore, to help the viewer's eyes detect surface details that would otherwise go unnoticed, the topography and bathymetry have been rendered with ambient occlusion, a shadowing technique that in this visualization darkens features and regions with changes in altitude, such as mountains, ocean crevices and trenches. download: https://svs.gsfc.nasa.gov/vis/a000000/a004800/a004823/OceanDrain_Colorbar_1920x1080_30fps.mp4 https://svs.gsfc.nasa.gov/vis/a000000/a004800/a004823/OceanDrain_1920x1080_30fps.mp4 source: https://svs.gsfc.nasa.gov/4823
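A divergent colormap like the one described splits the value range at a midpoint (here, sea level) and normalizes each side on its own ramp. This toy sketch uses the ETOPO1 limits quoted above; it is an illustration of the mapping idea, not NASA's actual rendering code:

```python
ETOPO1_MIN, ETOPO1_MAX = -10833.0, 8333.0  # ETOPO1 altitude range, meters

def divergent_shade(altitude_m):
    """Map an altitude to (ramp, side): ramp is 0..1 within its own half of
    the divergent colormap; side names the ramp being used."""
    if altitude_m < 0:
        # bathymetry: 0 at sea level (tan/shallow) -> 1 at the deepest point (brown)
        return (altitude_m / ETOPO1_MIN, "bathymetry")
    # topography: 0 at sea level (dark gray) -> 1 at the highest point (white)
    return (altitude_m / ETOPO1_MAX, "topography")

print(divergent_shade(-3500))  # average ocean depth, about a third down the brown ramp
print(divergent_shade(8333))   # highest cell maps to the white end of the gray ramp
```

Normalizing each side separately is what keeps sea level as the visual pivot even though the two halves of the range have different magnitudes.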
    2 points
  28. Hello everyone, I'm a long-time user of ArcMap, and two years ago I installed ArcGIS Pro on my computer... now I don't know about you, but I've been postponing the real switch between versions for years and I'm still on ArcMap... this is because it seems to me that even though expectations were very high, ArcGIS Pro still doesn't deliver that much-touted performance. In particular, I'm concerned by the fact that for every little thing it starts a geoprocessing job that lasts anywhere from seconds to minutes, and every function sits in a maze of menus and submenus where, in the end, I don't know where anything is anymore. And I don't know about you, but it seems to me that "Pro" is just a marketing program to convince even people who are not professionals to make maps, even if they don't understand technically what they did.
    2 points
  29. Tracing is better in ArcGIS Pro, but Cut Polygon cannot be done in ArcGIS Pro, only in ArcMap Desktop
    2 points
  30. What's really frustrating is that Esri still has its "standard response" every time I try to address an issue: - "Do you have the latest version?" (and of course in a big company you simply don't get the very latest patch one minute after release > so this will be, for them, solution nr. one > and of course it will never change anything) .. then... the following question will be "Can you send me a copy of your system configuration?" (and of course they will find a way to say that your hardware is "old" even if it complies with all the requirements) - "I'm pretty sure you have an installation issue.. please reinstall" (so you get to have another problem with your IT department) VERY FRUSTRATING I've been in business for a while, and every time those guys come along with "the best solution for you" an alarm bell goes off.. * it will be a 64-bit solution (remember that bull****?), so better than ArcMap - and as always it doesn't make the slightest difference.. * it's 2D and 3D in one single piece of software (but did they mention that you will need some extra licences? not to me). so basically.. I LOVE QGIS ! 😍😍😍
    2 points
  31. Until they fix the performance issues, I don't see any advantage in ArcGIS Pro.... I can still use ArcGIS Desktop for the daily tasks; no need to use the fancy latest product
    2 points
  32. A joint NASA-USGS initiative has created the first worldwide map of the causes of change in mangrove habitats between 2000 and 2016. Mangrove trees can be found growing in the salty mud along the Earth’s tropical and subtropical coastlines. Mangroves are vital to aquatic ecosystems due to their ability to prevent soil erosion and store carbon. Mangroves also provide critical habitat to multiple marine species such as algae, barnacles, oysters, sponges, shrimp, crabs, and lobsters. Mangroves are threatened by both human and natural causes. Human activities in the form of farming and aquaculture and natural stressors such as erosion and extreme weather have both driven mangrove habitat loss. The joint study analyzed over one million Landsat images captured between 2000 and 2016 to create the first-ever global map visualizing the drivers of mangrove loss. Causes of mangrove loss were mapped at a resolution of 30 meters. Researchers found that 62% of mangrove loss during the time period studied was due to land use changes, mostly from conversion to aquaculture and agriculture. Roughly 80% of the loss was concentrated in six Southeast Asian nations: Indonesia, Myanmar, Malaysia, the Philippines, Thailand, and Vietnam. Mangrove loss due to human activities declined by 73% between 2000 and 2016. Mangrove loss due to natural events also decreased, but at a lesser rate than loss from human-led activities. Map and graphs showing global distribution of mangrove loss and its drivers. From the study: “(a) The longitudinal distribution of total mangrove loss and the relative contribution of its primary drivers. Different colors represent unique drivers of mangrove loss. (b) The latitudinal distribution of total mangrove loss and the relative contribution of its primary drivers. 
(c‐g) Global distribution of mangrove loss and associated drivers from 2000 to 2016 at 1°×1° resolution, with the relative contribution (percentage) of primary drivers per continent: (c) North America, (d) South America, (e) Africa, (f) Asia, (g) Australia together with Oceania.” links: https://www.mangrovelossdrivers.app/
    2 points
  33. Scientists have discovered new evidence for active volcanism next door to some of the most densely populated areas of Europe. The study crowdsourced GPS monitoring data from antennae across western Europe to track subtle movements in the Earth’s surface, thought to be caused by a rising subsurface mantle plume. The Eifel region lies roughly between the cities of Aachen, Trier and Koblenz, in west-central Germany. It is home to many ancient volcanic features, including the circular lakes known as maars. Maars are the remnants of violent volcanic eruptions, such as the one that created Laacher See, the largest lake in the area. The explosion that created the lake is thought to have occurred around 13,000 years ago. The mantle plume that fed this ancient activity is thought to still be present, extending up to 400 kilometers (km) into the earth. However, whether or not it is still active is unknown. “Most scientists had assumed that volcanic activity in the Eifel was a thing of the past,” said Corné Kreemer, lead author of the new study. “But connecting the dots, it seems clear that something is brewing underneath the heart of northwest Europe.” In the new study, the team — based at the University of Nevada, Reno and the University of California, Los Angeles — used data from thousands of commercial and state-owned GPS stations all over western Europe. The research revealed that the region’s land surface is moving upward and outward over a large area centered on the Eifel, and including Luxembourg, eastern Belgium and the southernmost province of the Netherlands, Limburg. “The Eifel area is the only region in the study where the ground motion appeared significantly greater than expected,” said Kreemer. 
“The results indicate that a rising plume could explain the observed patterns and rate of ground movement.” The new results complement those of a previous study in Geophysical Journal International that found seismic evidence of magma moving underneath the Laacher See. Both studies point towards the Eifel being an active volcanic system. The implication of this study is that there may not only be an increased volcanic risk, but also a long-term seismic risk in this part of Europe. The researchers urge caution, however. “This does not mean that an explosion or earthquake is imminent, or even possible again in this area. We and other scientists plan to continue monitoring the area using a variety of geophysical and geochemical techniques, to better understand and quantify any potential risks.” source: https://doi.org/10.1093/gji/ggaa227
    2 points
  34. Hi Everyone, July 13–16, 2020 | The world’s largest, virtual GIS event (FREE this year) The 2020 Esri User Conference (Esri UC) is a completely virtual event designed to give users and students an interactive, online experience with Esri and the GIS community. Participate in sessions and view presentations that offer geospatial solutions, browse the online Map Gallery, watch the Plenary Session, and much more. Registration here : https://www.esri.com/en-us/about/events/uc/overview Enjoy
    2 points
  35. Stop me if you’ve heard this before. DJI has introduced its latest enterprise powerhouse drone, the DJI Matrice 300 RTK. We learned a lot about the drone earlier this week due to a few huge leaks of specs, features, photos, and videos. But it’s worth looking at the drone again now that it’s official – complete with an incredible intro video. Also called the M300 RTK, this drone is an upgrade in every way over its predecessor, the M200 V2. That includes a very long flight time of 55 minutes, six-direction obstacle avoidance, and a doubled payload capacity (6 pounds). That allows it to carry a range of powerful cameras, which we’ll get to in a bit. The drone is also built for weather extremes. IP45 weather sealing keeps out rain and dust. And a self-heating battery helps the drone run in a broad range of temperatures, from -4 to 122 Fahrenheit. The DJI Matrice 300 RTK can fly up to 15 kilometers (9.3 miles) from its controller and still stream 1080p video back home. That video and other data can be protected using AES-256 encryption. The drone can also be flown by two co-pilots, with one able to take over for the other if a problem arises or in a handoff scenario. A workhorse inspection drone All these capabilities are targeted at the DJI Matrice 300 RTK’s purpose as a drone for heavy-duty visual inspection and data collection work, such as surveys of power lines or railways. In fact, it incorporates many advanced camera features for the purpose. Smart inspection is a new set of features to optimize data collection. It includes live mission recording, which allows the drone to record every aspect of a flight, even camera settings. This allows workers to train a drone on an inspection mission that it will repeat again and again. With AI spot check, operators can mark the specific part of the photo, such as a transformer, that is the subject of inspection. 
AI algorithms compare that to what the camera sees on a future flight, so that it can frame the subject identically on every flight. An inspection drone is only as good as its cameras, and the M300 RTK offers some powerful options from DJI’s Zenmuse H20 series. The first option is a triple-camera setup. It includes a 20-megapixel, 23x zoom camera; a 12MP wide-angle camera; and a laser rangefinder that measures out to 1,200 meters (3,937 feet). The second option adds a radiometric thermal camera. To make things simpler for operators, the drone provides a one-click capture feature that grabs videos or photos from three cameras at once, without requiring the operator to switch back and forth. Eyes and ears ready for danger With its flight time and range, the DJI Matrice 300 RTK could be flying some long, complex missions, easily beyond visual line of sight (if its owner gets an FAA Part 107 waiver for that). This requires some solid safety measures. While the M200 V2 has front-mounted sensors, the M300 RTK has sensors in six directions for a full view of the surroundings. The sensors can register obstacles up to 40 meters (98 feet) away. Like all new DJI drones, the M300 RTK also features the company’s AirSense technology. An ADS-B receiver picks up signals from manned aircraft that are nearby and alerts the drone pilot to their location. It’s been quite a few weeks for DJI. On April 27, it debuted its most compelling consumer drone yet, the Mavic Air 2. Now it’s showing off its latest achievement at the other end of the drone spectrum with the industrial-grade Matrice 300 RTK. These two very different drones help illustrate the depth of product that comes from the world’s biggest drone maker. And the company shows no signs of slowing down, despite the COVID-19 economic crisis. Next up, we suspect, will be a revision to its semi-pro quadcopter line in the form of a Mavic 3. It is available at DJI. source: https://dronedj.com/2020/05/07/dji-matrice-300-rtk-drone-official/
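For metric-minded readers, the quoted -4 to 122 Fahrenheit operating range works out to -20 to 50 Celsius. A trivial conversion sketch:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# The M300 RTK's stated operating range of -4 to 122 F
low_c, high_c = f_to_c(-4), f_to_c(122)  # -20.0 C and 50.0 C
```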
    2 points
  36. @intertronic, thanks for your input. I found a solution that suits my case better, given that we are using both versions of QGIS and also because I was looking for interoperability. Therefore I have decided to use QSphere. Most probably not well known around the globe. https://qgis.projets.developpement-durable.gouv.fr/projects/qsphere The GUI is quite ugly, but at least it does the job. 😉 darksabersan
    2 points
  37. I like drones but just got more interested in this,
    2 points
  38. January 3, 2020 - Recent Landsat 8 Safehold Update On December 19, 2019 at approximately 12:23 UTC, Landsat 8 experienced a spacecraft constraint which triggered entry into a Safehold. The Landsat 8 Flight Operations Team recovered the satellite from the event on December 20, 2019 (DOY 354). The spacecraft resumed nominal on-orbit operations and ground station processing on December 22, 2019 (DOY 356). Data acquired between December 22, 2019 (DOY 356) and December 31, 2019 (DOY 365) exhibit some increased radiometric striping and minor geometric distortions (see image below) in addition to the normal Operational Land Imager/Thermal Infrared Sensor (OLI/TIRS) alignment offset apparent in Real-Time tier data. Acquisitions after December 31, 2019 (DOY 365) are consistent with pre-Safehold Real-Time tier data and are suitable for remote sensing use where applicable. All acquisitions after December 22, 2019 (DOY 356) will be reprocessed to meet typical Landsat data quality standards after the next TIRS Scene Select Mirror (SSM) calibration event, scheduled for January 11, 2020. Landsat 8 Operational Land Imager acquisition on December 22, 2019 (path 148/row 044) after the spacecraft resumed nominal on-orbit operations and ground station processing. This acquisition demonstrates increased radiometric striping and minor geometric distortions observed in all data acquired between December 22, 2019 and December 31, 2019. All acquisitions after December 22, 2019 will be reprocessed on January 11, 2020 to achieve typical Landsat data quality standards. Data not acquired during the Safehold event are listed below and displayed in purple on the map (click to enlarge). 
Map displaying Landsat 8 scenes not acquired from Dec 19-22, 2019:
Path 207 Rows 160-161
Path 223 Rows 60-178
Path 6 Rows 22-122
Path 22 Rows 18-122
Path 38 Rows 18-122
Path 54 Rows 18-214
Path 70 Rows 18-120
Path 86 Rows 24-110
Path 102 Rows 19-122
Path 118 Rows 18-185
Path 134 Rows 18-133
Path 150 Rows 18-133
Path 166 Rows 18-222
Path 182 Rows 18-131
Path 198 Rows 18-122
Path 214 Rows 34-122
Path 230 Rows 54-179
Path 13 Rows 18-122
Path 29 Rows 20-232
Path 45 Rows 18-133
After recovering from the Safehold successfully, data acquired on December 20, 2019 (DOY 354) and from most of the day on December 21, 2019 (DOY 355) were ingested into the USGS Landsat Archive and marked as "Engineering". These data are still being assessed to determine if they will be made available for download to users through all USGS Landsat data portals. source: https://www.usgs.gov/land-resources/nli/landsat/january-3-2020-recent-landsat-8-safehold-update
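For anyone screening an archive in bulk, the gap list above can be encoded as a simple lookup table. This is my own illustrative sketch (not a USGS tool), with the path/row ranges transcribed from the post:

```python
# WRS-2 path -> (first_row, last_row) ranges missed during the Safehold,
# transcribed from the list above.
missed = {
    207: (160, 161), 223: (60, 178), 6: (22, 122), 22: (18, 122),
    38: (18, 122), 54: (18, 214), 70: (18, 120), 86: (24, 110),
    102: (19, 122), 118: (18, 185), 134: (18, 133), 150: (18, 133),
    166: (18, 222), 182: (18, 131), 198: (18, 122), 214: (34, 122),
    230: (54, 179), 13: (18, 122), 29: (20, 232), 45: (18, 133),
}

def was_missed(path, row):
    """True if a given WRS-2 path/row falls inside a Safehold acquisition gap."""
    rng = missed.get(path)
    return rng is not None and rng[0] <= row <= rng[1]
```

For example, the December 22 acquisition shown in the post (path 148, row 044) falls outside every gap, so `was_missed(148, 44)` is False.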
    2 points
  39. Just found this interesting article on the Agisoft forum : source: https://www.agisoft.com/forum/index.php?topic=7851.0
    2 points
  40. This is an interesting topic from not quite an old webpage. I was searching for some use case of blockchain in geospatial context and found this. The contexts still challenging, but very noteworthy. What is a blockchain and how is it relevant for geospatial applications? (By Jonas Ellehauge, awesome map tools, Norway) A blockchain is an immutable trustless registry of entries, hosted on an open distributed network of computers (called nodes). It is potentially safer and cheaper than traditional centralised databases, is resilient to attacks, enhances transparency and accountability and puts people in control of their own data. Blockchain technology is already being used in some geospatial applications, as explained here. As an immutable registry for transactions of digital tokens, blockchain is suitable for geospatial applications involving data that is sensitive or a public good, autonomous devices and smart contracts. Use Cases The use cases are discussed further below. I have given a few short talks about this topic at various conferences, most recently at the international FOSS4G conference in Bonn, Germany, 2016. Public-good data Open Data is Still Centralised Data Over the past two decades, I have seen how ‘public-good’ geospatial data has generally become much easier to get hold of, having originally been very inaccessible to most people. Gradually, the software to display and process the data became cheaper or even free, but the data itself – data that people had already paid for through their taxes – remained inaccessible. Some national mapping institutions and cadastres began distributing the data via the internet, although mostly with a price tag. Only in recent years have a few countries in Europe made public map data freely accessible. In the meantime, projects like OpenStreetMap have emerged in order to meet people’s need for open data. 
It is hardly a surprise, then, that a myriad of new apps, mock-ups and business cases emerge in a region shortly after data is made available to the public there. Truly Public Open Data One of the reasons that this data has remained inaccessible for so long is that it is collected and distributed through a centralised organisation. A small group of people manage enormous repositories of geospatial data and can restrict or grant access to it. As I see it, this is where blockchain and related technologies like IPFS can enable people to build systems where the data is inherently public, no one controls it, anyone can access it, and anyone can review the full history of contributions to the data. Would it be free of charge to use data from such a system? Who would pay for it? I guess time will tell which business model is the most sustainable in that respect. OpenStreetMap is free to use, it is immensely popular and yet people gladly contribute to it – so who pays the cost for OSM? Bear in mind that there’s no such thing as ‘free data’. For example, the ‘free’ open data in Denmark today is paid for through taxes. So, even if it would cost a little to use the blockchain-based data, that wouldn’t be so different from now – just that no one would be able to restrict access to the data, plus the open nature of competing nodes and contributors will minimise the costs. Autonomous Devices & Apps Uber and Airbnb are examples of consumer applications that rely on geospatial data and processing. They represent a centralised approach where the middleman owns and controls the data and charges a significant fee for connecting clients and providers with each other. If such apps were replaced by distributed peer-to-peer systems, they could be cheaper and give their users full control of their data. There is already such an alternative to Uber called Arcade.City. A peer-to-peer market app like OpenBazar may also benefit from geospatial components with regards to e.g. 
search and logistics. Such autonomous apps may currently have to rely on third parties for their geospatial components – e.g. Google Maps, Mapbox, OpenStreetMap, etc. With access to truly publicly distributed data as described above, such apps would be even more reliable and cheaper to run. An autonomous device such as a drone or a self-driving car inherently runs an autonomous application, so these two concepts are heavily intertwined. There’s no doubt that self-navigating cars and drones will be a growing market in the near future. Uber and Tesla have big ambitions regarding cars, drones are being designed for delivery of consumer products (Amazon), and drone-based emergency response (drone defibrillator) and imaging (automatic selfie drone ‘Lily’) applications are emerging. Again, distributed peer-to-peer apps could cut out the middleman and reliance on third parties for their navigation and other geospatial components. Land Ownership What is Property? After some years in the GIS software industry, I realised that a very large part of my work revolved around cadastres/parcels and other administrative borders plus technical base maps featuring roads, buildings, etc. In view of my background in physical geography I thought that was pretty boring stuff and I dreamt about creating maps and applications that involved temperatures, wind, currents, salinity, terrain models, etc., because it felt more ‘real’. I gradually realised that something about administrative data was nagging me – as if it didn’t actually represent reality. Lately, I have taken an interest in philosophy about human interaction, voluntary association and self-ownership. It turns out that property is a moral, philosophical concept of assets acquired through voluntary transactions or homesteading. This perspective stretches at least as far back as John Locke in the 17th century. 
Such justly acquired property is reality, whereas law, governance services and computer code are systems that attempt to model reality. When such systems don’t fit reality, the system is wrong and should be dismissed, possibly adjusted or replaced. Land Ownership For the vast majority of people in many developing countries, there is no mapping of parcels or proof of ownership available to the actual landowners. Christiaan Lemmen, an expert on cadastres, has experience from field work to map parcels in developing countries such as Nigeria, Liberia, etc., where corruption can be a big challenge within land administration. In his experience, however, people mostly agree on who owns what in their local communities. These people often have a need for proof of identity and proof of ownership for their justly acquired land in order to generate wealth, invest in their future and prevent fraud – while they often face problems with inefficient, expensive or corrupt government services. Ideally, we could build inexpensive, reliable and easy-to-use blockchain-based systems that will enable people to map and register their land together with their neighbours – without involving any government officials, lawyers or other middlemen. Geodesic Grids It has been suggested to use geodesic grids of discrete cells to register land ownership on a blockchain. Such cells can be shaped, e.g. as squares, triangles, pentagons, hexagons, etc., and each cell has a unique identifier. In a traditional cadastral system, parcels are represented with flexible polygons, which allows users to register any possible shape of a parcel. Although a grid of discrete cells doesn’t allow such flexible polygons, it has an advantage in this case: each digital token on the blockchain (let’s call it a ‘Landcoin’) can represent one unique cell in the grid. Hence, whoever owns a particular Landcoin owns the corresponding piece of land. 
Owning such a Landcoin means possessing the private encryption key that controls it – which is how other cryptocurrencies work. In order to represent complex and high-resolution geometries, it is preferable to use a grid which is infinitely sub-divisible so that ever-smaller triangles, hexagons or squares, etc., can be tied together to represent any piece of land. A digital token can also be infinitely sub-divisible. For comparison, the smallest unit of a Bitcoin is currently a 100-millionth – aka a ‘Satoshi’. If needed, the core software could be upgraded to support even smaller units. What is a Blockchain? A blockchain is an immutable trustless registry of entries, hosted on an open distributed network of computers (called nodes). It is potentially safer and cheaper than traditional centralised databases, is resilient to attacks, enhances transparency and accountability and puts people in control of their own data. Safer – because no one controls all the data (known as root privilege in existing databases). Each entry has its own pair of public and private encryption keys and only the holder of the private key can unlock the entry and transfer it to someone else. Immutable – because each block of entries (added every 1-10 minutes) carries a unique hash ‘fingerprint’ of the previous block. Hence, older blocks cannot be tampered with. Cheaper – because anyone can set up a node and get paid in digital tokens (e.g. Bitcoin or Ether) for hosting a blockchain. This ensures that competition between nodes will minimise the cost of hosting it. It also saves the costs of massive security layers that otherwise apply to servers with sensitive data – this is because of the no-root-privilege security model and, with old entries being immutable, there’s little need to protect them. Resilient – because there is no single point of failure, there’s practically nothing to attack. 
In order to compromise a blockchain, you’d have to hack each individual user one by one in order to get hold of their private encryption keys that give access to that user’s data only. Another option is to run over 50% of the nodes, which is virtually impossible and economically impractical. Transparency and accountability – the fact that existing entries cannot be tampered with makes a blockchain a transparent source of truth and history for your application. The public nature of it makes it easy to hold people accountable for their activities. Control – the immutable and no-root-privilege character puts each user in full control of his/her own data using the private encryption keys. This leads to real peer-to-peer interaction without any middleman and without an administrator that can deny users access to their data. Trustless – because each user fully controls his/her own data, users can safely interact without knowing or trusting each other and without any trusted third parties. Smart Contracts and DAPPs A blockchain can be more than a passive registry of entries or transactions. The original Bitcoin blockchain supports limited scripting allowing for programmable transactions and smart contracts – e.g. where specified criteria must be fulfilled leading to transactions automatically taking place. Possibly the most popular alternative to Bitcoin is Ethereum, which is a multi-purpose blockchain with a so-called ‘Turing complete’ programming interface, which allows developers to create virtually any imaginable application on this platform. Such applications are referred to as decentralised autonomous applications (DAPPs) and are virtually impossible for third parties to stop or censor. [1] IPFS IPFS is a distributed file system and web protocol, which can complement or even replace HTTP. Instead of referring to files by their location on a host or IP address, it refers to files by their content. 
This means that when requested, IPFS will return the content from the nearest possible or even multiple computers rather than from a central server. That could be on the computer next to you, on your local network or somewhere in the neighbourhood. Jonas Ellehauge is an expert on geospatial software, GIS and web development, enthusiastic about open source, Linux and UI/UX. Ellehauge is passionate about science, philosophy, entrepreneurship, economy and communication. His background in physical geography provides extensive knowledge of spatial analyses and spatial problem solving.
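The immutability property described earlier - each block carrying a hash "fingerprint" of the previous block - can be sketched in a few lines. This is a toy illustration only, not Bitcoin's or Ethereum's actual data structures, and the "Landcoin" entries are the article's hypothetical example:

```python
import hashlib
import json

def make_block(entries, prev_hash):
    """Bundle entries with the previous block's hash; the block's own hash
    covers both, so altering any older block breaks every later link."""
    body = {"entries": entries, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_valid(chain):
    """Verify each block's stored hash and the prev-hash links between blocks."""
    for block in chain:
        body = {"entries": block["entries"], "prev": block["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
    return all(cur["prev"] == prev["hash"] for prev, cur in zip(chain, chain[1:]))

# A hypothetical 'Landcoin' transfer history for one grid cell:
genesis = make_block(["cell 42 -> alice"], "0" * 64)
nxt = make_block(["cell 42 -> bob"], genesis["hash"])
```

Tampering with the genesis block's entries invalidates its stored hash, and therefore the whole chain - which is the "older blocks cannot be tampered with" guarantee in miniature.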
    2 points
  41. Multifunction casing: you can run 3D games and grate cheese for your hamburger. Excellent thinking, Apple, as always LOL
    2 points
  42. We are all already familiar with GPS navigation outdoors and what wonders it does not only for our everyday life, but also for business operations. Outdoor maps, allowing for navigation via car or by foot, have long helped mankind to find even the most remote and hidden places. Increased levels of efficiency, unprecedented levels of control over operational processes, route planning, monitoring of deliveries, safety and security regulations and much more have been made possible. Some places are, however, harder to reach and navigate than others. For instance, places like big indoor areas – universities, hospitals, airports, convention centers or factories, among others. Luckily, that struggle is about to become a thing of the past. So what’s the solution for navigating through and managing complex indoor buildings? Indoor Mapping and Visualization with ArcGIS Indoors The answer is simple – indoor mapping. Indoor mapping is a revolutionary concept that visualizes an indoor venue and spatial data on a digital 2D or 3D map. Showing places, people and assets on a digital map enables solutions such as indoor positioning and navigation. These, in turn, allow for many different use cases that help companies optimize their workflows and efficiencies. Mobile Navigation and Data The idea behind this solution is the same as outdoor navigation, only instead it allows you to see routes and locate objects and people in a closed environment. As GPS signals are not available indoors, different technology solutions based on either iBeacons, WiFi or lighting are used to create indoor maps and enable positioning services. You can plan a route indoors from point A to point B with customized pins and remarks, analyze whether facilities are being used to their full potential, discover new business opportunities, evaluate user behaviors and send them real-time targeted messages based on their location, intelligently park vehicles, and the list goes on! 
With the help of geolocation, indoor mapping stores and provides versatile real-time data on everything that is happening indoors, including placements and conditions of assets and human movements. This allows for a common operating picture, where all stakeholders share the same level of information and insights into internal processes. Having a centralized mapping system enables effortless navigation through all the assets and keeps facility managers updated on the latest changes, which ultimately improves business efficiency. Just think how many operational insights can be received through visualizations of assets on your customized map – you can monitor and analyze the whole infrastructure and optimize the performance accordingly. How to engage your users/visitors at the right time and place? What does it take to improve security management? Are the workflow processes moving seamlessly? Answers to those and many other questions can be found in an indoor mapping solution. Interactive indoor experiences are no longer a thing of the future, they are here and now. source: https://www.esri.com/arcgis-blog/products/arcgis-indoors/mapping/what-is-indoor-mapping/
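Indoor positioning from iBeacon or WiFi signals typically reduces to estimating distances to known anchor points and intersecting them. As a hedged illustration of the underlying geometry (made-up beacon coordinates, not any vendor's API), classic 2D trilateration from three beacon distances looks like this:

```python
# Classic 2D trilateration: given three beacons at known (x, y) positions and
# an estimated distance r to each (e.g. derived from signal strength), solve
# for the receiver's position by subtracting the circle equations pairwise,
# which yields a 2x2 linear system.
def trilaterate(b1, b2, b3):
    (x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = b1, b2, b3
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d, e = 2 * (x3 - x1), 2 * (y3 - y1)
    f = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a * e - b * d  # zero if the three beacons are collinear
    return (c * e - b * f) / det, (a * f - c * d) / det

# Beacons at three corners of a room; the true position here is (3, 4).
pos = trilaterate((0, 0, 5.0), (10, 0, 65 ** 0.5), (0, 10, 45 ** 0.5))
```

Real systems add noise filtering and use many more anchors, but the principle of turning distances into a position is the same.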
    2 points
  43. As part of ArcGIS Enterprise 10.7, we (ESRI) are thrilled to release a new capability that unlocks versatile data science tools and the limitless potential of Python in your Web GIS deployment. ArcGIS Notebooks provide users with a Jupyter notebook environment, hosted in your ArcGIS Enterprise portal and powered by the new ArcGIS Notebook Server. ArcGIS Notebooks are built to run big data analysis, deep learning models, and dynamic visualization tools. Notebooks are implemented using Docker containers – a virtualized operating system that provides an isolated “sandbox” style environment for each notebook author. The computational resources for each container can be configured by the organization – allowing the flexibility for notebook authors to get the computing resources they need, when they need it. Seamless integration with the portal ArcGIS Notebook Server is a new licensing role for ArcGIS Server. Because it works with the Docker container allocation technology to deliver a separate container for each notebook author, it requires specific installation steps to get up and running. Take a look at the ArcGIS Notebook Server install guide to see how it works. Once you’ve installed ArcGIS Notebook Server and configured it with your portal, you can create custom roles to grant notebook privileges to the members of your organization so that they can create and edit notebooks. Put Python to work for you At the core of the ArcGIS Notebook experience are Esri’s powerful Python resources: ArcPy and the ArcGIS API for Python. Alongside these are hundreds of popular Python libraries, such as TensorFlow, scikit-learn, and fast.ai. It all comes together to give you a complete Python workstation for spatial analysis, data science, deep learning, and content management. The Standard license of ArcGIS Notebook Server, which comes at no additional cost for ArcGIS Enterprise customers, bundles the Python API and nearly 300 other third-party Python libraries built-in. 
The Jupyter notebook environment has long been an essential medium for Python API users; with ArcGIS Notebooks, that environment is now available directly in the ArcGIS Enterprise portal.

Turn analysis into action

Location is the common thread that runs through almost any problem. What you buy, who your customers are, the impact your business has on the natural world, and the impact the natural world has on your business are all problems of location. Traditional data science has many powerful tools and algorithms for solving problems. Spatial data science – GeoAI – also brings in spatial data, methods, and tools. GeoAI can help you create more effective models that more closely resemble the problems you want to solve. Because of this, spatial data science models are better suited to modeling the impact of the solution you create.

Installation and getting started

Esri Jupyter Notebook. And for those who want their own free Jupyter notebook: install Miniconda and run `conda install -y jupyter` 😁
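For anyone following the "install Miniconda and run conda install" tip above, here is a minimal sketch of the full sequence. This assumes a Linux x86_64 machine and that the `Miniconda3-latest-Linux-x86_64.sh` installer name is still current on Anaconda's repo; adjust the installer for your platform.

```shell
# Download the Miniconda installer (URL/filename are assumptions; check
# the official conda docs for the installer matching your platform).
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh

# Run the installer in batch (non-interactive) mode into ~/miniconda3
bash miniconda.sh -b -p "$HOME/miniconda3"

# Make the conda command available in the current shell session
source "$HOME/miniconda3/etc/profile.d/conda.sh"

# Install Jupyter into the base environment, then launch the notebook server
conda install -y jupyter
jupyter notebook
```

This is setup-only; once `jupyter notebook` starts, it opens the familiar notebook interface in your browser, the same environment the ArcGIS API for Python is typically used from.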
    2 points
  44. It really does work. You probably missed a step or two during the process. One way to check whether a file is NT or non-NT is to open it in GPSMapEdit: it won't open NT format files, but it will open non-NT ones. Here I show a snapshot of trying to open an NT format file in GPSMapEdit, and here is the same file in non-NT format. OK, since I cannot edit my previous post, I will re-explain the procedure here, with some snapshots for clarity:

1. Open GMAPTool.
2. Add the NT-formatted img file(s) (in my case, the filename is 62320070.img).
3. Go to the Split tab and create subfiles. Click Split All.
4. Download the Garmin-GMP-extractor.exe tool and put it in the same folder as your working files.
5. Drag the GMP file onto the Garmin GMP extractor tool. It will explode/extract the GMP file into five types of subfiles (.LBL, .NET, .NOD, .RGN and .TRE).
6. Back in GMAPTool, add those subfiles.
7. Go to the Join tab. Name the output file and directory (you can also give a mapset name), and then click Join All.

FINALLY, the result is another img file in non-NT format. If you look closely, there is a slight difference in file size between the two files.
    2 points