Popular Content

Showing content with the highest reputation since 09/27/2019 in all areas

  1. 4 points
    geemap is a Python package for interactive mapping with Google Earth Engine (GEE), a cloud computing platform with a multi-petabyte catalog of satellite imagery and geospatial datasets. During the past few years, GEE has become very popular in the geospatial community and has empowered numerous environmental applications at local, regional, and global scales. GEE provides both JavaScript and Python APIs for making computational requests to the Earth Engine servers. Compared with the comprehensive documentation and interactive IDE (i.e., the GEE JavaScript Code Editor) of the GEE JavaScript API, the GEE Python API lacks good documentation and functionality for visualizing results interactively. The geemap Python package was created to fill this gap. It is built upon ipyleaflet and ipywidgets, enabling GEE users to analyze and visualize Earth Engine datasets interactively in Jupyter notebooks.

    geemap is intended for students and researchers who would like to utilize the Python ecosystem of diverse libraries and tools to explore Google Earth Engine. It is also designed for existing GEE users who would like to transition from the GEE JavaScript API to the Python API. The automated JavaScript-to-Python conversion module of the geemap package can greatly reduce the time needed to convert existing GEE JavaScript code to Python scripts and Jupyter notebooks.

    For video tutorials and notebook examples, please visit https://github.com/giswqs/geemap/tree/master/examples. For complete documentation on geemap modules and methods, please visit https://geemap.readthedocs.io/en/latest/source/geemap.html.

    Features
    Below is a partial list of features available in the geemap package. Please check the examples page for notebook examples, GIF animations, and video tutorials.
    - Automated conversion from Earth Engine JavaScript to Python scripts and Jupyter notebooks.
    - Displaying Earth Engine data layers for interactive mapping.
    - Supporting Earth Engine JavaScript API-styled functions in Python, such as Map.addLayer(), Map.setCenter(), Map.centerObject(), and Map.setOptions().
    - Creating split-panel maps with Earth Engine data.
    - Retrieving Earth Engine data interactively using the Inspector Tool.
    - Interactive plotting of Earth Engine data by simply clicking on the map.
    - Converting data between GeoJSON and Earth Engine formats.
    - Using drawing tools to interact with Earth Engine data.
    - Using shapefiles with Earth Engine without having to upload data to one's GEE account.
    - Exporting an Earth Engine FeatureCollection to other formats (e.g., shp, csv, json, kml, kmz) using only one line of code.
    - Exporting an Earth Engine Image or ImageCollection as GeoTIFF.
    - Extracting pixels from an Earth Engine Image into a 3D NumPy array.
    - Calculating zonal statistics by group (e.g., calculating the land cover composition of each state/country).
    - Adding a customized legend for Earth Engine data.
    - Converting Earth Engine JavaScript to Python code directly within a Jupyter notebook.
    - Adding animated text to GIF images generated from Earth Engine data.
    - Adding colorbars and images to GIF animations generated from Earth Engine data.
    - Creating Landsat timelapse animations with animated text using Earth Engine.
    - Searching places and datasets in the Earth Engine Data Catalog.
    - Using the timeseries inspector to visualize landscape changes over time.
    - Exporting Earth Engine maps as HTML files and PNG images.
    - Searching the Earth Engine API documentation within Jupyter notebooks.

    Installation
    To use geemap, you must first sign up for a Google Earth Engine account. geemap is available on PyPI. To install geemap, run this command in your terminal:

    pip install geemap

    geemap is also available on conda-forge. If you have Anaconda or Miniconda installed on your computer, you can create a conda Python environment and install geemap:

    conda create -n gee python
    conda activate gee
    conda install -c conda-forge geemap

    If you have installed geemap before and want to upgrade to the latest version, run the following command in your terminal:

    pip install -U geemap

    If you use conda, you can update geemap to the latest version with:

    conda update -c conda-forge geemap

    Usage
    Important note: a key difference between ipyleaflet and folium is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front end and the back end, enabling the map to capture user input, while folium is meant for displaying static data only (source). Note that Google Colab currently does not support ipyleaflet (source). Therefore, if you are using geemap with Google Colab, you should use import geemap.eefolium. If you are using geemap with Binder or a local Jupyter notebook server, you can use import geemap, which provides more functionality for capturing user input (e.g., mouse clicking and moving).

    Youtube tutorial videos
    GitHub page of geemap
    Documentation

    While working on a small project I found this. This is a quite new library, so some features shown in the tutorials may not work as intended, but overall it is a very good package. The tools make the code much cleaner and more readable. Searching the EE docs from a notebook is not yet implemented. Check out the YouTube channel, it's great.
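As a quick taste of what the package's ipyleaflet-based API looks like, here is a minimal sketch. The dataset ID and visualization values are illustrative assumptions, not from the post, and running the map itself requires `pip install geemap` plus an authenticated Earth Engine account:

```python
# Minimal geemap sketch: create an interactive map and add an Earth Engine
# layer using the JavaScript-API-styled addLayer() call. The visualization
# parameters below are illustrative assumptions.
srtm_vis = {
    "min": 0,          # metres
    "max": 4000,       # metres
    "palette": ["006633", "E5FFCC", "662A00", "D8D8D8", "F5F5F5"],
}

def make_map():
    # Imports are deferred so the snippet loads even without geemap installed.
    import ee
    import geemap
    ee.Initialize()                           # needs an authenticated GEE account
    m = geemap.Map(center=(40, -100), zoom=4)
    dem = ee.Image("USGS/SRTMGL1_003")        # SRTM 1 arc-second DEM asset
    m.addLayer(dem, srtm_vis, "SRTM DEM")     # same style as the JS API
    return m
```

In a Jupyter notebook cell, calling `make_map()` and leaving the result as the last expression renders the interactive ipyleaflet map inline.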
  2. 3 points
    It was just announced that June was the 3rd hottest on record, Johns Hopkins put the number of COVID-19 cases at 13 million, and over 300,000 sq km of protected areas were created last month. These are all indicators of the planet's vitality, but traditionally you'd need to bookmark three different websites to keep track of these and other metrics. In partnership with Microsoft, National Geographic, and the United Nations Sustainable Development Solutions Network, Esri is gathering these and other topics into the ArcGIS Living Atlas Indicators of the Planet (Beta). Leveraging the near real-time information already contributed to Living Atlas by organizations such as NOAA, UN Environment Programme, and the US Geological Survey, ArcGIS Living Atlas Indicators of the Planet draws upon authoritative sources for the latest updates on 18 topics, with more being developed. In addition to the summary statistics provided by the GeoCards, there is a series of maps and resources to better understand each issue and learn how to integrate timely data into decision making, along with stories on progress towards building a sustainable planet. ArcGIS Living Atlas Indicators of the Planet was developed using ArcGIS Experience Builder and is in its Beta release while additional capabilities are being implemented. This Experience Builder template can be customized for your own topics of interest. All of the underlying layers, maps, and apps are available from this Content Group. link: https://experience.arcgis.com/experience/003f05cc447b46dc8818640c38b69b83
  3. 3 points
    A Long March-2D carrier rocket, carrying the Gaofen-9 04 satellite, is launched from the Jiuquan Satellite Launch Center in northwest China, Aug. 6, 2020. China successfully launched a new optical remote-sensing satellite from the Jiuquan Satellite Launch Center at 12:01 p.m. Thursday (Beijing Time). (Photo by Wang Jiangbo/Xinhua) JIUQUAN, Aug. 6 (Xinhua) -- China successfully launched a new optical remote-sensing satellite from the Jiuquan Satellite Launch Center in northwest China at 12:01 p.m. Thursday (Beijing Time). The satellite, Gaofen-9 04, was sent into orbit by a Long March-2D carrier rocket. It has a resolution up to the sub-meter level. The satellite will be mainly used for land surveys, city planning, land right confirmation, road network design, crop yield estimation and disaster prevention and mitigation. It will also provide information for the development of the Belt and Road Initiative. The same carrier rocket also sent the Gravity & Atmosphere Scientific Satellite (Q-SAT) into space. The Q-SAT satellite, developed by Tsinghua University, will help with the satellite system design approach and orbital atmospheric density measurement, among others. Thursday's launch was the 342nd mission of the Long March rocket series. source: http://www.xinhuanet.com/english/2020-08/06/c_139269788.htm
  4. 3 points
    Our objective is to provide the scientific and civil communities with a state-of-the-art global digital elevation model (DEM) derived from a combination of Shuttle Radar Topography Mission (SRTM) processing improvements, elevation control, void-filling and merging with data unavailable at the time of the original SRTM production:
    - NASA SRTM DEMs created with processing improvements at full resolution
    - NASA's Ice, Cloud, and land Elevation Satellite (ICESat)/Geoscience Laser Altimeter System (GLAS) surface elevation measurements
    - DEM cells derived from stereo optical methods using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data from the Terra satellite
    - Global DEM (GDEM) ASTER products developed for NASA and the Ministry of Economy, Trade and Industry of Japan by Sensor Information Laboratory Corp
    - National Elevation Data for the US and Mexico produced by the USGS
    - Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) developed by the USGS and the National Geospatial-Intelligence Agency (NGA)
    - Canadian Digital Elevation Data produced by Natural Resources Canada

    We propose a significant modernization of the publicly and freely available DEM data. Accurate surface elevation information is a critical component in scientific research and commercial and military applications. The current SRTM DEM product is the most intensely downloaded dataset in NASA history. However, the original Memorandum of Understanding (MOU) between NASA and NGA has a number of restrictions and limitations; the original full-resolution, one-arcsecond data are currently only available over the US, and the error, backscatter and coherence layers were not released to the public.
With the recent expiration of the MOU, we propose to reprocess the original SRTM raw radar data using improved algorithms and incorporating ancillary data that were unavailable during the original SRTM processing, and to produce and publicly release a void-free global one-arcsecond (~30m) DEM and error map, with the spacing supported by the full-resolution SRTM data. We will reprocess the entire SRTM dataset from raw sensor measurements with validated improvements to the original processing algorithms. We will incorporate GLAS data to remove artifacts at the optimal step in the SRTM processing chain. We will merge the improved SRTM strip DEMs, refined ASTER and GDEM V2 DEMs, and GLAS data using the SRTM mosaic software to create a seamless, void-filled NASADEM. In addition, we will provide several new data layers not publicly available from the original SRTM processing: interferometric coherence, radar backscatter, radar incidence angle to enable radiometric correction, and a radar backscatter image mosaic to be used as a layer for global classification of land cover and land use. This work leverages an FY12 $1M investment from NASA to make several improvements to the original algorithms. We validated our results with the original SRTM products and ancillary elevation information at a few study sites. Our approach will merge the reprocessed SRTM data with the DEM void-filling strategy developed during NASA's Making Earth System Data Records for Use in Research Environments (MEaSUREs) 2006 project, "The Definitive Merged Global Digital Topographic Data Set" of Co-Investigator Kobrick. NASADEM is a significant improvement over the available three-arcsecond SRTM DEM primarily because it will provide a global DEM and associated products at one-arcsecond spacing. 
    ASTER GDEM is available at one-arcsecond spacing but has true spatial resolution generally inferior to SRTM one-arcsecond data and has much greater noise problems, which are particularly severe in tropical (cloudy) areas. At one arcsecond, NASADEM will be superior to GDEM across almost all SRTM coverage areas, but will integrate GDEM and other data to extend the coverage. Meanwhile, DEMs from the Deutsches Zentrum für Luft- und Raumfahrt TanDEM-X mission are being developed as part of a public-private partnership. However, those data must be purchased and are not redistributable. NASADEM will be the finest-resolution, global, freely available DEM product for the foreseeable future.

    data page: https://lpdaac.usgs.gov/products/nasadem_hgtv001/
    news links: https://earthdata.nasa.gov/esds/competitive-programs/measures/nasadem
  5. 3 points
    A new set of 10 ArcGIS Pro lessons empowers GIS practitioners, instructors, and students with essential skills to find, acquire, format, and analyze public domain spatial data to make decisions. Described in this video, this set was created for 3 reasons: (1) to provide a set of analytical lessons that can be immediately used, (2) to update the original 10 lessons created by my colleague Jill Clark and me to provide a practical component to our Esri Press book The GIS Guide to Public Domain Data, and (3) to demonstrate how ArcGIS Desktop (ArcMap) lessons can be converted to Pro and to reflect upon that process. The activities can be found here. This essay is mirrored on the Esri GeoNet education blog and the reflections are below and in this video.

    Summary of Lessons:
    - Can be used in full, in part, or modified to suit your own needs.
    - 10 lessons. 64 work packages. A "work package" is a set of tasks focused on solving a specific problem.
    - 370 guided steps. 29 to 42 hours of hands-on immersion. Over 600 pages of content.
    - 100 skills are fostered, covering GIS tools and methods, working with data, and communication.
    - 40 data sources are used, covering 85 different data layers.
    - Themes covered: climate, business, population, fire, floods, hurricanes, land use, sustainability, ecotourism, invasive species, oil spills, volcanoes, earthquakes, agriculture.
    - Areas covered: the globe, and also Brazil, New Zealand, the Great Lakes of the USA, Canada, the Gulf of Mexico, Iceland, the Caribbean Sea, Kenya, Orange County California, Nebraska, Colorado, and Texas USA.
    - Aimed at university-level graduate students and university or community college undergraduates. Some GIS experience is very helpful, though not absolutely required. Still, my advice is not to use these lessons for students' first exposure to GIS, but rather in an intermediate or advanced setting.
    How to access the lessons: The ideal way to work through the lessons is in a Learn Path, which bundles the readings of the book's chapters, selected blog essays, and the hands-on activities. The Learn Path is split into 3 parts, as follows:

    Solving Problems with GIS and public domain geospatial data 1 of 3: Learn how to find, evaluate, and analyze data to solve location-based problems through this set of 10 chapters and short essay readings, and 10 hands-on lessons: https://learn.arcgis.com/en/paths/the-gis-guide-to-public-domain-data-learn-path/
    Solving Problems with GIS and public domain geospatial data 2 of 3: https://learn.arcgis.com/en/paths/the-gis-guide-to-public-domain-data-learn-path-2/
    Solving Problems with GIS and public domain geospatial data 3 of 3: https://learn.arcgis.com/en/paths/the-gis-guide-to-public-domain-data-learn-path-3/

    The Learn Paths allow the content to be worked through in sequence. You can also access the lessons through this gallery in ArcGIS Online. If you would like to modify the lessons for your own use, feel free! This is why the lessons have been provided in a zipped bundle as PDF files here and as MS Word DOCX files here. This video provides an overview.

    source: https://spatialreserves.wordpress.com/2020/05/14/10-new-arcgis-pro-lesson-activities-learn-paths-and-migration-reflections/
  6. 3 points
    The satellites from Planet can now capture imagery at 50 cm; they changed their orbit in order to achieve a better GSD.

    SKYSAT IMAGERY NOW AVAILABLE
    Bring agility to your organization with the latest advancements in high-resolution SkySat imagery, available today. Make targeted decisions in ever-changing operational contexts with improved 50 cm spatial resolution and more transparency in the ordering process with the new Tasking Dashboard.
  7. 3 points
    Interesting application of WebGIS to browse a dinosaur database; you can also search for what your location looked like in the past on the interactive globe map. Welcome to the internet's largest dinosaur database. Check out a random dinosaur, search for one below, or look at our interactive globe of ancient Earth! Whether you are a kid, student, or teacher, you'll find a rich set of dinosaur names, pictures, and facts here. This site is built with PaleoDB, a scientific database assembled by hundreds of paleontologists over the past two decades. Check out this interactive WebGIS app: https://dinosaurpictures.org/ancient-earth#170 official link: https://dinosaurpictures.org/
  8. 3 points
    link: https://press.anu.edu.au/publications/new-releases
  9. 3 points
    Interesting video on how-tos: WebOpenDroneMap (WebODM) is a friendly Graphical User Interface (GUI) for OpenDroneMap. It enhances the capabilities of OpenDroneMap by providing an easy tool for processing drone imagery, with buttons, process status bars, and a new way to store images. WebODM lets you work by projects, so the user can create different projects and process the related images. As a whole, WebODM on Windows is an implementation of PostgreSQL, Node, Django, OpenDroneMap and Docker. The software installation requires 6 GB of disk space plus Docker. It seems huge, but it is the only way to process drone imagery on Windows using only open-source software. We definitely see huge potential in WebODM for image processing, so we have made this tutorial for the installation and we will post more tutorials on applying WebODM to drone images.

    For this tutorial you need Docker Toolbox installed on your computer. You can follow this tutorial to get Docker on your PC: https://www.hatarilabs.com/ih-en/tutorial-installing-docker You can visit the WebODM site on GitHub: https://github.com/OpenDroneMap/WebODM

    Videos
    The tutorial was split into three short videos.
    Part 1 https://www.youtube.com/watch?v=AsMSoWAToxE
    Part 2 https://www.youtube.com/watch?v=8GKx3fz0qgE
    Part 3 https://www.youtube.com/watch?v=eCZFzaXyMmA
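Beyond the GUI, a running WebODM instance can also be scripted. Below is a hypothetical sketch against its REST API: the endpoint paths (`/api/token-auth/`, `/api/projects/`) and the JWT auth header are assumptions based on the WebODM GitHub documentation, and `requests` is a third-party dependency you would need to install:

```python
# Sketch of driving a local WebODM install from Python instead of the web UI.
# Endpoint paths and auth scheme are assumptions from the WebODM docs.

def api_url(endpoint, host="localhost", port=8000):
    """Build a WebODM REST endpoint URL for the default local install."""
    return "http://{}:{}/api/{}/".format(host, port, endpoint)

def create_project(name, username, password):
    # Deferred import: needs `pip install requests` and a running WebODM.
    import requests
    # 1) exchange credentials for a JWT token
    token = requests.post(
        api_url("token-auth"),
        data={"username": username, "password": password},
    ).json()["token"]
    # 2) create a new project to attach drone images to
    resp = requests.post(
        api_url("projects"),
        headers={"Authorization": "JWT {}".format(token)},
        data={"name": name},
    )
    return resp.json()
```

The same pattern (authenticate once, then call project/task endpoints with the token) covers uploading images and polling processing status.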
  10. 3 points
    7th International Conference on Computer Science and Information Technology (CoSIT 2020) January 25 ~ 26, 2020, Zurich, Switzerland https://cosit2020.org/

    Scope & Topics
    The 7th International Conference on Computer Science and Information Technology (CoSIT 2020) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Computer Science, Engineering and Information Technology. The conference looks for significant contributions to all major fields of Computer Science and Information Technology in theoretical and practical aspects. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field. Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe:
    - Geographical Information Systems / Global Navigation Satellite Systems (GIS/GNSS)

    Paper Submission
    Authors are invited to submit papers through the conference submission system. Here's where you can reach us: [email protected] or [email protected]
  11. 3 points
    The first thing to do before mapping is to set up the camera parameters. Before setting the camera parameters, it is recommended to reset all parameters on the camera first. To set camera parameters manually, the camera needs to be in manual mode.
    - Image quality: Extra fine.
    - Shutter speed: to remove blur from photos, the shutter speed should be set to a higher value; 1200–1600 is recommended. A higher shutter speed reduces image quality, so increase the shutter speed only if there is blur in the image.
    - ISO: the lower the ISO, the higher the image quality. An ISO between 160 and 300 is recommended. If there is no blur but image quality is low, reduce the ISO.
    - Focus: it is recommended to set the focus manually on the ground before a flight. Point the camera at a distant object and slightly increase the focus; you will see on the camera screen that the image sharpness changes with the value. Set the focus where the image sharpness is highest (slide the slider close to the infinity point on the screen and watch how the sharpness changes as you slide).
    - White balance: recommended to set to auto.

    On a surveying mission, the sidelap, overlap and buffer have to be set higher to get a better-quality surveying result. First set the RESOLUTION you would like for your surveying project. Changing the resolution changes the flight altitude and also affects the coverage of a single flight.
    - Overlap: 70%. This will increase the number of photos taken along each flight line; the camera should be capable of capturing fast enough.
    - Sidelap: 70% recommended. Flying with higher sidelap between flight lines is a way to get more matches in the imagery, but it also reduces the coverage of a single flight.
    - Buffer: 12%. The buffer extends the flight plan beyond the borders to capture more images there; it improves the quality of the map.

    source: https://dronee.aero/blogs/dronee-pilot-blog/few-things-to-set-correctly-to-get-high-quality-surveying-results
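The resolution/altitude/coverage trade-off described above can be sketched with the standard ground-sample-distance formula. A minimal back-of-the-envelope helper, assuming a simple pinhole camera model; the camera numbers in the example call are hypothetical, not from the post:

```python
# Back-of-the-envelope flight planning from a target resolution (GSD).

def flight_altitude_m(gsd_cm, focal_mm, sensor_width_mm, image_width_px):
    """Altitude (m) that yields the requested ground sample distance (cm/px).

    GSD = sensor_width * altitude / (focal_length * image_width), hence
    altitude = GSD * focal_length * image_width / sensor_width.
    """
    return (gsd_cm / 100.0) * focal_mm * image_width_px / sensor_width_mm

def photo_spacing_m(gsd_cm, image_px, overlap_pct):
    """Distance (m) between exposures along track for a given forward overlap."""
    footprint = (gsd_cm / 100.0) * image_px          # ground footprint of one frame
    return footprint * (1.0 - overlap_pct / 100.0)

# Example (hypothetical camera): 16 mm lens, 23.5 mm sensor width,
# 6000 px wide images, 2 cm/px target GSD.
alt = flight_altitude_m(2.0, 16.0, 23.5, 6000)       # ~82 m flying height
step = photo_spacing_m(2.0, 6000, 70)                # ~36 m between photos
```

This makes the coupling explicit: halving the GSD halves the altitude and the footprint, so the same overlap percentage covers half the ground per flight line.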
  12. 3 points
    - Details geological-geophysical aspects of groundwater treatment
    - Discusses regulatory legislation regarding groundwater utilization
    - Serves as a reference for scientists in geology, geophysics and environmental studies
  13. 2 points
    hello everyone, I'm a long-time user of ArcMap, and two years ago I installed ArcGIS Pro on my computer... now I don't know about you, but I've been postponing the real switch between versions for years and I'm still on ArcMap. This is because, even though expectations were very high, ArcGIS Pro still doesn't seem to deliver the much-proclaimed performance. In particular, I'm concerned that for every little thing it starts a geoprocessing job that lasts from seconds to minutes, and then every function is buried in a maze of menus and submenus where in the end I can't find anything anymore. And I don't know about you, but it seems to me that "Pro" is just a marketing program to convince even non-professionals to make maps, even if they don't technically understand what they did.
  14. 2 points
    Tracing is better in ArcGIS Pro, but Cut Polygon cannot be done in ArcGIS Pro, only in ArcMap Desktop.
  15. 2 points
    What's really frustrating is that Esri still has its "standard response" every time I try to address an issue:
    - "Do you have the latest version?" (and of course in a big company you simply don't get the very latest patch one minute after release, so this will be their solution number one, and of course it will never change anything)
    - Then the following question will be "Can you send me a copy of your system configuration?" (and of course they will find a way to say that your hardware is "old" even if it complies with all the requirements)
    - "I'm pretty sure you have an installation issue, please reinstall" (so you get to have another problem with your IT department)
    VERY FRUSTRATING. I've been in this business for a while, and every time those guys come up with "the best solution for you" an alarm bell goes off:
    - "It will be a 64-bit solution" (can you remember that bull****?), so better than ArcMap, and as always it doesn't make the slightest difference..
    - "It's 2D and 3D in one single piece of software" (but did they mention that you will need some extra licences? not to me).
    so basically.. I LOVE QGIS ! 😍😍😍
  16. 2 points
    Until they fix the performance issues, I don't see any advantage in ArcGIS Pro... I can still use ArcGIS Desktop for the daily tasks; no need to use the fancy latest product.
  17. 2 points
    A joint NASA-USGS initiative has created the first worldwide map of the causes of change in mangrove habitats between 2000 and 2016. Mangrove trees can be found growing in the salty mud along the Earth's tropical and subtropical coastlines. Mangroves are vital to aquatic ecosystems due to their ability to prevent soil erosion and store carbon. Mangroves also provide critical habitat to multiple marine species such as algae, barnacles, oysters, sponges, shrimp, crabs, and lobsters. Mangroves are threatened by both human and natural causes. Human activities in the form of farming and aquaculture and natural stressors such as erosion and extreme weather have both driven mangrove habitat loss. The joint study analyzed over one million Landsat images captured between 2000 and 2016 to create the first-ever global map visualizing the drivers of mangrove loss. Causes of mangrove loss were mapped at a resolution of 30 meters. Researchers found that 62% of mangrove loss during the time period studied was due to land use changes, mostly from conversion to aquaculture and agriculture. Roughly 80% of the loss was concentrated in six Southeast Asian nations: Indonesia, Myanmar, Malaysia, the Philippines, Thailand, and Vietnam. Mangrove loss due to human activities did decline 73% between 2000 and 2016. Mangrove loss due to natural events also decreased, but at a lesser rate than human-driven loss. Map and graphs showing global distribution of mangrove loss and its drivers. From the study: "(a) The longitudinal distribution of total mangrove loss and the relative contribution of its primary drivers. Different colors represent unique drivers of mangrove loss. (b) The latitudinal distribution of total mangrove loss and the relative contribution of its primary drivers. 
(c‐g) Global distribution of mangrove loss and associated drivers from 2000 to 2016 at 1°×1° resolution, with the relative contribution (percentage) of primary drivers per continent: (c) North America, (d) South America, (e) Africa, (f) Asia, (g) Australia together with Oceania.” links: https://www.mangrovelossdrivers.app/
  18. 2 points
    The St. Patrick Bay ice caps on the Hazen Plateau of northeastern Ellesmere Island in Nunavut, Canada, have disappeared, according to NASA satellite imagery. National Snow and Ice Data Center (NSIDC) scientists and colleagues predicted via a 2017 paper in The Cryosphere that the ice caps would melt out completely within the next five years, and recent images from NASA's Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) have confirmed that this prediction was accurate. Mark Serreze, director of NSIDC, Distinguished Professor of Geography at the University of Colorado Boulder, and lead author on the paper, first set foot on the St. Patrick Bay ice caps in 1982 as a young graduate student. He visited the ice caps with his advisor, Ray Bradley, of the University of Massachusetts. "When I first visited those ice caps, they seemed like such a permanent fixture of the landscape," said Serreze. "To watch them die in less than 40 years just blows me away." In 2017, scientists compared ASTER satellite data from July 2015 to vertical aerial photographs taken in August of 1959. They found that between 1959 and 2015, the ice caps had been reduced to only five percent of their former area, and shrank noticeably between 2014 and 2015 in response to the especially warm summer in 2015. The ice caps are absent from ASTER images taken on July 14, 2020. The St. Patrick Bay ice caps were one-half of a group of small ice caps on the Hazen Plateau, which formed and likely attained their maximum extents during the Little Ice Age, perhaps several centuries ago. The Murray and Simmons ice caps, which make up the second half of the Hazen Plateau ice caps, are located at a higher elevation and are therefore faring better, though scientists predict that their demise is imminent as well. "We've long known that as climate change takes hold, the effects would be especially pronounced in the Arctic," said Serreze. 
"But the death of those two little caps that I once knew so well has made climate change very personal. All that's left are some photographs and a lot of memories." source: https://phys.org/news/2020-07-canadian-ice-caps-scientific.html
  19. 2 points
    Scientists have discovered new evidence for active volcanism next door to some of the most densely populated areas of Europe. The study crowdsourced GPS monitoring data from antennae across western Europe to track subtle movements in the Earth’s surface, thought to be caused by a rising subsurface mantle plume. The Eifel region lies roughly between the cities of Aachen, Trier and Koblenz, in west-central Germany. It is home to many ancient volcanic features, including the circular lakes known as maars. Maars are the remnants of violent volcanic eruptions, such as the one that created Laacher See, the largest lake in the area. The explosion that created the lake is thought to have occurred around 13,000 years ago. The mantle plume that fed this ancient activity is thought to still be present, extending up to 400 kilometers (km) into the earth. However, whether or not it is still active is unknown. “Most scientists had assumed that volcanic activity in the Eifel was a thing of the past,” said Corné Kreemer, lead author of the new study. “But connecting the dots, it seems clear that something is brewing underneath the heart of northwest Europe.” In the new study, the team — based at the University of Nevada, Reno and the University of California, Los Angeles — used data from thousands of commercial and state-owned GPS stations all over western Europe. The research revealed that the region’s land surface is moving upward and outward over a large area centered on the Eifel, and including Luxembourg, eastern Belgium and the southernmost province of the Netherlands, Limburg. “The Eifel area is the only region in the study where the ground motion appeared significantly greater than expected,” said Kreemer. 
“The results indicate that a rising plume could explain the observed patterns and rate of ground movement.” The new results complement those of a previous study in Geophysical Journal International that found seismic evidence of magma moving underneath the Laacher See. Both studies point towards the Eifel being an active volcanic system. The implication of this study is that there may not only be an increased volcanic risk, but also a long-term seismic risk in this part of Europe. The researchers urge caution, however. “This does not mean that an explosion or earthquake is imminent, or even possible again in this area. We and other scientists plan to continue monitoring the area using a variety of geophysical and geochemical techniques, to better understand and quantify any potential risks.” source: https://doi.org/10.1093/gji/ggaa227
  20. 2 points
    Hi Everyone, July 13–16, 2020 | The world’s largest, virtual GIS event (FREE this year) The 2020 Esri User Conference (Esri UC) is a completely virtual event designed to give users and students an interactive, online experience with Esri and the GIS community. Participate in sessions and view presentations that offer geospatial solutions, browse the online Map Gallery, watch the Plenary Session, and much more. Registration here : https://www.esri.com/en-us/about/events/uc/overview Enjoy
  21. 2 points
    For those like me whose mother tongue is not English, I recommend this site for translations (English, French, German, Italian, Spanish, Portuguese, Russian, Chinese, Japanese, etc.)... fantastic and intuitive, and based on artificial intelligence: https://www.deepl.com/ Another interesting website: https://www.linguee.com/
  22. 2 points
    Stop me if you’ve heard this before. DJI has introduced its latest enterprise powerhouse drone, the DJI Matrice 300 RTK. We learned a lot about the drone earlier this week due to a few huge leaks of specs, features, photos, and videos. But it’s worth looking at the drone again now that it’s official – and an incredible intro video. Also called the M300 RTK, this drone is an upgrade in every way over its predecessor, the M200 V2. That includes a very long flight time of 55 minutes, six-direction obstacle avoidance, and a doubled (6 pound) payload capability. That allows it to carry a range of powerful cameras, which we’ll get to in a bit. The drone is also built for weather extremes. IP45 weather sealing keeps out rain and dust. And a self-heating battery helps the drone to run in a broad range of temperatures, from -4 to 122 Fahrenheit. The DJI Matrice 300 RTK can fly up to 15 kilometers (9.3 miles) from its controller and still stream 1080p video back home. That video and other data can be protected using AES-256 encryption. The drone can also be flown by two co-pilots, with one able to take over for the other if any problem arises or a handoff scenario. A workhorse inspection drone All these capabilities are targeted to the DJI Matrice 300 RTK’s purpose as a drone for heavy-duty visual inspection and data collection work, such as surveys of power lines or railways. In fact, it incorporates many advanced camera features for the purpose. Smart inspection is a new set of features to optimize data collection. It includes live mission recording, which allows the drone to record every aspect of a flight, even camera settings. This allows workers to train a drone on an inspection mission that it will repeat again and again. With AI spot check, operators can mark the specific part of the photo, such as a transformer, that is the subject of inspection. 
AI algorithms compare that to what the camera sees on a future flight, so that it can frame the subject identically on every flight. An inspection drone is only as good as its cameras, and the M300 RTK offers some powerful options from DJI’s Zenmuse H20 series. The first option is a triple-camera setup. It includes a 20-megapixel, 23x zoom camera; a 12MP wide-angle camera; and a laser rangefinder that measures out to 1,200 meters (3,937 feet). The second option adds a radiometric thermal camera. To make things simpler for operators, the drone provides a one-click capture feature that grabs videos or photos from three cameras at once, without requiring the operator to switch back and forth. Eyes and ears ready for danger With its flight time and range, the DJI Matrice 300 RTK could be flying some long, complex missions, easily beyond visual line of sight (if its owner gets an FAA Part 107 waiver for that). This requires some solid safety measures. While the M200 V2 has front-mounted sensors, the M300 RTK has sensors in six directions for a full view of the surroundings. The sensors can register obstacles up to 40 meters (98 feet) away. Like all new DJI drones, the M300 RTK also features the company’s AirSense technology. An ADS-B receiver picks up signals from manned aircraft that are nearby and alerts the drone pilot to their location. It’s been quite a few weeks for DJI. On April 27, it debuted its most compelling consumer drone yet, the Mavic Air 2. Now it’s showing off its latest achievement at the other end of the drone spectrum with the industrial-grade Matrice 300 RTK. These two very different drones help illustrate the depth of product that comes from the world’s biggest drone maker. And the company doesn’t show signs of slowing down, despite the COVID-19 economic crisis. Next up, we suspect, will be a revision to its semi-pro quadcopter line in the form of a Mavic 3. It is available at DJI. 
source: https://dronedj.com/2020/05/07/dji-matrice-300-rtk-drone-official/
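DJI hasn't published how AI spot check works internally, but the described behavior of re-locating a marked region (say, a transformer) in a later image resembles classic template matching. Below is a minimal, hypothetical sketch using normalized cross-correlation over grayscale NumPy arrays; it is an illustration of the general idea, not DJI's actual algorithm:

```python
import numpy as np

def locate_template(image, template):
    """Toy normalized cross-correlation: return the (row, col) of the
    top-left corner where the template best matches the image."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            # Skip flat patches where correlation is undefined.
            score = (p * t).sum() / denom if denom else -np.inf
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

A production system would use a fast FFT-based correlation (or a learned detector) rather than this brute-force scan, but the matching score is the same idea.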
  23. 2 points
    @intertronic, thanks for your input. I found a solution that suits my case better, given that we are using both versions of QGIS and I was also looking for interoperability. I have therefore decided to use QSphere. It is most probably not well known around the globe. https://qgis.projets.developpement-durable.gouv.fr/projects/qsphere The GUI is quite ugly, but at least it does the job. 😉 darksabersan
  24. 2 points
    DRONE MAKER DJI announced an update to its popular Mavic Air quadcopter today. The Mavic Air 2 will cost $799 when it ships to US buyers in late May. That's the same price as the previous Mavic Air model, so the drone stays as DJI's mid-range option between its more capable Mavic 2 and its smaller, cheaper Mavic Mini. The Mavic Air 2 is still plenty small, but the new version has put on some weight. DJI says that testing and consumer surveys suggested that most people don't mind lugging a few extra grams in exchange for a considerable upgrade in flight time and, presumably, better handling in windy conditions. Even better, thanks to a new rotor design and other aerodynamic improvements, DJI is claiming the Mavic Air 2 can remain aloft for 34 minutes—a big jump from the 21 minutes of flight time on the original Mavic Air. The Camera Eye The big news in this update is the new larger imaging sensor on the drone's camera. The Mavic Air 2's camera ships with a half-inch sensor, up from the 1/2.3-inch sensor found in the previous model. That should mean better resolution and sharper images, especially because the output specs haven't changed much. The new camera is still outputting 12-megapixel stills, but now has a bigger sensor to fill that frame with more detail. There's also a new composite image option that joins together multiple single shots into a large, 48-megapixel image. On the video side, there's some exciting news. The Mavic Air 2 is DJI's first drone to offer 4K video at 60 frames per second and 120 Mbps—previous DJI drones topped out at 30 fps when shooting in full 4K resolution. There are also slow-motion modes that slow down footage to four times slower than real life (1080p at 120 fps), or eight times slower (1080p at 240 fps). Combine those modes with the more realistic contrast you get with the HDR video standard, and you have considerably improved video capabilities in a sub-$1,000 drone. 
More interesting in some ways are DJI's increasing forays into computational photography, which the company calls Smart Photo mode. Flip on Smart Photo and the Mavic Air 2 will do scene analysis, tap its machine intelligence algorithm, and automatically choose between a variety of photo modes. There's a scene recognition mode where the Mavic Air 2 sets the camera up to best capture one of a variety of scenarios you're likely to encounter with drone photography, including blue skies, sunsets, snow, grass, and trees. In each case, exposure is adjusted to optimize tone and detail. The second Smart Photo mode is dubbed Hyperlight, which handles low-light situations. To judge by DJI's promo materials, this is essentially an HDR photography mode specifically optimized for low-light scenes. It purportedly cuts noise and produces more detailed images. The final smart mode is HDR, which takes seven images in rapid succession, then combines elements of each to make a final image with a higher dynamic range. One last note about the camera: The shape of the camera has changed, so if you have any lenses or other accessories for previous DJI drones, they won't attach to the Air 2. Automatic Flight for the People If you dig through older YouTube videos, there are a ton of movies that play out like this: unbox new drone, head outside, take off, tree gets closer, closer, closer, black screen. Most of us just aren't that good at flying, and the learning curve can be expensive and steep. Thankfully, drone companies began automating away most of what's difficult about piloting a quadcopter, and DJI is no exception. The company has added some new automated flight tricks to the Air's arsenal. DJI's Active Track has been updated to version 3.0, which brings better subject recognition algorithms and some new 3D mapping tricks to make it easier to automatically track people through a scene, keeping the camera on the subject as the drone navigates overhead to stay with them. 
DJI claims the Point of Interest mode—which allows you to select an object and fly around it in a big circle while the camera stays pointed at the subject—is better at tracking some of the objects that previous versions struggled with, like vehicles or even people. The most exciting new flight mode is Spotlight, which comes from DJI's high-end Inspire drone used by professional photographers and videographers to carry their DSLR cameras into the sky. Similar to the Active Track mode, Spotlight keeps the camera pointed at a moving subject. But while Active Track automates the drone's flight, the new Spotlight mode allows the human pilot to retain control of the flight path for more complex shots. Finally, the range of the new Mavic Air 2 has been improved, and it can now wander an impressive six miles away from the pilot in ideal conditions. The caveat here is that you should always maintain visual contact with your drone for safety reasons. However, you aren't going to be able to see the Mavic Air 2 when it's two miles away, let alone six. Despite a dearth of competitors, DJI continues to put out new drones and improve its lineup as it progresses. The Mavic Air 2 looks like an impressive update to what was already one of our favorite drones, especially considering that several features—the 60 fps 4K video and 34-minute flight time—even best those found on the more expensive Mavic 2 Pro. links: https://www.dji.com/id/mavic-air-2
  25. 2 points
    I like drones, but I just got more interested in this.
  26. 2 points
    Harvard Online Courses Advance your career. Pursue your passion. Keep learning. links: https://online-learning.harvard.edu/CATALOG/FREE
  27. 2 points
  28. 2 points
    I saw similar news last month - Using Machine Learning to “Nowcast” Precipitation in High Resolution by Google. The results seemed pretty good. Here is a visualization of predictions made over the course of roughly one day. Left: The 1-hour HRRR prediction made at the top of each hour, the limit to how often HRRR provides predictions. Center: The ground truth, i.e., what we are trying to predict. Right: The predictions made by our model. Our predictions are every 2 minutes (displayed here every 15 minutes) at roughly 10 times the spatial resolution made by HRRR. Notice that we capture the general motion and general shape of the storm. The two methods seem similar.
  29. 2 points
    With Huawei basically blocked from using Google services and infrastructure, the firm has taken steps to replace Google Maps on its hardware by signing a partnership with TomTom to provide maps, navigation, and traffic data to Huawei apps. Reuters reports that Huawei is entering this partnership with TomTom as the mapping tech company is based in the Netherlands — therefore side-stepping the bans on working with US firms. TomTom will provide the Chinese smartphone manufacturer with mapping, live traffic data, and software on smartphones and tablets. TomTom spokesman Remco Meerstra confirmed to Reuters that the deal had been closed some time ago but had not been made public by the company. This comes as TomTom unveiled plans to move away from making navigation hardware and will focus more heavily on offering software services — making this a substantial step for TomTom and Huawei. While TomTom doesn’t quite match the global coverage and update speed of Google Maps, having a vital portion of it filled by a dedicated navigation and mapping firm is one step that might appease potential global Huawei smartphone buyers. There is no denying the importance of Google app access outside of China but solid replacements could potentially make a huge difference — even more so if they are recognizable by Western audiences. It’s unclear when we may see TomTom pre-installed on Huawei devices but we are sure that this could be easily added by way of an OTA software update. The bigger question remains if people are prepared to switch from Google Maps to TomTom for daily navigation. resource: https://9to5google.com/2020/01/20/huawei-tomtom/
  30. 2 points
    January 3, 2020 - Recent Landsat 8 Safehold Update On December 19, 2019 at approximately 12:23 UTC, Landsat 8 experienced a spacecraft constraint which triggered entry into a Safehold. The Landsat 8 Flight Operations Team recovered the satellite from the event on December 20, 2019 (DOY 354). The spacecraft resumed nominal on-orbit operations and ground station processing on December 22, 2019 (DOY 356). Data acquired between December 22, 2019 (DOY 356) and December 31, 2019 (DOY 365) exhibit some increased radiometric striping and minor geometric distortions (see image below) in addition to the normal Operational Land Imager/Thermal Infrared Sensor (OLI/TIRS) alignment offset apparent in Real-Time tier data. Acquisitions after December 31, 2019 (DOY 365) are consistent with pre-Safehold Real-Time tier data and are suitable for remote sensing use where applicable. All acquisitions after December 22, 2019 (DOY 356) will be reprocessed to meet typical Landsat data quality standards after the next TIRS Scene Select Mirror (SSM) calibration event, scheduled for January 11, 2020. Landsat 8 Operational Land Imager acquisition on December 22, 2019 (path 148/row 044) after the spacecraft resumed nominal on-orbit operations and ground station processing. This acquisition demonstrates increased radiometric striping and minor geometric distortions observed in all data acquired between December 22, 2019 and December 31, 2019. All acquisitions after December 22, 2019 will be reprocessed on January 11, 2020 to achieve typical Landsat data quality standards. Data not acquired during the Safehold event are listed below and displayed in purple on the map (click to enlarge). 
Map displaying Landsat 8 scenes not acquired from Dec 19-22, 2019:
Path 207, Rows 160-161
Path 223, Rows 60-178
Path 6, Rows 22-122
Path 22, Rows 18-122
Path 38, Rows 18-122
Path 54, Rows 18-214
Path 70, Rows 18-120
Path 86, Rows 24-110
Path 102, Rows 19-122
Path 118, Rows 18-185
Path 134, Rows 18-133
Path 150, Rows 18-133
Path 166, Rows 18-222
Path 182, Rows 18-131
Path 198, Rows 18-122
Path 214, Rows 34-122
Path 230, Rows 54-179
Path 13, Rows 18-122
Path 29, Rows 20-232
Path 45, Rows 18-133
After recovering from the Safehold successfully, data acquired on December 20, 2019 (DOY 354) and from most of the day on December 21, 2019 (DOY 355) were ingested into the USGS Landsat Archive and marked as "Engineering". These data are still being assessed to determine if they will be made available for download to users through all USGS Landsat data portals. source: https://www.usgs.gov/land-resources/nli/landsat/january-3-2020-recent-landsat-8-safehold-update
  31. 2 points
    just found this interesting articles on Agisoft forum : source: https://www.agisoft.com/forum/index.php?topic=7851.0
  32. 2 points
    Got shapefile (SHP) data for multi-hazard maps covering all of Indonesia. Please check it: https://drive.google.com/file/d/1anG5xcA9uMo1P9jLeppBvEXpaExJsLhk/view Untested; I forget where I got this link.
  33. 1 point
    This is without a doubt the most anticipated feature of the year for CarPlay users, as Google Maps can now replace Apple Maps on the multi-view screen. Apple originally locked the maps card on the CarPlay dashboard to Apple Maps, which means that users weren’t allowed to configure any other application to display real-time information in this panel. It goes without saying this was quite an issue for many users, especially as Google Maps and the Google-owned Waze are extremely popular choices among CarPlay users. The release of iOS 13.4 in April brought massive changes in this regard, as the maps card was unlocked for third parties, essentially allowing any developer of such an app to add support for the dashboard and thus be able to replace Apple Maps. Google, however, has never been in a rush to make the whole thing happen, so here we are in early August finally getting support for Google Maps on the dashboard. What you need to know, however, is that the feature is only available for testers who are part of the beta program, but chances are that support for the multi-view screen on CarPlay will be included in one of the next Google Maps updates rolling out this month for production devices. In the meantime, Waze is yet to get this feature, as not even the beta builds of the app come with it. However, I’m guessing it’s all now just a matter of time until Waze is updated with dashboard support on CarPlay, and I’m expecting Google to make this happen in its traffic navigation app sooner rather than later. On a side note, Google has also released a new Google Maps update for the stable channel on iOS, bringing the app to version 5.49. This one, however, includes only fixes and improvements, so no dashboard support for now on production devices. source: https://www.autoevolution.com/news/google-releases-the-most-anticipated-google-maps-carplay-feature-for-testers-146802.html#
  34. 1 point
    The United States Space Force’s GPS III program reached another milestone with the successful core mate of GPS III Space Vehicle 08 at Lockheed Martin’s GPS III Processing Facility in Waterton, Colorado, April 15. With core mate complete, the space vehicle was named in honor of NASA trailblazer and “hidden figure” Katherine Johnson. The two-day core mate consisted of using a 10-ton crane to lift and complete a 90-degree rotation of the satellite’s system module, and then slowly lowering the system module onto the satellite’s vertical propulsion core. The two mated major subsystems come together to form an assembled GPS III space vehicle. Despite the COVID-19 pandemic, the Space and Missile Systems Center (SMC) and its mission partner Lockheed Martin ensured that SV08 core mate took place, in accordance with all Centers for Disease Control and local guidelines to minimize exposure or transmission of COVID-19. The GPS III Processing Facility’s cleanroom high bay was restricted to only key personnel directly supporting the operation. “Core mate is the most critical of the GPS space vehicle single-line-flow operations,” said Lt. Col. Margaret Sullivan, program manager and materiel lead for the GPS III program. “Despite the restrictions presented by the COVID-19 pandemic, our team adapted and worked tirelessly to achieve this essential milestone.” When the core mate operation is successfully completed, a GPS III satellite is said to be “born.” In keeping with the team’s tradition of naming GPS III satellites after famous explorers and pioneers, SV08 was named “Katherine Johnson” in honor of the trailblazing NASA mathematician and “human computer” who designed and computed orbital trajectories for NASA’s Mercury, Apollo and space shuttle missions. 
One of four African-American women at the center of the nonfiction book by Margot Lee Shetterly and the movie Hidden Figures, Johnson was awarded the Presidential Medal of Freedom in 2015 for her groundbreaking contributions to the U.S. space program. Other GPS III satellites have been named in honor of explorers including GPS III SV01 “Vespucci” after Amerigo Vespucci; GPS III SV02 “Magellan” after Ferdinand Magellan; and GPS III SV03 “Columbus” after Christopher Columbus. Next up, performance tests. The next step for the newly christened “Katherine Johnson” is the post-mate Systems Performance Test (SPT) scheduled to begin in August. SPT electrically tests the performance of the satellite during the early phase of build and provides a baseline test data set to be compared to post-environmental test data. GPS III SV08 is currently scheduled to launch in 2022. GPS III is the most powerful GPS satellite ever developed. It is three times more accurate and provides up to eight times improved anti-jamming capability over previous GPS satellites on orbit. GPS III brings new capabilities to users, such as a fourth civilian signal (L1C), designed to enable interoperability between GPS and international satellite navigation systems, such as Europe’s Galileo system. GPS III satellites will also bring the full capability of the Military Code (M-code) signal, increasing anti-jam resiliency in support of the warfighter. These continued improvements and advancements to the GPS system make it the premier space-based provider of positioning, navigation, and timing services for more than four billion users worldwide. GPS III SV03 to Launch June 30. Launched in December 2018 and August 2019, GPS III SV01 and SV02 became part of today’s operational constellation of 31 satellites, on January 13 and April 1, 2020 respectively. 
The SMC, located at the Los Angeles Air Force Base, California, is the center of excellence for acquiring and developing military space systems. Its portfolio includes the GPS, military satellite communications, defense meteorological satellites, space launch and range systems, satellite control networks, space based infrared systems, and space situational awareness capabilities. source: https://www.gpsworld.com/gps-iii-sv-08-born-with-core-mate-complete-named-katherine-johnson/
  35. 1 point
    Researchers have developed an algorithm that can distinguish between volcanic and non-volcanic clouds using high-resolution satellite imagery. Called the Cloud Growth Anomaly (CGA) technique, the algorithm uses geostationary satellite data to detect fast-growing vertical clouds caused by volcanic output. Volcanic ash produced by eruptions is a major threat to airplanes. In 2011, for example, Grímsvötn erupted, closing Iceland’s air space. Volcanic ash can cause significant damage to airplanes, including in-flight engine failure. Researchers noted that “volcanic clouds produced by explosive eruptions can reach jet aircraft cruising altitudes in as little as 5 minutes.” Ten or more eruptions occur each year with plumes reaching at or above jet cruising altitudes. Despite this threat, the authors of this study further note that “90% of the world’s volcanoes are not regularly monitored for activity.” Geostationary weather satellites such as Himawari-8 and the GOES East and West satellites provide high-resolution data that can be used to detect ash plumes. Currently, Volcanic Ash Advisory Centers (VAACs) tend to manually analyze satellite imagery due to limitations with discerning ash plumes from meteorological clouds using multispectral infrared-based techniques. In this latest study, the CGA technique uses infrared measurements on satellite imagery “to identify cloud objects and compute cloud vertical growth rates from two successive images” produced within 60 minutes of each other. The CGA method was applied to 79 different explosive volcanic events from 30 volcanoes between 2002 and 2017. The success rate of the CGA in correctly identifying ash clouds varied depending on whether it was applied to the latest generation of weather satellites. On older satellites, the accuracy rate was about 55%. For new-generation satellites such as Himawari-8, the accuracy rate rose to 90%. source: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018EA000410
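The published CGA algorithm involves cloud-object identification and satellite-specific processing, but the core signal it exploits is simple: cloud tops that cool rapidly between two successive infrared images are rising fast. A toy sketch of that test on brightness-temperature arrays (the function name and thresholds here are illustrative, not taken from the paper):

```python
import numpy as np

def cloud_growth_flag(bt_prev, bt_curr, dt_minutes,
                      cold_thresh=235.0, rate_thresh=0.5):
    """Flag pixels suggesting rapid vertical cloud growth.

    bt_prev, bt_curr: brightness-temperature images (Kelvin) from two
    successive geostationary scans; dt_minutes: time between them.
    Cloud tops cool as they rise, so a large positive cooling rate on
    an already-cold (high) cloud is the growth signal.
    """
    cooling_rate = (bt_prev - bt_curr) / dt_minutes  # K per minute
    return (bt_curr < cold_thresh) & (cooling_rate > rate_thresh)
```

Real use would add cloud-object segmentation and filtering of ordinary convection, which is where most of the paper's effort goes.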
  36. 1 point
    Changes in ocean circulation may have caused a shift in Atlantic Ocean ecosystems not seen for the past 10,000 years, new analysis of deep-sea fossils has revealed. This is the striking finding of a new study led by a research group I am part of at UCL, funded by the ATLAS project and published in the journal Geophysical Research Letters. The shift has likely already led to political tensions as fish migrate to colder waters. The climate has been quite stable over the 12,000 years or so since the end of the last Ice Age, a period known as the Holocene. It is thought that this stability is what allowed human civilisation to really get going. In the ocean, the major currents are also thought to have been relatively stable during the Holocene. These currents have natural cycles, which affect where marine organisms can be found, including plankton, fish, seabirds and whales. Yet climate change in the ocean is becoming apparent. Tropical coral reefs are bleaching, the oceans are becoming more acidic as they absorb carbon from the atmosphere, and species like herring or mackerel are moving towards the poles. But there still seems to be a prevailing view that not much has happened in the ocean so far – in our minds the really big impacts are confined to the future. Looking into the past To challenge this point of view, we had to look for places where seabed fossils not only covered the industrial era in detail, but also stretched back many thousands of years. And we found the right patch of seabed just south of Iceland, where a major deep-sea current causes sediment to pile up in huge quantities. To get our fossil samples we took cores of the sediment, which involves sending long plastic tubes to the bottom of the ocean and pushing them into the mud. When pulled out again, we were left with a tube full of sediment that can be washed and sieved to find fossils. 
The deepest sediment contains the oldest fossils, while the surface sediment contains fossils that were deposited within the past few years. One of the simplest ways of working out what the ocean was like in the past is to count the different species of tiny fossil plankton that can be found in such sediments. Different species like to live in different conditions. We looked at a type called foraminifera, which have shells of calcium carbonate. Identifying them is easy to do using a microscope and small paintbrush, which we use when handling the fossils so they don't get crushed. A recent global study showed that modern foraminifera distributions are different to the start of the industrial era. Climate change is clearly already having an impact. Similarly, the view that modern ocean currents are like those of the past couple of thousand years was challenged by our work in 2018, which showed that the overturning "conveyor belt" circulation was at its weakest for 1,500 years. Our new work builds on this picture and suggests that modern North Atlantic surface circulation is different to anything seen in the past 10,000 years – almost the whole Holocene. The effects of the unusual circulation can be found across the North Atlantic. Just south of Iceland, a reduction in the numbers of cold-water plankton species and an increase in the numbers of warm-water species shows that warm waters have replaced cold, nutrient-rich waters. We believe that these changes have also led to a northward movement of key fish species such as mackerel, which is already causing political headaches as different nations vie for fishing rights. Further north, other fossil evidence shows that more warm water has been reaching the Arctic from the Atlantic, likely contributing to melting sea ice. 
Further west, a slowdown in the Atlantic conveyor circulation means that waters are not warming as much as we would expect, while furthest west, close to the US and Canada, the warm Gulf Stream seems to be shifting northwards, which will have profound consequences for important fisheries. One of the ways that these circulation systems can be affected is when the North Atlantic gets less salty. Climate change can cause this to happen by increasing rainfall, increasing ice melt, and increasing the amount of water coming out of the Arctic Ocean. Melting following the peak of the Little Ice Age in the mid-1700s may have triggered an input of freshwater, causing some of the earliest changes that we found, with modern climate change helping to propel those changes beyond the natural variability of the Holocene. We still don't know what has ultimately caused these changes in ocean circulation. But it does seem that the ocean is more sensitive to modern climate changes than previously thought, and we will have to adapt. source: https://www.sciencealert.com/fossils-reveal-our-ocean-is-changing-in-a-ways-it-hasn-t-for-10-000-years
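The species-counting approach described above boils down to turning raw foraminifera counts into relative abundances per sediment sample, then tracking a warm-versus-cold index with depth (i.e., with time). A minimal sketch of that bookkeeping (the function name and the grouping into "warm" species are illustrative, not the study's exact method):

```python
import numpy as np

def warm_index(counts, warm_mask):
    """Per-sample fraction of foraminifera counts from warm-water species.

    counts: array of shape (samples, species), raw specimen counts per
    sediment sample (ordered e.g. from youngest to oldest).
    warm_mask: boolean array of shape (species,) marking warm-water taxa.
    """
    counts = np.asarray(counts, dtype=float)
    totals = counts.sum(axis=1)
    return counts[:, warm_mask].sum(axis=1) / totals
```

A rising index toward the top of the core (the most recent sediment) would indicate warm waters replacing cold, nutrient-rich waters, as the study reports south of Iceland.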
  37. 1 point
    Recently found this: how to install ColorBrewer in ArcMap 10 https://frew.eri.ucsb.edu/private/ESM263/week/2/Using_ColorBrewer_with_ArcMap_10.html
  38. 1 point
    Five new remote-sensing satellites were sent into planned orbit from the Jiuquan Satellite Launch Center in northwest China's Gobi Desert Thursday. The five satellites were launched by a Long March-11 carrier rocket at 2:42 p.m. (Beijing Time). The satellites belong to a commercial remote-sensing satellite constellation project "Zhuhai-1," which will comprise 34 micro-nano satellites, including video, hyperspectral, and high-resolution optical satellites, as well as radar and infrared satellites. The carrier rocket was developed by the China Academy of Launch Vehicle Technology, and the satellites were produced by the Harbin Institute of Technology and operated by the Zhuhai Orbita Aerospace Science and Technology Co. Ltd. Thursday's launch was the 311th mission for the Long March series carrier rockets. The newly launched satellites comprise four hyperspectral satellites with 256 wave-bands and a coverage width of 150 km, and a video satellite with a resolution of 90 centimeters. The Zhuhai-1 hyperspectral satellites have the highest spatial resolution and the largest coverage width of their type in China. The data will be used for precise quantitative analysis of vegetation, water and crops, and will provide services for building smart cities, said Orbita, the largest private operator of hyperspectral satellites in orbit. The company aims to cooperate with government organizations and enterprises to expand the big data satellite services. source: https://www.spacedaily.com/reports/China_launches_new_remote_sensing_satellites_999.html
  39. 1 point
    Take a look at this: links: https://www.cambridge.org/core/what-we-publish/textbooks Untested; you may need to make a free user account first. They have a nice collection of engineering and geosciences books: https://www.cambridge.org/core/what-we-publish/textbooks/listing?aggs[productSubject][filters]=F470FBF5683D93478C7CAE5A30EF9AE8 https://www.cambridge.org/core/what-we-publish/textbooks/listing?aggs[productSubject][filters]=CCC62FE56DCC1D050CA1340C1CCF46F5
  40. 1 point
    The Gridded Population of the World (GPW) collection, now in its fourth version (GPWv4), models the distribution of human population (counts and densities) on a continuous global raster surface. Since the release of the first version of this global population surface in 1995, the essential inputs to GPW have been population census tables and corresponding geographic boundaries. The purpose of GPW is to provide a spatially disaggregated population layer that is compatible with data sets from social, economic, and Earth science disciplines, and remote sensing. It provides globally consistent and spatially explicit data for use in research, policy-making, and communications. For GPWv4, population input data are collected at the most detailed spatial resolution available from the results of the 2010 round of Population and Housing Censuses, which occurred between 2005 and 2014. The input data are extrapolated to produce population estimates for the years 2000, 2005, 2010, 2015, and 2020. A set of estimates adjusted to national-level historic and future population predictions from the United Nations’ World Population Prospects report is also produced for the same set of years. The raster data sets are constructed from national or subnational input administrative units to which the estimates have been matched. GPWv4 is gridded with an output resolution of 30 arc-seconds (approximately 1 km at the equator). The nine data sets of the current release are collectively referred to as the Revision 11 (or v4.11) data sets. In this release, several issues identified in the 4.10 release of December 2017 have been corrected as follows: The extent of the final gridded data has been updated to a full global extent. Erroneous no-data pixels in all of the gridded data were recoded as 0 in cases where the census reported known 0 values. 
The netCDF files were updated to include the Mean Administrative Unit Area layer, the Land Area and Water Area layers, and two layers indicating the administrative level(s) of the demographic characteristics input data. The National Identifier Grid was reprocessed to remove artefacts from inland water. In addition, two attributes were added to indicate the administrative levels of the demographic characteristics input data, and the data set zip files were corrected to include the National Identifier Polygons shapefile. Two new classes (Total Land Pixels and Ocean Pixels) were added to the Water Mask. The administrative level names of the Greece Administrative Unit Centre Points were translated to English. Separate rasters are available for population counts and population density consistent with national censuses and population registers, or alternative sources in rare cases where no census or register was available. All estimates of population counts and population density have also been nationally adjusted to population totals from the United Nations’ World Population Prospects: The 2015 Revision. In addition, rasters are available for basic demographic characteristics (age and sex), data quality indicators, and land and water areas. A vector data set of the centrepoint locations (centroids) for each of the input administrative units and a raster of national level numeric identifiers are included in the collection to share information about the input data layers. The raster data sets are now available in ASCII (text) format as well as in GeoTIFF format. Five of the eight raster data sets are also available in netCDF format. In addition, the native 30 arc-second resolution data were aggregated to four lower resolutions (2.5 arc-minute, 15 arc-minute, 30 arc-minute, and 1 degree) to enable faster global processing and support of research communities that conduct analyses at these resolutions. All of these resolutions are available in ASCII and GeoTIFF formats. 
NetCDF files are available at all resolutions except 30 arc-second. All spatial data sets in the GPWv4 collection are stored in geographic coordinate system (latitude/longitude).
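As a quick sanity check on the stated resolution, the ground distance spanned by 30 arc-seconds at the equator can be computed from the Earth's equatorial radius. This is a small illustrative sketch; the WGS84 radius value used below is a standard figure and an assumption of this example, not quoted from the GPW documentation:

```python
import math

# WGS84 equatorial radius in km (standard value; an assumption of this
# example, not taken from the GPW documentation).
EQUATORIAL_RADIUS_KM = 6378.137

def arcsec_to_km_at_equator(arcsec: float) -> float:
    """Ground distance covered by an angle of `arcsec` along the equator."""
    degrees = arcsec / 3600.0
    return 2 * math.pi * EQUATORIAL_RADIUS_KM * degrees / 360.0

print(round(arcsec_to_km_at_equator(30), 3))  # → 0.928, i.e. "approximately 1 km"
```

The same function gives roughly 4.6 km, 27.8 km, 55.7 km, and 111.3 km for the four aggregated resolutions (2.5 arc-minute, 15 arc-minute, 30 arc-minute, and 1 degree).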
  41. 1 point
    Really nice! Is it possible to leverage this for forecasting? That would be interesting.
  42. 1 point
    New algorithm solves complex problems more easily and more accurately on a personal computer, while requiring less processing power than a supercomputer.

The exponential growth in computer processing power seen over the past 60 years may soon come to a halt. Complex systems such as those used in weather forecasting require high computing capacities, but the cost of running supercomputers to process large quantities of data can become a limiting factor. Researchers at Johannes Gutenberg University Mainz (JGU) in Germany and Università della Svizzera italiana (USI) in Lugano, Switzerland, have recently unveiled an algorithm that can solve complex problems with remarkable facility, even on a personal computer.

Exponential growth in IT will reach its limit

In the past, we have seen a constant rate of acceleration in information processing power, as predicted by Moore's Law, but it now looks as if this exponential growth is reaching its limit. New developments rely on artificial intelligence and machine learning, but the related processes are largely not well understood. "Many machine learning methods, such as the very popular deep learning, are very successful, but work like a black box, which means that we don't know exactly what is going on. We wanted to understand how artificial intelligence works and gain a better understanding of the connections involved," said Professor Susanne Gerber, a specialist in bioinformatics at Mainz University. Together with Professor Illia Horenko, a computer expert at Università della Svizzera italiana and a Mercator Fellow of Freie Universität Berlin, she has developed a technique for carrying out incredibly complex calculations at low cost and with high reliability. Gerber and Horenko, along with their co-authors, have summarized their concept in an article entitled "Low-cost scalable discretization, prediction, and feature selection for complex systems," recently published in Science Advances.
"This method enables us to carry out tasks on a standard PC that previously would have required a supercomputer," emphasized Horenko. In addition to weather forecasts, the researchers see numerous possible applications, such as solving classification problems in bioinformatics, image analysis, and medical diagnostics.

Breaking down complex systems into individual components

The paper presented is the result of many years of work on the development of this new approach. According to Gerber and Horenko, the process is based on the Lego principle, whereby complex systems are broken down into discrete states or patterns. With only a few patterns or components, i.e. three or four dozen, large volumes of data can be analyzed and their future behavior predicted. "For example, using the SPA algorithm we could make a data-based forecast of surface temperatures in Europe for the day ahead and have a prediction error of only 0.75 degrees Celsius," said Gerber. It all works on an ordinary PC and has an error rate that is 40 percent better than the computer systems usually used by weather services, whilst also being much cheaper.

SPA, or Scalable Probabilistic Approximation, is a mathematically based concept. The method could be useful in various situations that require large volumes of data to be processed automatically, for example in biology, when a large number of cells need to be classified and grouped. "What is particularly useful about the result is that we can then get an understanding of what characteristics were used to sort the cells," added Gerber. Another potential area of application is neuroscience: automated analysis of EEG signals could form the basis for assessments of cerebral status. It could even be used in breast cancer diagnosis, as mammography images could be analyzed to predict the results of a possible biopsy.
"The SPA algorithm can be applied in a number of fields, from the Lorenz model to the molecular dynamics of amino acids in water," concluded Horenko. "The process is easier and cheaper and the results are also better compared to those produced by the current state-of-the-art supercomputers." The collaboration between the groups in Mainz and Lugano was carried out under the aegis of the newly-created Research Center Emergent Algorithmic Intelligence, which was established in April 2019 at JGU and is funded by the Carl Zeiss Foundation. https://www.uni-mainz.de/presse/aktuell/10864_ENG_HTML.php
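The "Lego principle" described above, discretizing a system into a handful of recurring patterns and then predicting transitions between them, can be illustrated with a deliberately simplified sketch. This toy uses plain 1-D k-means-style clustering plus a Markov transition matrix on synthetic data; it is an analogy for the general discretize-and-predict idea, not the actual SPA algorithm from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "complex system": a noisy signal that cycles through three regimes.
regimes = np.repeat([0.0, 5.0, 10.0], 100)
series = np.tile(regimes, 5) + rng.normal(0, 0.5, size=1500)

# Step 1: discretize the data into K patterns (simple 1-D k-means).
K = 3
centers = np.linspace(series.min(), series.max(), K)
for _ in range(20):
    labels = np.argmin(np.abs(series[:, None] - centers[None, :]), axis=1)
    centers = np.array([series[labels == k].mean() for k in range(K)])

# Step 2: estimate a transition matrix between the discrete patterns.
T = np.zeros((K, K))
for a, b in zip(labels[:-1], labels[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)

# Step 3: predict the next pattern as the most likely transition
# from the last observed state.
next_state = int(np.argmax(T[labels[-1]]))
print(np.sort(centers).round(1), next_state)
```

Once the data are reduced to a few dozen patterns, both the clustering and the transition statistics stay cheap no matter how long the original series is, which is the intuition behind running such analyses on an ordinary PC.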
  43. 1 point
    Google announced that Dataset Search, a service that lets you search close to 25 million publicly available data sets, is now out of beta. Dataset Search first launched in September 2018. Researchers can use these data sets, which range from small ones (such as how many cats there were in the Netherlands from 2010 to 2018) to large annotated audio and image sets, to check their hypotheses or to train and test their machine learning models. The tool currently indexes about 6 million tables.

With this release, Dataset Search is getting a mobile version, and Google is adding a few new features. The first of these is a filter that lets you choose which type of data set you want to see (tables, images, text, etc.), which makes it easier to find the data you're looking for. In addition, the company has added more information about the data sets and the organizations that publish them.

[Screenshot: a search for 'remote sensing' returning geographic data sets]

A lot of the data in the search index comes from government agencies. In total, Google says, there are about 2 million U.S. government data sets in the index right now. But you'll also regularly find Google's own Kaggle, as well as a number of other public and private organizations that make public data available. As Google notes, anybody who owns an interesting data set can make it available for indexing by describing it with standard schema.org markup. Source
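The schema.org markup mentioned above is typically embedded in a data set's landing page as a JSON-LD block using the schema.org Dataset type. A minimal sketch of what that block might look like follows; all of the names, URLs, and field values here are invented placeholders for illustration, not a real data set:

```python
import json

# Hypothetical data set description using the schema.org "Dataset" type.
# Every value below is a placeholder, not a real resource.
dataset_markup = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example land-cover classification set",
    "description": "Annotated satellite image tiles for land-cover mapping.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "keywords": ["remote sensing", "land cover"],
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.com/data/landcover.csv",
    },
}

# The resulting JSON-LD string would go inside a
# <script type="application/ld+json"> tag on the page hosting the data set.
print(json.dumps(dataset_markup, indent=2))
```

Pages carrying this kind of markup are what Dataset Search crawls and indexes.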
  44. 1 point
  45. 1 point
    Looking forward to playing Mario and The Legend of Zelda on those devices 😁
  46. 1 point
    SELECT *, ST_Buffer(geom, 50) AS geom_buf
INTO gis_osm_pois_buf
FROM gis_osm_pois;

Add * to your query to include all of your fields. Since your new geometry field comes from ST_Buffer, give it an alias that differs from the existing geom column (here geom_buf, but geometry/the_geom or whatever you prefer works too); otherwise the SELECT ... INTO will fail with a duplicate column name.
  47. 1 point
    The GeoforGood Summit 2019 drew its curtains close on 19 Sep 2019, and as a first-time attendee, I was amazed to see the number of new developments announced at the summit. The summit, being a first of its kind, combined the user summit and the developer summit into one, letting users benefit from the knowledge of new tools and developers understand the needs of the users. Since my primary focus was on large-scale geospatial modeling, I attended only the workshops and breakout sessions related to Google Earth Engine. With that, let's look at 3 new exciting developments to hit Earth Engine.

Updated documentation on machine learning

Documentation, really? Yes! As an amateur Earth Engine user myself, my number one complaint about the tool has been its abysmal quality of documentation, spread between its developers site, Google Earth's blog, and their Stack Exchange answers. So any update to the documentation is welcome. I am glad that the documentation has been updated to help the ever-exploding user base of geospatial data scientists interested in implementing machine learning and deep learning models. The documentation comes with its own example Colab notebooks. The example notebooks include supervised classification, unsupervised classification, dense neural networks, convolutional neural networks, and deep learning on Google Cloud. I found these notebooks incredibly useful for getting started, as there are quite a few non-trivial data type conversions (int to float32 and so on) in the process flow.

Earth Engine and AI Platform integration

Nick Clinton and Chris Brown jointly announced the much overdue Earth Engine + Google AI Platform integration. Until now, users were essentially limited to running small jobs on Google Colab's virtual machine (VM) and hoping that the connection with the VM didn't time out (it usually lasts about 4 hours). Other limitations included the lack of any task monitoring or queuing capabilities. Not anymore!
The new ee.Model() package lets users communicate with a Google Cloud server that they can spin up based on their own needs. Needless to say, this is a HUGE improvement over the previous primitive deep learning support provided on the VM. Although it was free, one could simply not train, validate, predict, and deploy any model larger than a few layers; that had to be done separately on the Google AI Platform once the .TFRecord objects were created in a Google bucket. With this cloud integration, the task has been simplified tremendously by letting users run and test their models right from the Colab environment. The ee.Model() class comes with some useful functions, such as ee.Model.fromAIPlatformPredictor() to make predictions on Earth Engine data directly from your model sitting on Google Cloud. Lastly, since your model now sits on the AI Platform, you can cheat and use your own models trained offline to predict on Earth Engine data and make maps of their output. Note that your model must be saved in the tf.contrib.saved_model format if you wish to do so; the popular Keras call model.save('model.h5') is not compatible with ee.Model().

Moving forward, it seems the team plans to stick to the Colab Python IDE for all deep learning applications. However, it's not a death blow for the beloved JavaScript Code Editor. At the summit, I saw that participants still preferred the JavaScript Code Editor for their non-neural machine learning work (support vector machines, random forests, etc.). Being a Python lover myself, I too go to the Code Editor for quick visualizations and for Earth Engine Apps! I did not get to try out the new ee.Model() package at the summit, but Nick Clinton demonstrated a notebook where a simple working example has been hosted to help us learn the function calls.
Some kinks still remain in the development, like limiting a convolution kernel to only 144 pixels wide during prediction because of "the way Earth Engine communicates with Cloud Platform," but he assured us that they will be fixed soon. Overall, I am excited about the integration because Earth Engine is now a real alternative for my geospatial computing work. And with the Earth Engine team promising more new functions in the ee.Model() class, I wonder if companies and labs around the world will start migrating their modeling work to Earth Engine.

Cooler Visualizations!

Matt Hancher and Tyler Erickson showed off some new functionality related to visualizations, which makes it vastly simpler to create animated visuals. With the ee.ImageCollection.getVideoThumbURL() function, you can create your own animated GIFs within a few seconds! I tried it on a bunch of datasets, and the speed of creating the GIFs was truly impressive. Say goodbye to exporting each iteration of a video to your drive, because these GIFs appear right in the console using the print() command! Shown above is an example of a global temperature forecast over time from the 'NOAA/GFS0P25' dataset. The code for making the GIF can be found here. The animation is based on the example shown in the original blog post by Michael DeWitt, and I referred to this gif-making tutorial on the developers page to make it.

I did not get to cover all the new features and functionality introduced at the summit. For that, be on the lookout for event highlights on Google Earth's blog. Meanwhile, you can check out the session resources from the summit for presentations and notebooks on topics that interest you.

Presentation and resources

Published in Medium
  48. 1 point
    One of my favorite image hosting services. This is their announcement: Rest in Peace TinyPic
  49. 1 point
    Which WebGIS software can be used with Google Cloud?
  50. 1 point
    It does really work. You probably missed a step or two during the process. One way to check whether a file is NT or non-NT format is to open it with GPSMapEdit: it won't open the NT format, but it will open the non-NT one. Here I show a snapshot of trying to open an NT format file in GPSMapEdit, and here is the same file in non-NT format.

OK, since I cannot edit my previous post, I am going to re-explain the procedure here. I enclose some snapshots for clarity.

1. Open GMAPTool.
2. Add the NT-formatted img file/s (in my case, the filename is 62320070.img).
3. Go to the Split tab and create subfiles. Click Split all.
4. Download the Garmin-GMP-extractor.exe tool and put it in the same folder as your working files.
5. Drag the GMP file onto the Garmin GMP extractor tool. It will explode/extract the GMP file into five types of subfiles (.LBL, .NET, .NOD, .RGN and .TRE).
6. Back in GMAPTool, add those subfiles.
7. Go to the Join tab. Name the output file and directory. We can give a mapset name. Then click Join all.

FINALLY, the result is another img file in non-NT format. If you look closely, there is a slight difference in file size between the two files.

