
Leaderboard

Popular Content

Showing content with the highest reputation since 10/27/2020 in all areas

  1. The all-virtual Esri User Conference 2021 just dropped the curtain after a four-day event. Here's what's new, as explained by Jack Dangermond. ArcGIS Image is software for cloud-based remote sensing. ArcGIS Velocity gets real-time data visualization maps. ArcGIS Enterprise can now be installed using Kubernetes. More experiments with field survey. 😑 More integrated BIM for ArcGIS. A Maps SDK for game developers. Cool presentation though! What's new in ArcGIS Online. What's new in ArcGIS Pro. Moreover, ArcGIS Desktop will be supported until 2026, and ArcGIS Pro 2.6 is due in Q2 next year. AI (or GeoAI) will become more ubiquitous, as will 3D mapping and sensor-based real-time data processing. I am hoping that ArcGIS Online credit costs come down and become easier to purchase.
    2 points
  2. Six years ago, we compared ArcGIS vs QGIS. The response was incredible and we thank you for that. But since then, the game has changed. Yet, the players are still the same. The Omen of Open Source GIS is back with QGIS 3. It’s up against the Pioneer of Proprietary GIS, ArcGIS Pro. Buckle up. Because today, you’re going to witness a head-to-head battle between the juggernauts of GIS software. Pick your poison. Table of Contents 1. 3D 2. Interface 3. Coordinate Systems 4. Catalog 5. Editing 6. Vector Analysis 7. Remote Sensing 8. Speed 9. Tables 10. Statistics 11. Raster Analysis 12. Networks 13. ETL 14. Scripting 15. Labeling 16. Map Automation 17. Animation 18. Map Types 19. Topology 20. Interoperability 21. Geocoding 22. Symbology 23. LiDAR 24. Map Elements 25. Metadata 26. Database 27. Web Maps 28. Errors 29. Cost 30. Extras 31. Imagery 32. File Structure 33. Community 34. Emerging Tech 35. Documentation https://gisgeography.com/arcgis-pro-vs-qgis-3/
    2 points
  3. Download Autodesk Civil 3D https://getintopc.com/softwares/3d-cad/autodesk-civil-3d-2020-free-download-2785609/
    1 point
  4. On February 7, 2017, the twentieth and final inclination (Delta-I) maneuver of Landsat 7 took place. (Delta-I maneuvers keep the spacecraft in the correct orbital position to ensure it maintains its 10:00 am ± 15 minutes mean local time (MLT) equatorial crossing.) Landsat 7 reached its peak outermost inclination boundary of 10:14:58 MLT on August 11, 2017. Landsat 7 is now drifting in its inclination and will fall back to 09:15 am MLT by July 2021. The chart below illustrates the inclination trend from June 2014 to June 2026. The USGS and NASA are planning for Landsat 7 to remain on-station and fulfill its current science mission until Landsat 9 completes its launch (scheduled for September 16, 2021), on-orbit checkout, and commissioning. Sometime after Landsat 9 is nominally acquiring science mission data, Landsat 7 will exit the constellation and lower its orbit by 8 km to prepare for servicing by NASA’s On-Orbit Servicing, Assembly, and Manufacturing-1 (OSAM-1) mission. The mission - the first of its kind in low Earth orbit - will provide Landsat 7 with the needed fuel for a successful decommissioning. source: https://www.usgs.gov/core-science-systems/nli/landsat/landsat-7?qt-science_support_page_related_con=0#qt-science_support_page_related_con
    1 point
  5. Many of Africa’s agricultural endeavors have long been tied to the whims of the weather. When it rains, a country’s gross domestic product might soar. When it doesn’t rain, economies suffer. The reliance has been driven in part by the perception that dry, arid Africa has limited water resources. But a new study, years in the making, shows a different reality. As one South African scientist recently noted, if all the rainfall stopped today and for the next 100 years in Africa, there would still be plenty of water stored underneath the continent’s surface; it just wouldn’t be evenly distributed. That’s why maps are essential in showing which aquifers are vulnerable to rainfall variability. “You can imagine the possibilities,” said hydrologist Seifu Kebede Gurmessa from the University of KwaZulu-Natal in South Africa, a coauthor of the study. The study, released in February, uses maps from a geographic information system (GIS) analysis to show water replenishment across the continent. It turns out that the vast majority of Africa’s countries either have high water storage or high levels of groundwater replenishment. Five countries have both. Five have neither. “We say we are prisoners of the rainfall,” Gurmessa said of Africa’s dependence on the resource for agriculture, one of the continent’s largest economic outputs. Little groundwater, proportionately, is currently used for irrigation. “How can we break that imprisonment of seasonality in the rainfall?” Groundwater use could be a buffer for the stark seasonal swings. Important Water Discoveries The report, Mapping Groundwater Recharge in Africa from Ground Observations and Implications for Water Security, was led by the British Geological Survey (BGS) and is a sequel to another of the BGS’s groundbreaking studies. Using a geographic information system to aggregate information and perform spatial analysis, the report’s authors brought old data into the present by incorporating factors that impact groundwater recharge, including climate, amount of rainfall, the number of wet days in a year, land cover, vegetation health, and soil type. Nearly a decade ago, the team of international scientists created a map that showed Africa actually had a rather large volume of water hidden and stored underneath the surface. However, the researchers behind the science-shifting report and, later, the BGS’s Africa Groundwater Atlas knew that that was only part of the continent’s water story. Groundwater, like a bank account, depends on regular deposits to balance withdrawals. Once again, results of the latest research were promising. The study’s authors were able to clearly map, for the first time, which countries had sustainable resources and which ones didn’t. The countries were separated into four color-coded categories: low storage/low recharge, high storage/low recharge, low storage/high recharge, and high storage/high recharge. “That map is really key in proposing what you can do in different countries,” Gurmessa said. Most countries had one or the other—high storage or high recharge. Alan MacDonald, the study’s leader and a hydrogeologist with the British Geological Survey, has called it a happy symmetry. “Still,” he said, “so many people in Africa don’t have any access to safe water.” He and the others involved hope their comprehensive research starts a conversation—just like their first report did in 2012—about what’s possible and what isn’t.
For instance, what kind of water access should be installed in a village if groundwater is plentiful but rainfall is scarce? While none of the scientists involved contemplates a pumping free-for-all that could deplete groundwater, the report does suggest the continent has been faring better than might have been expected in maintaining a healthy water supply. “It’s not all doom and gloom. It’s not all bad. In some areas, there is potential for groundwater to provide safe water supplies for many more people than currently have them,” said Kirsty Upton, who oversees BGS’s Africa Groundwater Atlas. As the report itself states, “With increasing calls to draw from groundwater storage in order to stimulate economic growth and improve food security in Africa, a more nuanced approach to water security is necessary.” Making Sustainable Plans The team’s 2012 countrywide study of the continent’s groundwater conditions—a first of its kind that attracted media attention—led government ministers to hang the study’s maps from office walls. The work also encouraged funding to help 50 Africa-based partners create a continental groundwater atlas, with data downloaded thousands of times by nongovernmental organizations, governments, students, and researchers. The study’s research and data were also the foundation for the latest groundwater recharge report, which reveals how sustainable the water supply is. “That was the next stage for us,” MacDonald said. Ten scientists including MacDonald—five in Africa and the others from around the world—promptly got to work, poring over 320 existing studies to find the most reliable information as well as common themes. They were about to publish in 2017 until MacDonald, noticing that some of the geolocation data for the original reference studies was off, started from the very beginning again to reanalyze the information. “You want to get this right,” he said, considering the importance of the data and how far its reach may be. He recalled a moment, shortly after the 2012 study on groundwater was released, when he met two French mountaineers who had a copy of the daily newspaper Le Monde. “And there was a picture of my maps,” he said. He likened the maps, including the most recent study on replenishment, to a conversation starter. “If you do get a map that people are really going to look at and use, you want to make sure that you’re giving them information that is useful to them and is a gateway to more information, and not misleading people,” MacDonald said. The researchers continue to be curious too, looking at additional facets. They’re already developing their next study, looking at the quality—primarily the salinity—of Africa’s groundwater. A Promising Future For the most recent study, researchers focused on long-term average groundwater recharge rates across Africa from 1970 to 2019. They used 134 existing studies deemed the most reliable, winnowing the total down from 320 and factoring in climate and terrestrial parameters to scale for the entire continent. The process wasn’t quick, easy, or highly technical. In other projects, MacDonald said, he has used data from the National Aeronautics and Space Administration (NASA) GRACE satellite, which measures water storage changes from space, averaging over a large area (400 x 400 km) to indicate whether an area’s water has been recently depleted.
“But it only gets you so far,” he said, and this time the researchers needed to look in much more detail to understand water renewability on the continent. “It was sheer old-fashioned grunt work.” He and the others went through old files and maps, some found on dusty shelves. The result of this investigation—funded primarily by the UPGro research program, whose mission is Unlocking the Potential of Groundwater for the Poor—was published in Environmental Research Letters in February. “It is really a good time to be a groundwater expert in this decade in Africa,” Gurmessa said. “The future also looks more promising.” Access to and availability of water can affect a whole host of issues, ranging from school attendance to conflict that comes from agriculture workers migrating from one rural area to another, not to mention overall human health. Water is tied to everything in one’s life, he pointed out. source: https://www.esri.com/about/newsroom/blog/africa-groundwater-mapping/
    1 point
  6. Classification of precipitation change regimes based on changes in the precipitation mean state and variability. Shading indicates the ratio of change in precipitation variability and mean precipitation. Climate models predict that rainfall variability over wet regions globally will be greatly enhanced by global warming, causing wide swings between dry and wet conditions, according to a joint study by the Institute of Atmospheric Physics (IAP) of the Chinese Academy of Sciences (CAS) and the Met Office, the UK's national meteorological service. This study was published in Science Advances on July 28 2021. Increased rainfall leads to floods, less rainfall to drought. Researchers realized decades ago that global warming drives increased rainfall on average. How this increase is delivered in time matters enormously. A 2 to 3 percent increase of annual precipitation uniformly spreading across the year does not mean much, but if it falls in a week or a day, it will cause havoc. Using large ensembles of state-of-the-art climate model simulations, this study highlights the increase in rainfall variability across a range of time scales from daily to multiyear. Scientists have found that in a future warming world, climatologically wet regions (including the tropics, monsoon regions and mid- to high-latitudes) will not only get wetter on average, but also swing widely between wet and dry conditions. "As climate warms, climatologically wet regions will generally get wetter and dry regions get drier. Such a global pattern of mean rainfall change is often described as 'wet-get-wetter'. By analogy, the global pattern of rainfall variability change features a 'wet-get-more variable' paradigm. Moreover, the global mean increase in rainfall variability is more than twice as fast as the increase in mean rainfall in a percentage sense," said Zhou Tianjun, corresponding author of the study. Zhou is a senior scientist at IAP. He is also a professor at the University of Chinese Academy of Sciences. The enhanced rainfall variability, to a first order, is due to increased water vapor in the air as climate warms, but is partly offset by the weakening circulation variability. The latter dominates regional patterns of change in rainfall variability. By considering changes in both the mean state and variability of precipitation, the research provides a new perspective for interpreting future precipitation change regimes. "Around two-thirds of land will face a 'wetter and more variable' hydroclimate, while the remaining land regions are projected to become 'drier but more variable' or 'drier and less variable'. This classification of different precipitation change regimes is valuable for regional adaptation planning," said Zhang Wenxia, lead author of the study. "The globally amplified rainfall variability manifests the fact that global warming is making our climate more uneven—more extreme in both wet and dry conditions, with wider and probably more rapid transitions between them," said Kalli Furtado, expert scientist at the Met Office and second author of the study. "The more variable rainfall events could further translate into impacts on crop yields and river flows, challenging the existing climate resilience of infrastructures, human society and ecosystems. This makes climate change adaptation more difficult." source: https://phys.org/news/2021-07-rainfall-increasingly-variable-climate.html
    1 point
  7. A global land cover GeoTIFF was recently released by Impact Observatory (IO) and Esri. To create this geospatial layer, hundreds of thousands of satellite photos were classified into ten unique land use/land cover (LULC) classes using a deep learning model developed in partnership with Microsoft AI for Earth. Sentinel-2 imagery was used to divide the world into ten categories of land cover: Water (areas that are predominantly water, such as rivers, ponds, lakes, and ocean); Trees (clusters at least 10 meters high); Grasslands (such as open savannas, parks, and golf courses); Flooded vegetation (such as wetlands and rice paddies); Crops; Scrubland; Built areas (such as urban/suburban development, highways, railways, and paved areas); Bare ground (areas with little or no vegetation, such as exposed rock/soil and sparsely vegetated deserts); Permanent snow and ice; and Clouds (areas where persistent cloud cover prevents an analysis of the underlying land cover). The end product is a 10-meter resolution GeoTIFF that the developers have released under a Creative Commons 4.0 license. The machine learning model was run on multiple dates throughout the year with the results folded into one consolidated layer to represent land use cover for the year 2020. The 2020 Esri Land Cover dataset can be browsed using Esri’s online Map Viewer. Users can also access the full global GeoTIFF zip file or use Esri’s tool for accessing the land use data by tile.
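If you want to work with one of the downloaded tiles rather than the online Map Viewer, the sketch below shows one way to tally class areas with rasterio and numpy. It is only a rough example: the file name and the value-to-class mapping are assumptions, so check them against the dataset's documentation before relying on the numbers.

```python
# Minimal sketch: tally land cover classes in one downloaded 10 m GeoTIFF tile.
# Requires the rasterio and numpy packages; the file name and the value-to-class
# mapping below are illustrative placeholders - verify the actual class codes
# in the dataset documentation.
import numpy as np
import rasterio

CLASS_NAMES = {  # hypothetical mapping of pixel values to LULC classes
    1: "Water", 2: "Trees", 3: "Grass", 4: "Flooded vegetation", 5: "Crops",
    6: "Scrub/shrub", 7: "Built area", 8: "Bare ground", 9: "Snow/ice", 10: "Clouds",
}

with rasterio.open("esri_lulc_2020_tile.tif") as src:   # placeholder file name
    data = src.read(1)                                  # first band holds the class codes
    # Per-pixel area in km2; only meaningful if the tile CRS is projected in meters.
    pixel_area_km2 = abs(src.res[0] * src.res[1]) / 1e6

values, counts = np.unique(data, return_counts=True)
for value, count in zip(values, counts):
    name = CLASS_NAMES.get(int(value), f"class {value}")
    print(f"{name:20s} {count * pixel_area_km2:12.1f} km2")
```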
    1 point
  8. While the concept of “deepfakes,” or AI-generated synthetic imagery, has been decried primarily in connection with involuntary depictions of people, the technology is dangerous (and interesting) in other ways as well. For instance, researchers have shown that it can be used to manipulate satellite imagery to produce real-looking — but totally fake — overhead maps of cities. The study, led by Bo Zhao from the University of Washington, is not intended to alarm anyone but rather to show the risks and opportunities involved in applying this rather infamous technology to cartography. In fact, their approach has as much in common with “style transfer” techniques — redrawing images in impressionistic, crayon, or other arbitrary fashions — as with deepfakes as they are commonly understood. The team trained a machine learning system on satellite images of three different cities: Seattle, nearby Tacoma and Beijing. Each has its own distinctive look, just as a painter or medium does. For instance, Seattle tends to have larger overhanging greenery and narrower streets, while Beijing is more monochrome and — in the images used for the study — the taller buildings cast long, dark shadows. The system learned to associate details of a street map (like Google or Apple’s) with those of the satellite view. The resulting machine learning agent, when given a street map, returns a realistic-looking faux satellite image of what that area would look like if it were in any of those cities. In the following image, the map corresponds to the top right satellite image of Tacoma, while the lower versions show how it might look in Seattle and Beijing. A close inspection will show that the fake maps aren’t as sharp as the real one, and there are probably some logical inconsistencies like streets that go nowhere and the like. But at a glance the Seattle and Beijing images are perfectly plausible. One only has to think for a few minutes to conceive of uses for fake maps like this, both legitimate and otherwise. The researchers suggest that the technique could be used to simulate imagery of places for which no satellite imagery is available — like one of these cities in the days before such things were possible, or for a planned expansion or zoning change. The system doesn’t have to imitate another place altogether — it could be trained on a more densely populated part of the same city, or one with wider streets. It could conceivably even be used, as this rather more whimsical project was, to make realistic-looking modern maps from ancient hand-drawn ones. source: https://techcrunch.com/2021/04/22/deepfake-tech-takes-on-satellite-maps/
    1 point
  9. From space, large decks of closely spaced stratocumulus clouds appear like bright cotton balls hovering over the ocean. They cover vast areas—literally thousands of miles of the subtropical oceans—and linger for weeks to months. Because these marine clouds reflect more solar radiation than the surface of the ocean, cooling the Earth's surface, the lifetime of stratocumulus clouds is an important component of the Earth's radiation balance. It is necessary, then, to accurately represent cloud lifetimes in the earth system models (ESM) used to predict future climate conditions. Turbulence—air motions occurring at small scales—is primarily responsible for the longevity of marine stratocumulus clouds. Drizzle—precipitation comprising water droplets smaller than half a millimeter in diameter—is constantly present within and below these marine cloud systems. Because these tiny drops affect and are affected by turbulence below marine clouds, scientists need to know more about how drizzle affects turbulence in these clouds to enable more accurate climate forecasts. A team led by Virendra Ghate, an atmospheric scientist, and Maria Cadeddu, a principal atmospheric research engineer in the Environmental Science division at the U.S. Department of Energy's (DOE) Argonne National Laboratory, has been studying the impact of drizzle inside marine clouds since 2017. Their unique data set caught the attention of researchers at DOE's Lawrence Livermore National Laboratory. About three years ago, a collaborator from Livermore, which led national efforts to improve cloud representation in climate models, called for observational studies focusing on drizzle-turbulence interactions. Such studies did not exist at that time because of the limited set of observations and lack of techniques to derive all the geophysical properties of concern. "The analysis of the developed dataset allowed us to show that drizzle decreases turbulence below stratocumulus clouds—something that was only shown by model simulations in the past," said Ghate. "The richness of the developed data will allow us to address several fundamental questions regarding drizzle-turbulence interactions in the future." The Argonne team set out to characterize the clouds' properties using observations at the Atmospheric Radiation Measurement (ARM)'s Eastern North Atlantic site, a DOE Office of Science User Facility, and data from instruments on board geostationary and polar‐orbiting satellites. The instruments collect engineering variables, such as voltages and temperatures. The team combined measurements from different instruments to derive properties of the water vapor and drizzle in and below the clouds. Ghate and Cadeddu were interested in geophysical variables, such as cloud water content, drizzle particle size and others. So they developed a novel algorithm that synergistically retrieved all the necessary parameters involved in drizzle-turbulence interactions. The algorithm uses data from several ARM instruments—including radar, lidar and radiometer—to derive the geophysical variables of interest: size (or diameter) of precipitation drops, amount of liquid water corresponding to cloud drops, and precipitation drops. Using the data from ARM, Ghate and Cadeddu derived these parameters, subsequently publishing three observational studies that focused on two different spatial organizations of stratocumulus clouds to characterize the drizzle-turbulence interactions in these cloud systems. 
Their results led to a collaborative effort with modelers from Livermore. In that effort, the team used observations to improve the representation of drizzle-turbulence interactions in DOE's Energy Exascale Earth System Model (E3SM). "The observational references from Ghate and Cadeddu's retrieval technique helped us determine that version 1 of E3SM produces unrealistic drizzle processes. Our collaborative study implies that comprehensive examinations of the modeled cloud and drizzle processes with observational references are needed for current climate models," said Xue Zheng, a staff scientist in the Atmospheric, Earth, and Energy division at Livermore. Said Cadeddu: "Generally, the unique expertise here at the lab is attributable to our ability to go from the raw data to the physical parameters and from there to the physical processes in the clouds. The data and the instruments themselves are very difficult to use because they are mostly remote sensors that don't directly measure what we need (e.g., rain rate or liquid water path); instead, they measure electromagnetic properties such as backscatter, Doppler spectra and radiance. In addition, the raw signal is often affected by artifacts, noise, aerosols and precipitation. The raw data are either directly related to the physical quantities we want to measure through well-defined sets of equations, or they are indirectly related. In the latter case, deriving the physical quantities means solving mathematical equations called 'inverse problems' which, by themselves, are complicated. The fact that we have been able to develop new ways to quantify the physical properties of the clouds and extract reliable information about them is a major achievement. And it has put us at the forefront of research on these types of clouds." Because they have focused on only a few aspects of the complex drizzle-turbulence interactions, Ghate and Cadeddu plan to continue their research. They also intend to focus on other regions such as the North Pacific and South Atlantic oceans, where the cloud, drizzle and turbulence properties differ vastly from those in the North Atlantic. source: https://phys.org/news/2021-03-algorithm-capture-drizzle-turbulence-interactions-future.html
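For readers unfamiliar with the "inverse problems" Cadeddu mentions, the toy sketch below shows the general idea in a few lines of Python: indirect, noisy measurements are related to the quantity of interest through a forward operator, and a regularized least-squares solve recovers an estimate of that quantity. This is a generic illustration only, not the team's retrieval algorithm, and all names and numbers in it are made up.

```python
# Toy illustration of a regularized "inverse problem": recover a smooth profile x
# from indirect, noisy measurements y = K @ x + noise. This is NOT the authors'
# retrieval algorithm, just a generic Tikhonov least-squares example.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x_true = np.exp(-((np.linspace(0, 1, n) - 0.4) ** 2) / 0.02)   # "true" profile

# Smoothing forward operator: each measurement is a weighted average of x.
K = np.array([[np.exp(-((i - j) / 4.0) ** 2) for j in range(n)] for i in range(n)])
K /= K.sum(axis=1, keepdims=True)

y = K @ x_true + 0.01 * rng.standard_normal(n)   # noisy indirect measurements

# Tikhonov-regularized least squares: minimize ||K x - y||^2 + alpha ||x||^2.
alpha = 1e-3
x_hat = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)

print("retrieval error (RMS):", np.sqrt(np.mean((x_hat - x_true) ** 2)))
```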
    1 point
  10. A recent analysis of the latest generation of climate models — known as CMIP6 — provides a cautionary tale on interpreting climate simulations as scientists develop more sensitive and sophisticated projections of how the Earth will respond to increasing levels of carbon dioxide in the atmosphere. Researchers at Princeton University and the University of Miami reported that newer models with a high “climate sensitivity” — meaning they predict much greater global warming from the same levels of atmospheric carbon dioxide as other models — do not provide a plausible scenario of Earth’s future climate. Those models overstate the global cooling effect that arises from interactions between clouds and aerosols and project that clouds will moderate greenhouse gas-induced warming — particularly in the northern hemisphere — much more than climate records show actually happens, the researchers reported in the journal Geophysical Research Letters. Instead, the researchers found that models with lower climate sensitivity are more consistent with observed differences in temperature between the northern and southern hemispheres, and, thus, are more accurate depictions of projected climate change than the newer models. The study was supported by the Carbon Mitigation Initiative (CMI) based in Princeton’s High Meadows Environmental Institute (HMEI). These findings are potentially significant when it comes to climate-change policy, explained co-author Gabriel Vecchi, a Princeton professor of geosciences and the High Meadows Environmental Institute and principal investigator in CMI. Because models with higher climate sensitivity forecast greater warming from greenhouse gas emissions, they also project more dire — and imminent — consequences such as more extreme sea-level rise and heat waves. The high climate-sensitivity models forecast an increase in global average temperature from 2 to 6 degrees Celsius under current carbon dioxide levels. The current scientific consensus is that the increase must be kept under 2 degrees to avoid catastrophic effects. The 2016 Paris Agreement sets the threshold to 1.5 degrees Celsius. “A higher climate sensitivity would obviously necessitate much more aggressive carbon mitigation,” Vecchi said. “Society would need to reduce carbon emissions much more rapidly to meet the goals of the Paris Agreement and keep global warming below 2 degrees Celsius. Reducing the uncertainty in climate sensitivity helps us make a more reliable and accurate strategy to deal with climate change.” The researchers found that both the high and low climate-sensitivity models match global temperatures observed during the 20th century. The higher-sensitivity models, however, include a stronger cooling effect from aerosol-cloud interaction that offsets the greater warming due to greenhouse gases. Moreover, the models have aerosol emissions occurring primarily in the northern hemisphere, which is not consistent with observations. “Our results remind us that we should be cautious about a model result, even if the models accurately represent past global warming,” said first author Chenggong Wang, a Ph.D. candidate in Princeton’s Program in Atmospheric and Oceanic Sciences. “We show that the global average hides important details about the patterns of temperature change.” In addition to the main findings, the study helps shed light on how clouds can moderate warming both in models and the real world at large and small scales.
“Clouds can amplify global warming and may cause warming to accelerate rapidly during the next century,” said co-author Wenchang Yang, an associate research scholar in geosciences at Princeton. “In short, improving our understanding and ability to correctly simulate clouds is really the key to more reliable predictions of the future.” Scientists at Princeton and other institutions have recently turned their focus to the effect that clouds have on climate change. Related research includes two papers by Amilcare Porporato, Princeton’s Thomas J. Wu ’94 Professor of Civil and Environmental Engineering and the High Meadows Environmental Institute and a member of the CMI leadership team, that reported on the future effect of heat-induced clouds on solar power and how climate models underestimate the cooling effect of the daily cloud cycle. “Understanding how clouds modulate climate change is at the forefront of climate research,” said co-author Brian Soden, a professor of atmospheric sciences at the University of Miami. “It is encouraging that, as this study shows, there are still many treasures we can exploit from historical climate observations that help refine the interpretations we get from global mean-temperature change.” source: https://environment.princeton.edu/news/high-end-of-climate-sensitivity-in-new-climate-models-seen-as-less-plausible/
    1 point
  11. The Indian Space Research Organisation opened its space calendar 2021 with the successful launch of PSLV-C51 carrying Amazonia-1 and 18 other satellites on Sunday. PSLV-C51, carrying Amazonia-1, an optical earth observation satellite from Brazil, and 18 other satellites, lifted off from the first launch pad at Satish Dhawan Space Centre in Sriharikota at 10.24am. Around 17 minutes after lift-off and one minute after the PS4 engine cut-off, PSLV placed its primary payload, the 637 kg Amazonia-1, in a sun-synchronous polar orbit. After placing Amazonia-1 in orbit, the rocket coasted for 54 minutes before the first restart of the upper stage engine and its cut-off nine seconds later. The second coasting phase lasted 48 minutes before the second restart of the PS4 for eight seconds and cut-off. Around one minute later, PSLV started placing the first of the remaining 18 satellites. In the next four minutes, the rocket placed all the satellites in their orbits. The rocket’s journey lasted around two hours. The 18 other satellites in the mission included Satish Dhawan SAT (SDSAT) built by Space Kidz India and UNITYsat, a combination of three satellites designed and built by three colleges -- Sri Shakthi Institute of Engineering and Technology in Coimbatore, JPR Institute of Technology in Sriperumbudur and GH Raisoni College of Engineering in Nagpur. The other satellites were: SindhuNetra, an Indian technology demonstration satellite, SAI-1 NanoConnect-2, a technology demonstration satellite from the US, and 12 SpaceBEEs satellites for two-way satellite communications and data relay. “India and Isro feel extremely proud and honoured to launch Amazonia-1, the first satellite to be designed, integrated and operated by National Institute for Space Research, Brazil. The satellite is in good health and the solar panels have been deployed,” said Isro chairman K Sivan after PSLV placed Amazonia-1 in orbit. "We have planned 14 missions this year including seven launch missions, six satellite missions and the first unmanned mission by the end of this year," he said. Marcos Cesar Pontes, Brazil’s minister of science, technology and innovation, who was present to witness the launch, said the satellite was the result of years of efforts by engineers at the National Institute for Space Research and the Brazilian Space Agency. “The launch represents a new era for Brazil satellite industry and development. This satellite has a very important mission for Brazil. It will monitor the country and the Amazon. It represents a new era of the Brazil satellite industry and development. This is one important step in the partnership between Brazil and India that is going to grow up. We are going to work together a lot. It is the beginning of our strong relationship,” he said. The launch was the 53rd flight of PSLV and the 78th launch vehicle mission from the Sriharikota spaceport. PSLV-C51 was also the third flight using the ‘DL’ variant, which means the rocket was equipped with two solid strap-on boosters. The PSLV-C51/Amazonia-1 mission was the first dedicated PSLV commercial mission for NewSpace India Limited (NSIL), a government of India company under the department of space. Isro said NSIL undertook the mission under a commercial arrangement with Spaceflight Inc, US. With this mission, Isro has launched 342 foreign satellites from 34 countries. source: PSLV-C51/Amazonia 1 launch: Isro places Brazilian satellite in orbit | India News - Times of India (indiatimes.com)
    1 point
  12. February 18, 2021 Each year more than 180 million tons of dust blow out from North Africa, lofted out of the Sahara Desert by strong seasonal winds. Perhaps most familiar are the huge, showy plumes that advance across the tropical Atlantic Ocean toward the Americas. But the dust goes elsewhere, too—settling back down in other parts of Africa or drifting north toward Europe. A dramatic display of airborne dust particles (above) was observed on February 18, 2021, by the Visible Infrared Imaging Radiometer Suite (VIIRS) on the NOAA-20 spacecraft. The dust appears widespread, but particularly stirred up over the Bodélé Depression in northeastern Chad. The image below, also acquired on February 18, shows the scale of the plume in relation to the continents bordering the Atlantic Ocean. It was acquired by NASA’s Earth Polychromatic Imaging Camera (EPIC) on NOAA’s DSCOVR satellite. While much of the plume appears west of Africa, a tendril of dust can be seen riding the winds toward Europe. According to a story by research meteorologist Marshall Shepherd, strong and persistent winds from the south drive Saharan dust toward Europe at least a few times a year. Forecasts from the Copernicus Atmosphere Monitoring Service indicated that most of the dust reaching Europe this weekend will likely be concentrated over Spain and France, but some may carry as far north as Norway. Parts of Spain might see “mud rain,” as the approaching dust plume combines with a weather front. The mid-February dust storm follows an intense event earlier in the month over southern and central Europe. Saharan dust from that storm coated the snow on the Pyrenees and Alps and turned skies orange in France. Dust can degrade air quality and accelerate the melting of snow cover. But it also plays a major role in Earth’s climate and biological systems, absorbing and reflecting solar energy and fertilizing ocean ecosystems with iron and other minerals that plants and phytoplankton need to grow. source: https://earthobservatory.nasa.gov
    1 point
  13. Google Maps for Android is one of the most actively developed Google apps, with new features and improvements routinely being added to the navigation app. In the last two months alone, the app has gained quite a few functionalities, including a new community feed, a Go tab for accessing frequently visited places, messaging for verified businesses, a new driving mode, and food delivery alerts. The app will also soon start showing COVID-19 vaccine locations in the US. Now Google Maps on Android is picking up a new split-screen UI that makes it easier to navigate in Street View mode. This feature has long been available in Google Maps’ web version, but it’s only now making its way to smartphones. As first spotted by Reddit user /u/p3nsive (via 9to5Google), the new UI launches automatically when you drop a pin on the map and enter Street View mode. The Google Maps screen splits in half, with the upper half of the screen occupied by the Street View interface and the corresponding map shown in the bottom half. The Street View path is indicated in blue on the map, and there’s a Telegram logo-like indicator that shows your current position. You can also open Street View mode in full screen and return to the split-screen with a simple tap. In the old UI, it was easy to get lost and roam around aimlessly. The new split-screen UI gives you a much better understanding of exactly where you are on the map and makes it easier to navigate your way around in Street View mode. The new split-screen UI with Street View is rolling out via a server-side switch on the latest version of Google Maps for Android. It was available on our phone running Google Maps v10.59.1. For the time being, the new split-screen UI is only rolling out to Android devices. source: Google Maps makes it easier to navigate with a split screen Street View UI (xda-developers.com)
    1 point
  14. You’d be forgiven for thinking that receiving data transmissions from orbiting satellites requires a complex array of hardware and software, because for a long time it did. These days we have the benefit of cheap software defined radios (SDRs) that let our computers easily tune into arbitrary frequencies. But what about the software side of things? As [Dmitrii Eliuseev] shows, decoding the data satellites are beaming down to Earth is probably a lot easier than you might think. Well, at least in this case. The data [Dmitrii] is after happens to be broadcast from a relatively old fleet of satellites operated by the National Oceanic and Atmospheric Administration (NOAA). These birds (NOAA-15, NOAA-18 and NOAA-19) are somewhat unique in that they fly fairly low and utilize a simple analog signal transmitted at 137 MHz. This makes them especially good targets for hobbyists who are just dipping their toes into the world of satellite reception. [Dmitrii] doesn’t spend a lot of time talking about the hardware in this post, only to say that he’s using a SDRPlay with what he describes as a poor antenna. He provides a link for information on building a more suitable antenna, but the signal is strong enough that an old set of “Rabbit Ears” will do in a pinch. From there he goes over how you can predict when one of the NOAA birds will be passing overhead, and explains how to configure your SDR software to capture the resulting signal. From there, it’s a step-by-step guide on how to make sense of the recorded WAV file. With the help of the scipy library, it’s surprisingly easy to load the WAV file and generate some visualizations of the signal within. Since it’s analog, it only takes a bit more work with the Python Imaging Library (PIL) to convert that into a 2D image. [Dmitrii] notes that using the putpixel function isn’t the most efficient way to do this, and gives some tips on how you could speed up the process greatly, but for the purposes of the demonstration it makes for more easily understood code. Of course, there are already mature software packages that will decode this data for you. But there’s something to be said for doing it yourself, especially since these NOAA satellites won’t be around forever. The new satellites that replace them will certainly be using a more complex protocol, so the clock is ticking if you want to try your hand at this unique programming exercise. source: Decoding NOAA Satellite Images In Python | Hackaday
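As a rough companion to the write-up, here is a hedged sketch of the core steps in Python: read the demodulated APT audio with scipy, take its amplitude envelope via a Hilbert transform, and stack half-second scan lines into a grayscale image. The file name is a placeholder, the recording is assumed to be 11025 Hz mono FM-demodulated audio, and real decoders additionally sync on the sub-carrier and telemetry wedges, which this skips.

```python
# Rough sketch of the decoding steps described above: read the demodulated APT
# audio, take its amplitude envelope, and stack scan lines into an image.
# Assumes an 11025 Hz mono WAV of a NOAA APT pass ("noaa19_pass.wav" is a
# placeholder name); syncing and resampling are deliberately omitted.
import numpy as np
from PIL import Image
from scipy.io import wavfile
from scipy.signal import hilbert

rate, audio = wavfile.read("noaa19_pass.wav")
if audio.ndim > 1:                       # keep a single channel if the file is stereo
    audio = audio[:, 0]

# AM envelope of the 2400 Hz subcarrier carries the image brightness.
envelope = np.abs(hilbert(audio.astype(np.float64)))

# APT sends two scan lines per second, so one line spans half a second of samples.
line_len = rate // 2
n_lines = len(envelope) // line_len
lines = envelope[: n_lines * line_len].reshape(n_lines, line_len)

# Normalize to 8-bit grayscale; Image.fromarray avoids the slow putpixel loop.
lo, hi = np.percentile(lines, (1, 99))
pixels = np.clip((lines - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)
Image.fromarray(pixels).save("noaa_pass.png")
```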
    1 point
  15. gdf.to_crs(epsg=4326) worked in the end. Had to spend a while troubleshooting my environment and geopandas' dependencies to get this to work. Thank you!!
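For anyone who lands on this thread later, a minimal reprojection sketch with geopandas looks something like the following (the file paths are placeholders, and the source layer is assumed to already have a CRS defined):

```python
# Minimal reprojection sketch; assumes geopandas is installed and the source
# file already has a CRS defined.
import geopandas as gpd

gdf = gpd.read_file("input.shp")        # placeholder path
print(gdf.crs)                          # check the current CRS first
gdf_wgs84 = gdf.to_crs(epsg=4326)       # reproject to WGS84 lon/lat
gdf_wgs84.to_file("input_wgs84.shp")
```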
    1 point
  16. Yes, I want to add a point to an existing Postgres table, but via QGIS (digitizing it) by entering exact coordinate (XY) values. In ArcGIS, I can do that using Absolute X,Y (F6). I know how to do it in QGIS by using a plugin named LatLon Tools. The issue is that LatLon Tools cannot be used to add coordinates when the data is loaded from the Postgres database.
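One workaround, if a plugin will not cooperate with the Postgres layer, is to add the feature from the QGIS Python console instead. The sketch below is only a rough illustration: it assumes the active layer is an editable PostGIS point layer, and the coordinates and field handling are placeholders to adapt to your table.

```python
# Hedged PyQGIS sketch (run in the QGIS Python console): add a point feature at
# exact X/Y to the currently active PostGIS point layer. Coordinates and field
# handling are placeholders - adapt them to your table's schema and CRS.
from qgis.core import QgsFeature, QgsGeometry, QgsPointXY
from qgis.utils import iface

layer = iface.activeLayer()             # the Postgres point layer loaded in QGIS
x, y = 101.6869, 3.1390                 # the exact coordinates to digitize

layer.startEditing()
feature = QgsFeature(layer.fields())
feature.setGeometry(QgsGeometry.fromPointXY(QgsPointXY(x, y)))
layer.addFeature(feature)
layer.commitChanges()                   # pushes the new row back to Postgres
```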
    1 point
  17. Flow maps are cartographic visualizations that show the movement of objects, people, or other living things from one location to another. Lines, usually symbolized with an arrow, indicate the direction of movement. Color coding or line width can then be used to indicate the volume of objects moving from one location to another. Airline traffic, animal migration, commuters, and imports/exports are all common types of geographic data typically shown on a flow map. Ilya Boyandin has developed an easy-to-use online tool called Flowmap.blue that takes location data stored in Google Sheets and visualizes it as interactive flow maps. Built using flowmap.gl, deck.gl, mapbox, d3, blueprint, and CARTOColors, Flowmap.blue is a browser-based tool that lets users visualize and animate the movement of geographic data between locations. Users can quickly set up their own custom flow maps by copying the template spreadsheet and adding in their own location data. The site also hosts plenty of examples that showcase how Flowmap.blue was used to map out commuter trips, bicycle rides and sharing trips, human migration, animal migration patterns, and modeling a sewer system. site: https://flowmap.blue/
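As a rough illustration of the data preparation step, the snippet below builds the two tables a Flowmap.blue spreadsheet is based on, a locations sheet and a flows sheet, using pandas. The column names follow the public template as I understand it, so verify them against the template you copy; the cities and counts are made-up placeholders.

```python
# Sketch of the two tables a Flowmap.blue spreadsheet is built from: a
# "locations" sheet and a "flows" sheet. Column names are assumptions based on
# the public template - verify them before publishing.
import pandas as pd

locations = pd.DataFrame(
    {"id": ["JKT", "SBY", "DPS"],
     "name": ["Jakarta", "Surabaya", "Denpasar"],
     "lat": [-6.2088, -7.2575, -8.6500],
     "lon": [106.8456, 112.7521, 115.2167]}
)

flows = pd.DataFrame(
    {"origin": ["JKT", "JKT", "SBY"],
     "dest": ["SBY", "DPS", "DPS"],
     "count": [1250, 830, 410]}        # made-up trip counts
)

# Save as CSV and paste into the corresponding sheets of the copied template.
locations.to_csv("locations.csv", index=False)
flows.to_csv("flows.csv", index=False)
```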
    1 point
  18. The Sea of Galilee (also known as the Sea of Tiberias, Lake Tiberias, Lake of Gennesaret, and Lake Kinneret), located in northeast Israel, is the world’s lowest freshwater lake. After the Dead Sea, it is the second-lowest lake in the world. The lake measures 21 kilometers (13 miles) north-south, and it is only 43 meters (141 feet) deep. Most of the inflow of water to the Sea of Galilee arrives via the Jordan River from the north, although some underground springs drain into the lake. Water levels have been dropping for the past two decades, reaching a near all-time low in 2018. For the winter of 2019, the Sea of Galilee was measured at 212 meters below sea level, not far from the all-time low of 214.87 meters below sea level measured in 2001. As water levels have dropped, the lake has become saltier, impacting its ability to supply drinking water. Saltier water also affects the fish population and encourages algae blooms. Two wet winters since 2018 have brought the water level to 209.9 meters (688.6 feet) below sea level as measured on December 16, 2020. This brings the Sea only 1.105 meters from the maximum level of 208.80 meters below sea level. The Operational Land Imager (OLI) on Landsat 8 captured this false-color image (bands 6-5-4) of the lake and its surrounding landscape on October 27, 2020. The neon green land surrounding the Sea of Galilee and to the south is mostly farmland converted from marshy floodplains. Diversion of water for agricultural purposes is one of the reasons researchers believe the lake's water levels have been falling. source: https://www.geographyrealm.com/sea-of-galilee-water-levels/
    1 point
  19. On November 21st, Sentinel-6 was launched by the European Space Agency (ESA) using a SpaceX Falcon 9 rocket. The satellite was developed in partnership with NASA and other agencies and was named after former NASA Earth Science Division Director Michael Freilich.[1] The recent launch of Sentinel-6 Michael Freilich/Jason-CS is the latest mission that will collect sea level measurements at more frequent intervals to monitor the effects of climate change on our oceans. With one of the biggest threats of climate change being the rise of ocean levels, this represents an important tool for scientists planning and mapping the Earth’s response to climate change. Current sea levels are rising at a rate of 3.6 centimeters per decade. As this rise continues to accelerate, measuring the height of the ocean is a key component of understanding the effects of climate change. The Sentinel-6 satellite is an altimeter-focused instrument, representing a continuity of satellite systems used to monitor oceans since the early 1990s. Sentinel-6 will serve as the reference mission for sea-surface height measurement. Intended to be a two-satellite mission, the second Sentinel-6 satellite is scheduled to launch in 2025 to complement the current system. Additional instruments onboard the satellites will also be used to provide weather data, including atmospheric data to improve climate models and hurricane tracking.[2] In 1992, the mission of measuring sea-level rise began with the TOPEX/Poseidon satellite and continued with Jason-1 (2001), OSTM/Jason-2 (2008), and, in 2016, Jason-3. The launch of the Sentinel-6 satellites will ensure continuity in sea level observations out to 2030. Sentinel-6 has both a low-resolution and a high-resolution observing mode to ensure compatibility with prior missions. Sentinel-6 carries the European Poseidon-4 altimeter, which adds, for the first time, a synthetic aperture radar capability to the altimeter reference mission time series. This will allow measurements of sea levels to be accurate to within a few centimeters. The data also allow the collection of high-resolution vertical profiles of temperature, where a GNSS radio-occultation sounding technique is used to monitor change in the temperature profile. This makes it possible to take measurements in the troposphere and stratosphere, including the modeling of these profiles for forecasting purposes. [3] Measuring Sea Level Rise Sentinel-6 continues a record of nearly thirty years of data that have already measured a progressive rise of sea levels. Since the French-US TOPEX/Poseidon mission, precision now allows us to make realistic averages of how rapidly sea levels are changing. Back in the 1990s, sea level change showed about a 3.1 mm/year increase. In the 2000s, it increased 3.6 mm/year. Between 2013 and 2018, the rate increased to 4.8 mm/year. Scientists have observed that this rate of increase is not uniform across the globe, with some regions seeing a more rapid increase. Instruments from Sentinel-6 will, therefore, be important because they can show precise levels of sea level change across different regions, allowing scientists to analyze where the greatest threats from sea level change are most likely to be.
With the majority of the planet’s population living near coastal regions, forecasting which regions are likely to face near- and long-term threats will be crucial as adaptation to climate change increasingly becomes paramount in global policies.[ Data should become available within three to three and a half months of launch, with near real-time low-resolution data made available. A dedicated FTP server will be provided to enable scientists around the world to utilize the data as they become available.[5] Sentinel-6 is the next generation of sea level altimeter monitoring tools that have steadily improved in accuracy since the 1990s. Scientists can now monitor sea level change to within centimeters of its true value across the globe. The data we have clearly show an accelerating rate of sea level change, and it is expected that Sentinel-6 will not only confirm this but also better forecast change for different regions.
    1 point
  20. All done, enjoy, and don't forget: please be more active.
    1 point
  21. Meeting the most ambitious 1.5°C climate goal requires a rapid phase-out of fossil fuels and mass use of renewables. However, new international research by the Institute of Environmental Science and Technology of the Universitat Autònoma de Barcelona (ICTA-UAB) warns that green energy projects can be as socially and environmentally conflictive as fossil fuel projects. While renewable energies are often portrayed as being environmentally sustainable, this new study cautions about the risks associated with the green energy transition, arguing for an integrated approach that redesigns energy systems in favor of social equity and environmental sustainability. The research, which analyzes protests over 649 energy projects, has been recently published in the journal Environmental Research Letters. The study, authored by an international group of researchers with a large presence of the ICTA-UAB and led by Dr. Leah Temper, from McGill University, draws on data from the Global Atlas of Environmental Justice (EJAtlas), an online database by ICTA-UAB that systematizes over 3000 ecological conflicts. The research examines what energy projects are triggering citizen mobilizations, the concerns being expressed as well as how different groups are impacted, and the success of these movements in stopping and modifying projects. The study finds that conflicts over energy projects disproportionately impact rural and indigenous communities and that violence and repression against protesters was rife, with assassination of activists occurring in 65 cases, or 1 out of 10 cases studied. However, the study also points to the effectiveness of social protest in stopping and modifying energy projects, finding that over a quarter of projects facing social resistance turn out to be either canceled, suspended, or delayed. Furthermore, it highlights how communities engage in collective action as a means of shaping energy futures and make claims for localization, democratic participation, shorter energy chains, anti-racism, climate-justice-focused governance, and Indigenous leadership. According to Dr. Temper, "the study shows that the switch from fossil fuels to green energy is not inherently socially and environmentally benign and demonstrates how communities are standing up to demand a say in energy systems that works for them. These results call for action to ensure that the costs of decarbonization of our energy system do not fall on the most vulnerable members of our society." The study urges climate and energy policymakers to pay closer attention to the demands of collective movements to meaningfully address climate change and to move towards a truly just transition. The study finds that amongst low-carbon energy projects, hydropower is the most socially and environmentally damaging, leading to mass displacement and high rates of violence. Out of the 160 cases of hydropower plants from 43 countries studied, almost 85% of the cases are either high or medium intensity. Indigenous peoples are particularly at risk and are involved in 6 out of 10 cases. Co-author Dr. Daniela Del Bene, from ICTA-UAB, urges caution around large-scale renewables. "The case of hydropower dams shows that even less carbon-emitting technologies can cause severe impacts and lead to intense conflicts, including violence and assassinations of opponents. The energy transition is not only a matter of what technology or energy source to use but also of who controls and decides upon our energy systems", she says.
On the other hand, wind, solar, and geothermal renewable energy projects were the least conflictive and involved lower levels of repression than other projects. According to co-author Sofia Avila, "conflicts around mega wind and solar power infrastructures are not about "blocking" climate solutions but rather about "opening" political spaces to build equitable approaches towards a low-carbon future. For example, in Mexico, long-lasting claims of injustice around an ambitious Wind Power Corridor in Oaxaca have spearheaded citizen debates around a just transition, while different proposals for cooperative and decentralized energy production schemes are emerging in the country." According to Prof. Nicolas Kosoy, from McGill University, "participation and inclusiveness are key to resolving our socio-environmental crises. Both green and brown energy projects can lead to ecological devastation and social exclusion if local communities' and ecosystems' rights continue to be trampled upon." The study argues that place-based mobilizations can point the way towards responding to the climate crisis while tackling underlying societal problems such as racism, gender inequality, and colonialism. According to Dr. Temper, addressing the climate crisis calls for more than a blind switch to renewables. Demand-side reduction is necessary, but it needs to work in tandem with supply-side approaches such as moratoria and leaving fossil fuels in the ground. "Equity concerns need to be foremost in deciding on unminable and unburnable sites. Instead of creating new fossil fuel and green sacrifice zones, there is a need to engage these communities in redesigning just energy futures", she says. source: https://phys.org/news/2020-12-scientists-social-environmental-tied-energy.html
    1 point
  22. There is also a new tool in ArcGIS Pro called "Pixel Editor"; if you need to correct the values, this tool requires the Image Analyst license. https://pro.arcgis.com/en/pro-app/help/analysis/image-analyst/editing-elevation-pixels.htm Depending on your task or your licence, you can also use the "Raster Calculator" (needs Spatial Analyst). For example, if you have a polygon with the elevation data and you want to substitute the values in your raster: polygon to raster, then use the conditional function con() https://desktop.arcgis.com/en/arcmap/10.3/tools/spatial-analyst-toolbox/con-.htm
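A rough arcpy sketch of that polygon-to-raster plus con() workflow is below. The dataset names and workspace are placeholders, it assumes the Spatial Analyst extension is available, and you would adapt the value field and cell size to your own data.

```python
# Hedged arcpy sketch of the workflow described above: rasterize the correction
# polygons, then use Con() to substitute their values into the DEM wherever the
# rasterized polygons are not NoData. Dataset names and the workspace are
# placeholders; requires the Spatial Analyst extension.
import arcpy
from arcpy.sa import Con, IsNull, Raster

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\elevation.gdb"        # placeholder workspace
arcpy.env.snapRaster = "dem"                          # keep cells aligned with the DEM

# Polygon feature class with a field ("ELEV") holding the corrected elevation values.
arcpy.conversion.PolygonToRaster("correction_polys", "ELEV", "corr_ras",
                                 cellsize=Raster("dem").meanCellWidth)

# Where the correction raster has a value, take it; elsewhere keep the original DEM.
fixed = Con(IsNull(Raster("corr_ras")), Raster("dem"), Raster("corr_ras"))
fixed.save("dem_fixed")
```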
    1 point
  23. In some ways, drilling into Antarctica’s ancient ice is easier than interpreting it. Today, more than 2 years after presenting the discovery of the world’s oldest ice core, scientists have published an analysis of the 2.7-million-year-old sample. One surprising finding: Air bubbles from 1.5 million years ago—from a time before the planet’s ice age cycles suddenly doubled in length—contain lower than expected levels of carbon dioxide (CO2), a possible clue to the shift in the ice age cycle. The CO2 levels are “amazingly low,” says Yige Zhang, a paleoclimatologist at Texas A&M University in College Station. He adds that the study, published today in Nature, is “quite interesting” because it reports the first direct measurements of atmospheric gases from that mysterious time. Some 2.6 million years ago, Earth entered a time known as the Pleistocene, which saw the planet swing in and out of deep periods of glaciation at regular 40,000-year intervals. About 1 million years ago, during what’s called the Mid-Pleistocene transition, these ice age cycles went from occurring every 40,000 years to 100,000 years. (The most recent ice age ended 11,000 years ago.) Scientists have long known that tiny changes in Earth’s orbit, called Milankovitch cycles, drive the planet in and out of these ice ages. But nothing changed in orbital patterns 1 million years ago that would have driven the “flip.” Some scientists suspect that overall CO2 levels were higher in the 40,000-year world, but declined over time and cooled the planet, eventually reaching a point where Earth transitioned into deeper, longer freezes every 100,000 years. One way to check that theory would be to examine samples of Earth’s atmosphere from before the flip. But before the discovery of the new ice core, the oldest greenhouse gases one could measure were in trapped bubbles in ice dating to about 800,000 years ago. To reach further back in time, a team of scientists targeted so-called “blue ice” near Antarctica’s surface in the Allan Hills. Here, ancient ice flows have exhumed the oldest ice from the deep. Old ice layers are driven up from below, while wind strips away snow and younger ice. Paul Mayewski, a glaciologist at the University of Maine in Orono, suspected such ice could be ancient, and Michael Bender, a geochemist at Princeton University, developed a way to date chunks of ice directly from trace amounts of argon and potassium gases they contain. In 2015, a team led by John Higgins, a Princeton geochemist, excavated the record-setting core. At first, the oldest ice seemed to contain startling levels of CO2, several times the 407 parts per million (ppm) we see today, says Yuzhen Yan, the Princeton geochemist who led the new study. Further analysis, however, revealed the bubbles had been contaminated by CO2 percolating from beneath the ice, likely released by microbes. That meant the team had to toss out data from many of the oldest samples—a reflection of their conscientiousness, says Bärbel Hönisch, a geochemist at Columbia University. “The authors had to do a lot of work to convince themselves of what they’re actually seeing.” When the team looked at CO2 levels from 1.5 million years ago, they found them on average quite similar to the postflip world, swinging between 204 and 289 ppm, depending on whether the world was in an ice age or not. “It’s surprising,” Yan says, given broad evidence that the world was warmer in the early Pleistocene, before the ice ages grew deeper. 
“The educated guess is you’d have higher CO2 to achieve that. But that’s not something we see.” That means that something other than a long-term CO2 decline was likely driving the cooling, says Peter Clark, a glaciologist at Oregon State University in Corvallis. One such driver could be the cumulative buildup of ice across the Northern Hemisphere; more ice would leave the world more arid, for example, allowing iron-rich dust to fertilize ocean microbes, encouraging them to absorb more CO2 from the atmosphere during glacial times. Clark has long advanced a hypothesis that repeated glaciations gradually scoured away soil and other loose grit that would have prevented ice from “sticking” to bedrock. Once that grit was gone, the anchored ice sheets could thicken and grow to a tipping point, sending the planet into 100,000-year cycles. Other subtleties in the ice’s CO2 levels point to other possible mechanisms. When the planet transitioned to 100,000-year ice ages, for example, levels of CO2 dropped on average 24 ppm lower during glacial periods compared with similar events in the previous era. That suggests the world was quite sensitive to CO2 and could have “flipped” thanks to something like a small disruption in the currents that drive carbon storage in the ocean—perhaps caused by ice sheet growth or something else, Hönisch says. Given its limitations, including a small amount of material collected by a narrow drill, the Allan Hills core is unlikely to settle debate on the ice age transition. However, its data are helping calibrate other, indirect methods of measuring ancient CO2, like using isotopic shifts in single-celled foraminifera fossils. As those methods have improved, their estimates have lined up with the new findings. “It’s a wonderful confirmation that the proxies are really working,” Hönisch says. Meanwhile, the team hasn’t stopped its exploration of the blue ice. “It’s conceivable that there’s ice as old, or even older, out there,” Yan says. Next month, a team led by Higgins will arrive in Antarctica to hunt for it. And this time, they’re bringing a bigger drill. source: https://www.sciencemag.org/news/2019/10/world-s-oldest-ice-core-could-solve-mystery-flipped-ice-age-cycles
    1 point
  24. A new study shows that increased heat from Arctic rivers is melting sea ice in the Arctic Ocean and warming the atmosphere. The study published this week in Science Advances was led by the Japan Agency for Marine-Earth Science and Technology, with contributing authors in the United States, United Arab Emirates, Finland and Canada. According to the research, major Arctic rivers contribute significantly more heat to the Arctic Ocean than they did in 1980. River heat is responsible for up to 10% of the total sea ice loss that occurred from 1980 to 2015 over the shelf region of the Arctic Ocean. That melt is equivalent to about 120,000 square miles of 1-meter thick ice. "If Alaska were covered by 1-meter thick ice, 20% of Alaska would be gone," explained Igor Polyakov, co-author and oceanographer at the University of Alaska Fairbanks' International Arctic Research Center and Finnish Meteorological Institute. Rivers have the greatest impact during spring breakup. The warming water dumps into the ice-covered Arctic Ocean and spreads below the ice, decaying it. Once the sea ice melts, the warm water begins heating the atmosphere. The research found that much more river heat energy enters the atmosphere than melts ice or heats the ocean. Since air is mobile, this means river heat can affect areas of the Arctic far from river deltas. The impacts were most pronounced in the Siberian Arctic, where several large rivers flow onto the relatively shallow shelf region extending nearly 1,000 miles offshore. Canada's Mackenzie River is the only river large enough to contribute substantially to sea ice melt near Alaska, but the state's smaller rivers are also a source of heat. Polyakov expects that rising global air temperatures will continue to warm Arctic rivers in the future. As rivers heat up, more heat will flow into the Arctic Ocean, melting more sea ice and accelerating Arctic warming. Rivers are just one of many heat sources now warming the Arctic Ocean. The entire Arctic system is in an extremely anomalous state as global air temperatures rise and warm Atlantic and Pacific water enters the region, decaying sea ice even in the middle of winter. All these components work together, causing positive feedback loops that speed up warming in the Arctic. "It's very alarming because all these changes are accelerating," said Polyakov. "The rapid changes are just incredible in the last decade or so." Authors of the paper include Hotaek Park, Eiji Watanabe, Youngwook Kim, Igor Polyakov, Kazuhiro Oshima, Xiangdong Zhang, John S. Kimball and Daqing Yang. source: https://www.sciencedaily.com/releases/2020/11/201107133922.htm
    1 point
25. The International Maritime Organization (IMO) has issued a resolution for maritime cyber-risk management, effective January 2021. IMO Resolution MSC.428(98) affirms that maritime operators need to address cyber threats that risk the integrity and availability of technology systems. GPS/GNSS signal jamming and spoofing expose the vulnerabilities of PNT-reliant systems. The single point of failure in the signals used to synchronize military operations or determine a vessel's location leaves maritime systems open to attack. With resilient PNT, maritime and naval vessels can rely on trusted data.

Remote Operations at Sea

In September, Orolia participated in a Remotely Operated Service at Sea (ROSS) demonstration where an unmanned vessel was tele-operated from more than 800 kilometers (500 miles) away. With its SecureSync Interference Detection and Mitigation (IDM) suite, Orolia provided the project's PNT cybersecurity package and delivered precise, reliable data for the control center to pilot the vessel from afar. The IDM suite includes GNSS threat detection and mitigation, as well as the option to include encrypted and alternative signals for use in GNSS-denied environments. After this successful demonstration, SeaOwl Group, the company leading the ROSS project, obtained the first remotely operated vessel navigation license in France.

Diving Deep

Atomic clocks and oscillators are useful for underwater operations where RF signals are unavailable to provide accurate PNT data. Precision timing technologies, such as Orolia's Spectratime mRO-50 oscillator, ensure stable timing for navigation systems through radar. They support missions such as:

- stabilizing and synchronizing sensor measurement data collection for autonomous underwater vehicles (AUVs)
- providing holdover to maintain precise positioning on submarines during extended periods of GNSS signal denial
- generating precise frequencies with low phase noise and less burden on radio receiver architecture, such as search-and-rescue control centers
- operating with low power consumption and increasing the reliability of radio reception.

Resilient PNT is essential at sea, from military missions and commercial freight shipping to port management, search and rescue, research and fishing operations. Jamming and spoofing detection, threat mitigation, and alternative PNT sources configured in multiple layers of protection can ensure continuous operations, even in compromised environments. In shallow or deep-water environments, Orolia's portfolio includes critical infrastructure support for naval command-and-control centers, essential GNSS vulnerability testing and services, and wearable solutions that fit in the palm of a hand.

source: https://www.gpsworld.com/resilient-pnt-critical-to-maritime-advancement/
    1 point
  26. I’ve seen similar results in SAGA GIS using the “Multi Direction Lee Filter” tool. http://www.saga-gis.org/saga_tool_doc/7.1.1/grid_filter_3.html
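For anyone who wants to try the same idea outside SAGA, here is a minimal sketch of a basic (single-window) Lee filter in Python with numpy and SciPy. This is a simplified stand-in, not SAGA's Multi Direction Lee Filter implementation; the function name and the synthetic test array are mine, purely for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7):
    """Basic (single-window) Lee filter for speckle reduction.

    The core idea: weight each pixel between its own value and the local
    mean, according to how much local variance there is relative to the
    overall noise level. Flat areas get smoothed, edges are preserved.
    """
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size)
    local_sq_mean = uniform_filter(img ** 2, size)
    local_var = local_sq_mean - local_mean ** 2

    noise_var = np.mean(local_var)                 # crude global noise estimate
    weight = local_var / (local_var + noise_var)   # ~0 in flat areas, ~1 on edges
    return local_mean + weight * (img - local_mean)

# Example: filter a noisy grid (stand-in for a SAR backscatter array loaded elsewhere).
noisy = np.random.gamma(shape=4.0, scale=0.25, size=(200, 200))  # synthetic speckle-like data
filtered = lee_filter(noisy, size=7)
```

Roughly speaking, the multi-direction variants compute these local statistics along several window orientations and prefer the least-variable direction, which tends to preserve linear features better than the plain square window used here.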
    1 point
27. NGS has developed a new beta tool for obtaining geodetic information about a passive mark in their database. This column will highlight some features (available as of Oct. 5, 2020) that may be of interest to GNSS users. The tool provides all of the information about a station in a more user-friendly format. The box titled "Passive Mark Lookup Tool" is an example of the webtool. The tool provides a lot of information, so I have separated its output into several boxes titled "Passive Mark Lookup Tool — A through D." I will highlight several attributes that I believe will be very useful to users, especially users of leveling-derived and GNSS-derived orthometric heights.

I've highlighted several attributes in the box titled "Passive Mark Lookup Tool — A" that are important to users, such as the published coordinates, their datum and source, the Geoid18 value, the GNSS Useable flag, and the date of last recovery. All of these values are available on an NGS datasheet, but, in my opinion, this tool provides the information in a more user-friendly format.

One calculation that the user can easily compute for marks that have been leveled to and occupied by GNSS equipment is the difference between the published leveling-derived orthometric height and the computed GNSS-derived orthometric height. A large difference may indicate that the mark has moved since the last time it was leveled to, or that its height coordinate has been readjusted since the creation of the published geoid model. The table below provides the calculation using the data from the box titled "Passive Mark Lookup Tool — A." The calculation [H_GNSS = h_GNSS − N_Geoid18; Difference = H_GNSS − H_NAVD88] has been described in several of my previous columns. In this example, the difference between the GNSS-derived orthometric height and the published NAVD 88 height is 6.1 cm.

NGS is looking for comments on this beta webtool, so if users would like this computation added to the tool, they should send a comment to NGS using the link provided on the site (This is a beta product. NGS is interested in your feedback concerning its function and usability as well as how users would like to interact with NGS datasheet information in the future. Email us at [email protected]). So the user should ask the question: did the station move since the last time it was leveled?

Another attribute that would be nice to have in this tool is whether the station was used to create the hybrid geoid model. As of Oct. 5, 2020, users have to go to the Geoid18 webpage to get that information. The Excel file and shapefiles there indicate whether the station was used to create the Geoid18 model. In the case of this example, KK1531, CHAMBERS, the mark was not used in the creation of Geoid18, so NGS felt that the station may have moved and/or its GPS on Bench Mark residual was large relative to its neighbors. See NGS's technical report on Geoid18 for more information on the creation of Geoid18. The GPS on Bench Mark residual analysis was described in several of my previous columns (see "The differences between Geoid18 values and NAD 83, NAVD 88 values" and "NGS 2018 GPS on BMs program in support of NAPGD2022 — Part 6" for examples).

The webtool provides a map depicting the location of the station, photos (if available), and previously published, superseded values of the mark. See the box titled "Passive Mark Lookup Tool — B."

https://www.gpsworld.com/wp-content/uploads/2020/10/zilkoski-beta-tool-column-image-2.jpg

source: https://www.gpsworld.com/ngs-releases-beta-tool-for-obtaining-geodetic-information/
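The H_GNSS = h_GNSS − N_Geoid18 computation quoted above is easy to script. Here is a minimal sketch in Python; the function and variable names are mine (illustrative only), and the sample numbers are hypothetical values chosen to reproduce a 6.1 cm difference like the example, not the actual CHAMBERS datasheet values.

```python
def gnss_orthometric_difference(h_gnss, n_geoid18, h_navd88):
    """Compare a GNSS-derived orthometric height with the published NAVD 88 height.

    h_gnss    : ellipsoid height from GNSS (meters)
    n_geoid18 : GEOID18 geoid height at the mark (meters)
    h_navd88  : published leveling-derived NAVD 88 orthometric height (meters)
    """
    h_ortho_gnss = h_gnss - n_geoid18         # H_GNSS = h_GNSS - N_Geoid18
    difference = h_ortho_gnss - h_navd88      # Difference = H_GNSS - H_NAVD88
    return h_ortho_gnss, difference

# Hypothetical numbers, purely for illustration (not taken from the datasheet):
H, diff = gnss_orthometric_difference(h_gnss=-12.345, n_geoid18=-29.876, h_navd88=17.470)
print(f"GNSS-derived orthometric height: {H:.3f} m, difference vs NAVD 88: {diff * 100:.1f} cm")
```

A difference of several centimeters, as in the example, is exactly the kind of flag that should prompt the "did the station move?" question raised above.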
    1 point
28. Earth is known as the "Blue Planet" due to the vast bodies of water that cover its surface. With over 70% of our planet's surface covered by water, the ocean depths offer basins with an abundance of features, such as underwater plateaus, valleys, mountains and trenches. The average depth of the oceans and seas surrounding the continents is around 3,500 meters, and parts deeper than 200 meters are called the "deep sea". This visualization reveals Earth's rich bathymetry by featuring the ETOPO1 1-Arc Minute Global Relief Model. ETOPO1 integrates land topography and ocean bathymetry and provides complete global coverage, from -90° to 90° in latitude and -180° to 180° in longitude.

The visualization simulates an incremental drop of 10 meters in the water level on Earth's surface. As time progresses and the oceans drain, it becomes evident that underwater mountain ranges are bigger and trenches are deeper than those on dry land. While water drains quickly closer to the continents, it drains slowly in our planet's deepest trenches. These trenches start to become apparent below 5,000 meters, once the majority of the oceans have been drained of water.

In the Atlantic Ocean, two trenches stand out. In the southern hemisphere, the South Sandwich Trench is located between South America and Antarctica, while in the northern hemisphere the Puerto Rico Trench in the eastern Caribbean is the ocean's deepest part. The majority of the world's deepest trenches, though, are located in the Pacific Ocean. In the southern hemisphere, the Peru-Chile (or Atacama) Trench lies off the coast of Peru, and the Tonga Trench lies in the south-west Pacific Ocean between New Zealand and Tonga. In the northern hemisphere, the Philippine Trench is located east of the Philippines, and in the northwest Pacific Ocean we can see a range of trenches starting from the north, such as the Kuril-Kamchatka Trench, and moving south all the way to the Mariana Trench, which drains last.

It is worth recalling that the altitude values of ETOPO1 range between 8,333 meters (topography) and -10,833 meters (bathymetry). This range reflects the limitations of the visualization, since Challenger Deep, Earth's deepest point located in the Mariana Trench, has been measured to a maximum depth of 10,910 meters, and Mount Everest, the highest peak above mean sea level, stands at 8,848 meters.

In this visualization, the ETOPO1 relief model, vertically exaggerated by 60x, uses a gray-brown divergent colormap to separate the bathymetry from the topography. The bathymetry is mapped to brownish hues (tan/shallow to brown/deep) and the dry land to grays (dark gray/low to white/high). A natural consequence of this mapping is that the areas of highest altitude are mapped to whitish hues, as they are almost always covered in snow. Furthermore, to help the viewer's eyes detect surface details that would otherwise go unnoticed, the topography and bathymetry have been rendered with ambient occlusion, a shadowing technique that in this particular visualization darkens features and regions with changes in altitude, such as mountains, ocean crevices and trenches.

download:
https://svs.gsfc.nasa.gov/vis/a000000/a004800/a004823/OceanDrain_Colorbar_1920x1080_30fps.mp4
https://svs.gsfc.nasa.gov/vis/a000000/a004800/a004823/OceanDrain_1920x1080_30fps.mp4

source: https://svs.gsfc.nasa.gov/4823
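To give a feel for how such frames could be reproduced, here is a minimal, hedged Python sketch (numpy, xarray, matplotlib) that colors an ETOPO1-style elevation grid with a brown-below/gray-above divergent colormap split at the current water level and then lowers that level step by step. The file name and the netCDF variable names (x, y, z) are assumptions about a locally downloaded, grid-registered ETOPO1 file, and the color ramp is a rough stand-in for the one used by NASA SVS.

```python
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap, TwoSlopeNorm

# Assumed local copy of the grid-registered ETOPO1 netCDF; variable "z" holds elevation in meters.
ds = xr.open_dataset("ETOPO1_Ice_g_gmt4.grd")
elev = ds["z"].values
lon, lat = ds["x"].values, ds["y"].values

# Divergent ramp: deep brown -> tan below the water level, dark gray -> white above it.
gray_brown = LinearSegmentedColormap.from_list(
    "gray_brown", ["#3b2507", "#d2b48c", "#404040", "#ffffff"])

def render(sea_level_m, outfile):
    """Render one frame with the water level lowered to sea_level_m (0 = today)."""
    vcenter = max(float(sea_level_m), float(elev.min()) + 1.0)  # keep vmin < vcenter
    norm = TwoSlopeNorm(vmin=float(elev.min()), vcenter=vcenter, vmax=float(elev.max()))
    plt.figure(figsize=(12, 6))
    plt.imshow(elev, origin="lower", cmap=gray_brown, norm=norm,
               extent=[lon.min(), lon.max(), lat.min(), lat.max()], aspect="auto")
    plt.colorbar(label="Elevation (m)")
    plt.title(f"Water level: {sea_level_m} m relative to today")
    plt.savefig(outfile, dpi=150)
    plt.close()

# A few sample levels; the actual animation drops the level in 10-meter increments.
for level in range(0, -10001, -2000):
    render(level, f"frame_{abs(level):05d}.png")
```

This sketch skips the vertical exaggeration and ambient occlusion described above; those require a 3D renderer rather than a flat colormapped image.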
    1 point
  29. These are some of the most common geographic misconceptions that are both surprising and surprisingly hard to correct. https://www.nationalgeographic.com/culture/2018/11/all-over-the-map-mental-mapping-misconceptions/
    1 point
30. geemap is a Python package for interactive mapping with Google Earth Engine (GEE), which is a cloud computing platform with a multi-petabyte catalog of satellite imagery and geospatial datasets. During the past few years, GEE has become very popular in the geospatial community and it has empowered numerous environmental applications at local, regional, and global scales. GEE provides both JavaScript and Python APIs for making computational requests to the Earth Engine servers. Compared with the comprehensive documentation and interactive IDE (i.e., the GEE JavaScript Code Editor) of the GEE JavaScript API, the GEE Python API lacks good documentation and functionality for visualizing results interactively. The geemap Python package was created to fill this gap. It is built upon ipyleaflet and ipywidgets, enabling GEE users to analyze and visualize Earth Engine datasets interactively with Jupyter notebooks.

geemap is intended for students and researchers who would like to utilize the Python ecosystem of diverse libraries and tools to explore Google Earth Engine. It is also designed for existing GEE users who would like to transition from the GEE JavaScript API to the Python API. The automated JavaScript-to-Python conversion module of the geemap package can greatly reduce the time needed to convert existing GEE JavaScripts to Python scripts and Jupyter notebooks.

For video tutorials and notebook examples, please visit https://github.com/giswqs/geemap/tree/master/examples. For complete documentation on geemap modules and methods, please visit https://geemap.readthedocs.io/en/latest/source/geemap.html.

Features

Below is a partial list of features available for the geemap package. Please check the examples page for notebook examples, GIF animations, and video tutorials.

- Automated conversion from Earth Engine JavaScripts to Python scripts and Jupyter notebooks.
- Displaying Earth Engine data layers for interactive mapping.
- Supporting Earth Engine JavaScript API-styled functions in Python, such as Map.addLayer(), Map.setCenter(), Map.centerObject(), Map.setOptions().
- Creating split-panel maps with Earth Engine data.
- Retrieving Earth Engine data interactively using the Inspector Tool.
- Interactive plotting of Earth Engine data by simply clicking on the map.
- Converting data format between GeoJSON and Earth Engine.
- Using drawing tools to interact with Earth Engine data.
- Using shapefiles with Earth Engine without having to upload data to one's GEE account.
- Exporting Earth Engine FeatureCollection to other formats (i.e., shp, csv, json, kml, kmz) using only one line of code.
- Exporting Earth Engine Image and ImageCollection as GeoTIFF.
- Extracting pixels from an Earth Engine Image into a 3D numpy array.
- Calculating zonal statistics by group (e.g., calculating land cover composition of each state/country).
- Adding a customized legend for Earth Engine data.
- Converting Earth Engine JavaScripts to Python code directly within Jupyter notebook.
- Adding animated text to GIF images generated from Earth Engine data.
- Adding colorbar and images to GIF animations generated from Earth Engine data.
- Creating Landsat timelapse animations with animated text using Earth Engine.
- Searching places and datasets from the Earth Engine Data Catalog.
- Using the timeseries inspector to visualize landscape changes over time.
- Exporting Earth Engine maps as HTML files and PNG images.
- Searching Earth Engine API documentation within Jupyter notebooks.

Installation

To use geemap, you must first sign up for a Google Earth Engine account.
geemap is available on PyPI. To install geemap, run this command in your terminal:

pip install geemap

geemap is also available on conda-forge. If you have Anaconda or Miniconda installed on your computer, you can create a conda Python environment to install geemap:

conda create -n gee python
conda activate gee
conda install -c conda-forge geemap

If you have installed geemap before and want to upgrade to the latest version, you can run the following command in your terminal:

pip install -U geemap

If you use conda, you can update geemap to the latest version by running the following command in your terminal:

conda update -c conda-forge geemap

Usage

Important note: A key difference between ipyleaflet and folium is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front end and the back end, enabling the use of the map to capture user input, while folium is meant for displaying static data only (source). Note that Google Colab currently does not support ipyleaflet (source). Therefore, if you are using geemap with Google Colab, you should use import geemap.eefolium. If you are using geemap with binder or a local Jupyter notebook server, you can use import geemap, which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).

Youtube tutorial videos
GitHub page of geemap
Documentation

While working on a small project I found this. It is a quite new library, so some features shown in the tutorials may not work as intended, but overall it is a very good package. The tools make the code much cleaner and more readable. Searching EE docs from the notebook is not yet implemented. Check out the YouTube channel, it's great.
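Here is a minimal usage sketch, assuming you already have an Earth Engine account and have authenticated once; the SRTM asset ID and visualization parameters are just one example, but Map.addLayer() and Map.setCenter() are the JavaScript-API-style functions mentioned in the feature list above.

```python
import ee
import geemap

# One-time authentication may be needed first: ee.Authenticate()
ee.Initialize()

# Create an interactive ipyleaflet-based map (use geemap.eefolium on Google Colab).
Map = geemap.Map(center=[40, -100], zoom=4)

# Add an Earth Engine dataset using the JavaScript-API-style methods.
dem = ee.Image("USGS/SRTMGL1_003")
vis_params = {"min": 0, "max": 4000,
              "palette": ["006633", "E5FFCC", "662A00", "D8D8D8", "F5F5F5"]}
Map.addLayer(dem, vis_params, "SRTM DEM")
Map.setCenter(-100, 40, 4)

Map  # display the map in a Jupyter notebook cell
```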
    1 point
31. Your best bet is to look for free data programs: you submit a research proposal and get the data. I once saw TanDEM-X offering such a program. For now, completely free DEM data is stuck at 30 m resolution; does anyone have another update?

TanDEM-X has a general proposal program, maybe you can try it: https://tandemx-science.dlr.de/ (scroll down)
    1 point
32. Hi, why do you use Landsat 7 and not Landsat 8? And with which software? Here you can see the band combinations for Landsat 7 and 8:
http://landsat.usgs.gov/L8_band_combos.php
http://web.pdx.edu/~emch/ip1/bandcombinations.html

So the band combination for TM is 1-4-7; in Landsat 7, the combination can be 7-6-4:
http://www.harrisgeospatial.com/company/pressroom/blogs/tabid/836/artmid/2928/articleid/14305/the-many-band-combinations-of-landsat-8.aspx

You load every single band in ArcGIS, do a band combination, and then do a classification (supervised or unsupervised).

Two major categories of image classification techniques include unsupervised (calculated by software) and supervised (human-guided) classification.

Unsupervised classification is where the outcomes (groupings of pixels with common characteristics) are based on the software analysis of an image without the user providing sample classes. The computer uses techniques to determine which pixels are related and groups them into classes. The user can specify which algorithm the software will use and the desired number of output classes but otherwise does not aid in the classification process. However, the user must have knowledge of the area being classified when the groupings of pixels with common characteristics produced by the computer have to be related to actual features on the ground (such as wetlands, developed areas, coniferous forests, etc.).

Supervised classification is based on the idea that a user can select sample pixels in an image that are representative of specific classes and then direct the image processing software to use these training sites as references for the classification of all other pixels in the image. Training sites (also known as testing sets or input classes) are selected based on the knowledge of the user. The user also sets the bounds for how similar other pixels must be to group them together. These bounds are often set based on the spectral characteristics of the training area, plus or minus a certain increment (often based on "brightness" or strength of reflection in specific spectral bands). The user also designates the number of classes that the image is classified into. Many analysts use a combination of supervised and unsupervised classification processes to develop final output analysis and classified maps.

(source: https://articles.extension.org/pages/40214/whats-the-difference-between-a-supervised-and-unsupervised-image-classification)
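As an illustration of the unsupervised route described above, here is a minimal k-means sketch in Python (rasterio + scikit-learn) rather than ArcGIS. The file name "landsat_stack.tif" is a hypothetical multi-band GeoTIFF of stacked Landsat bands, and the number of classes is arbitrary.

```python
import numpy as np
import rasterio
from sklearn.cluster import KMeans

# Hypothetical multi-band GeoTIFF containing the stacked Landsat bands.
with rasterio.open("landsat_stack.tif") as src:
    stack = src.read()                  # shape: (bands, rows, cols)
    profile = src.profile

bands, rows, cols = stack.shape
pixels = stack.reshape(bands, -1).T     # one row per pixel, one column per band
valid = np.all(np.isfinite(pixels), axis=1) & np.any(pixels != 0, axis=1)  # crude nodata screen

# Unsupervised classification: let k-means find 6 spectral classes.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=42)
labels = np.full(pixels.shape[0], -1, dtype=np.int16)
labels[valid] = kmeans.fit_predict(pixels[valid])

# Write the class raster back out with the same georeferencing.
profile.update(count=1, dtype="int16", nodata=-1)
with rasterio.open("landsat_kmeans_classes.tif", "w", **profile) as dst:
    dst.write(labels.reshape(rows, cols), 1)
```

The resulting classes still have to be related to actual ground features (wetlands, developed areas, forest, etc.), exactly as the quoted text notes for unsupervised classification.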
    1 point