All Activity

  1. Yesterday
  2. Me too, I can't get the exercise files and scripts.
  3. Earlier
  4. Cheers: Does anyone have the answers to the exercises in Laura Tateosian's book, Python for ArcGIS, or know where to get them? Thanks.
  5. This first image from NASA’s James Webb Space Telescope is the deepest and sharpest infrared image of the distant universe to date. Known as Webb’s First Deep Field, this image of galaxy cluster SMACS 0723 is overflowing with detail. Thousands of galaxies – including the faintest objects ever observed in the infrared – have appeared in Webb’s view for the first time. This slice of the vast universe covers a patch of sky approximately the size of a grain of sand held at arm’s length by someone on the ground. President Joe Biden unveiled this image of galaxy cluster SMACS 0723 during a White House event on Monday, July 11. (NASA)

The $10bn James Webb Space Telescope (JWST), launched on 25 December last year, is billed as the successor to the famous Hubble Space Telescope. It will make all sorts of observations of the sky, but it has two overarching goals. One is to take pictures of the very first stars to shine in the Universe more than 13.5 billion years ago; the other is to probe far-off planets to see if they might be habitable. One of the topics to be discussed will touch on that second overarching goal: the study of planets outside our Solar System. Webb has analysed the atmosphere of WASP-96 b, a giant planet located more than 1,000 light-years from Earth, and will tell us about the chemistry of that atmosphere. WASP-96 b orbits far too close to its parent star to sustain life, but it's hoped that one day Webb might spy a planet with gases in its air similar to those that shroud the Earth - a tantalising prospect that might hint at the presence of biology. (BBC)

Watch the full image reveal live on YouTube.
  6. That's not all: this is a major release, and it requires .NET 6. There's a new License Manager too 😉 ... but there will be workarounds. There's a new start page and a Learn page. Geoprocessing tools will display an information tip letting you know when a selection or other filter is applied to input layers, along with the number of records that will be processed. You will also see tool parameter memory and autofill for commonly used tools accessed from the ribbon and context menus. There's a Color Vision Deficiency simulator. Then there is the SAR toolset, Neo4j support to link charts and data from NoSQL databases, support for more CAD and BIM formats, and so on.
  7. Highlights

Now that you’re caught up to speed and ready to run the latest release of ArcGIS Pro, here are some of our favorite features that we are excited to bring you.

Package Manager
The Package Manager page allows you to manage conda environments for use within ArcGIS Pro. Formerly identified as the Python page, the Package Manager page now supports the upgrade of conda environments you’ve created in previous versions of ArcGIS Pro to the current version, the repair of broken environments, and the renaming of existing environments.

Add maps to reports
You can now add a map to a report. Maps that you add to the report header or footer are static. You can activate the map frame to adjust the map extent or scale. Maps that you add to a group header, group footer, or details subsection are dynamic. In the report view, the map frame of a dynamic map cannot be activated; however, the exported result updates in scale and extent to reflect the feature or features included in that subsection.

Export presets
You can create export presets for maps and layouts in ArcGIS Pro. Export presets save all the settings for a particular export type. When you export a map or layout, you can select a default preset or a custom preset you created. This allows for a faster and more consistent export experience.

ArcGIS Knowledge
If you have configured an ArcGIS Enterprise 11.0 Knowledge Server appropriately, you can create a new investigation and knowledge graph using a Neo4j database as a NoSQL data store. A new Geographic layout is available for link charts. Entities in the link chart are positioned on a map using their spatial geometry. Spatial data can also be added to the link chart, and a basemap can be used to provide context for the knowledge graph’s spatial entities.

source: https://www.esri.com/arcgis-blog/products/arcgis-pro/announcements/whats-new-in-arcgis-pro-3-0/
  8. Yes, a great drone but the problem is that DJI has yet to release the SDK for the Mavic 3 so 3rd party apps such as Drone Deploy won't work. So at the moment there is no auto-waypoint based method to fly mapping with this drone.
  9. It has started! Copilot has started charging $10/month after the 60-day trial. ☹️
  10. Nice update. My office uses land cover data a lot for fast technical advice on landslides. Thanks for the heads-up.
  11. The World Resources Institute and Google have announced Dynamic World, 10 m resolution global land cover data powered by Google Earth Engine and AI Platform. Dynamic World is a 10 m near-real-time LULC dataset that includes nine classes and is available for the Sentinel-2 L1C collection from 2015 until today.
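For context, Dynamic World publishes nine per-pixel class probabilities, and the discrete land-cover label for a pixel is simply the most probable class. A minimal plain-Python sketch of that argmax step, assuming the nine class names from the published dataset (the function and the example values are illustrative only, not the Earth Engine API):

```python
# The nine Dynamic World classes, in the order published with the dataset.
DW_CLASSES = [
    "water", "trees", "grass", "flooded_vegetation", "crops",
    "shrub_and_scrub", "built", "bare", "snow_and_ice",
]

def dominant_class(probabilities):
    """Return the class name with the highest probability for one pixel.

    `probabilities` is a list of nine floats, one per class, in the
    order of DW_CLASSES (they need not sum exactly to 1).
    """
    if len(probabilities) != len(DW_CLASSES):
        raise ValueError("expected one probability per class")
    best = max(range(len(DW_CLASSES)), key=lambda i: probabilities[i])
    return DW_CLASSES[best]

# Hypothetical pixel: mostly trees, some grass.
pixel = [0.01, 0.62, 0.20, 0.01, 0.05, 0.06, 0.02, 0.02, 0.01]
print(dominant_class(pixel))  # trees
```

In the real dataset this reduction is done for you: the collection ships a precomputed label band alongside the probability bands.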
  12. Hi, I haven't logged in for a long time and I'd like to regularize my account status. Thanks in advance.
  13. The images above, released by the James Webb Space Telescope (JWST) team, aren’t officially ‘first light’ images from the new telescope, but in a way, it feels like they are. These stunning views provide the initial indications of just how powerful JWST will be, and just how much infrared astronomy is about to improve. The images were released following the completion of the long process to fully focus the telescope’s mirror segments. Engineers are saying JWST’s optical performance is “better than the most optimistic predictions,” and astronomers are beside themselves with excitement. The astronomers and engineers actually seem astounded by how good JWST’s resolution is turning out to be. The first official image from JWST will be released on July 12. https://scitechdaily.com/comparing-the-incredible-webb-space-telescope-images-to-other-infrared-observatories/
  14. ESRI Job Site: https://www.esri.com/en-us/about/careers/job-search
  15. For most uses, Google Maps is a flat, 2D app, and if your device can handle more graphics and a bit more data, you can fire up the Google Earth 3D data set and get 3D buildings. At Google I/O, Google announced a new level that turns the graphics slider way, way up on Google Maps: Immersive View. When exploring an area in Google Maps, the company says Immersive View will make it "feel like you’re right there before you ever set foot inside."

The video for this feature is wild. It basically turns Google Maps into a 3D version of SimCity with AAA video game graphics. There are simulated cars that drive through the roads, and birds fly through the sky. Clouds pass overhead and cast shadows on the world. The weather is simulated, and water has realistic reflections that change with the camera. London even has an animated Ferris wheel that spins around. Google can't possibly be tracking things like the individual positions of birds (yet!), but a lot of this is real data. The cars represent the current traffic levels on a given street. The weather represents the actual weather, even for historical data. The sun moves in real time with the time of day. Another part of the video shows flying into a business that also has a whole 3D layout.

All of this is possible thanks to combining the massive data sets from Google Maps, Google Earth, and Street View, but even then, this level of fidelity will be very limited by the initial data sets. Google says that at first, Immersive View will "start... rolling out in Los Angeles, London, New York, San Francisco, and Tokyo later this year with more cities coming soon." The company says that "Immersive view will work on just about any phone and device," but just like the 3D building mode, this will be an optional toggle.
  16. Massive earthquakes don’t just move the ground — they make speed-of-light adjustments to Earth’s gravitational field. Now, researchers have trained computers to identify these tiny gravitational signals, demonstrating how the signals can be used to mark the location and size of a strong quake almost instantaneously. It’s a first step to creating a very early warning system for the planet’s most powerful quakes, scientists report May 11 in Nature. Such a system could help solve a thorny problem in seismology: how to quickly pin down the true magnitude of a massive quake immediately after it happens, says Andrea Licciardi, a geophysicist at the Université Côte d’Azur in Nice, France. Without that ability, it’s much harder to swiftly and effectively issue hazard warnings that could save lives. As large earthquakes rupture, the shaking and shuddering sends seismic waves through the ground that appear as large wiggles on seismometers. But current seismic wave–based detection methods notoriously have difficulty distinguishing between, say, a magnitude 7.5 and magnitude 9 quake in the few seconds following such an event. That’s because the initial estimations of magnitude are based on the height of seismic waves called P waves, which are the first to arrive at monitoring stations. Yet for the strongest quakes, those initial P wave amplitudes max out, making quakes of different magnitudes hard to tell apart. But seismic waves aren’t the earliest signs of a quake. All of that mass moving around in a big earthquake also changes the density of the rocks at different locations. Those shifts in density translate to tiny changes in Earth’s gravitational field, producing “elastogravity” waves that travel through the ground at the speed of light — even faster than seismic waves. Such signals were once thought to be too tiny to detect, says seismologist Martin Vallée of the Institut de Physique du Globe de Paris, who was not involved in the new study. 
Then in 2017, Vallée and his colleagues were the first to report seeing these elastogravity signals in seismic station data. Those findings proved that “you have a window in between the start of the earthquake and the time at which you receive the [seismic] waves,” Vallée says. But researchers still pondered over how to turn these elastogravity signals into an effective early warning system. Because gravity wiggles are tiny, they are difficult to distinguish from background noise in seismic data. When scientists looked retroactively, they found that only six mega-earthquakes in the last 30 years have generated identifiable elastogravity signals, including the magnitude 9 Tohoku-Oki earthquake in 2011 that produced a devastating tsunami that flooded two nuclear power plants in Fukushima, Japan (SN: 3/16/11). (A P wave–based initial estimate of that quake’s magnitude was 7.9.) That’s where computers can come in, Licciardi says. He and his colleagues created PEGSNet, a machine learning network designed to identify “Prompt ElastoGravity Signals.” The researchers trained the machines on a combination of real seismic data collected in Japan and 500,000 simulated gravity signals for earthquakes in the same region. The synthetic gravity data are essential for the training, Licciardi says, because the real data are so scarce, and the machine learning model requires enough input to be able to find patterns in the data. Once trained, the computers were then given a test: Track the origin and evolution of the 2011 Tohoku quake as though it were happening in real time. The result was promising, Licciardi says. The algorithm was able to accurately identify both the magnitude and location of the quake five to 10 seconds earlier than other methods. This study is a proof of concept and hopefully the basis for a prototype of an early warning system, Licciardi says. “Right now, it’s tailored to work … in Japan. 
We want to build something that can work in other areas” known for powerful quakes, including Chile and Alaska. Eventually, the hope is to build one system that can work globally. The results show that PEGSNet has the potential to be a powerful tool for early earthquake warnings, particularly when used alongside other earthquake-detection tools, Vallée says. Still, more work needs to be done. For one thing, the algorithm was trained to look for a single point for an earthquake’s origin, which is a reasonable approximation if you’re far away. But close up, the origin of a quake no longer looks like a point; it’s actually a larger region that has ruptured. If scientists want an accurate estimate of where a rupture happened, the machines will need to look for regions, not points, Vallée adds. Bigger advances could come in the future as researchers develop much more sensitive instruments that can detect even tinier quake-caused perturbations to Earth’s gravitational field while filtering out other sources of background noise that might obscure the signals. Earth, Vallée says, is a very noisy environment, from its oceans to its atmosphere. “It’s a bit the same as the challenge that physicists face when they try to observe gravitational waves,” Vallée says. These ripples in spacetime, triggered by colossal cosmic collisions, are a very different type of gravity-driven wave (SN: 2/11/16). But gravitational wave signals are also dwarfed by Earth’s noisiness — in this case, microtremors in the ground.
  17. You want to combine them? Use Merge in ArcToolbox; you can merge them all at once. Or, if you want to do it manually, start an edit session and do it one by one: select two lines, then click Merge with the Edit tool.
  18. How can multiple line features be combined into a single feature in ArcMap? I can only select and combine them manually; how can they be merged automatically? Dissolve doesn't do it.
  19. All done, please be more active. And to all: being active is not just logging in and silently lurking; it's posting something or replying to a topic.
  20. Please activate my account. Thank you in advance!!
  21. To Admin, Please reactivate my account. Thank you
  22. Governments and businesses across the world are pledging to adopt more sustainable and equitable practices. Many are also working to limit activities that contribute to climate change. To support these efforts, Esri, the global leader in location intelligence, in partnership with Impact Observatory and Microsoft, is releasing a globally consistent 2017–2021 global land-use and land-cover map of the world based on the most up-to-date 10-meter Sentinel-2 satellite data. In addition to the new 2021 data, 10-meter land-use and land-cover data for 2017, 2018, 2019, and 2020 is included, illustrating five years of change across the planet. This digital rendering of earth’s surfaces offers detailed information and insights about how land is being used. The map is available online to more than 10 million users of geographic information system (GIS) software through Esri’s ArcGIS Living Atlas of the World, the foremost collection of geographic information and services, including maps and apps.

“Accurate, timely, and accessible maps are critical for understanding the rapidly changing world, especially as the effects of climate change accelerate globally,” said Jack Dangermond, Esri founder and president. “Planners worldwide can use this map to better understand complex challenges and take a geographic approach to decisions about food security, sustainable land use, surface water, and resource management.”

Esri released a 2020 global land-cover map last year as well as a high-resolution 2050 global land-cover map, showing how earth’s land surfaces might look 30 years from now. With the planned annual releases, users will have the option to make year-to-year comparisons for detecting change in vegetation and crops, forest extents, bare surfaces, and urban areas. These maps also provide insights about locations with distinctive land use/land cover, as well as human activity affecting them. National government resource agencies use land-use/land-cover data as a basis for understanding trends in natural capital, which helps define land-planning priorities and determine budget allocations.

Esri’s map layers were developed with imagery from the European Space Agency (ESA) Sentinel-2 satellite, with machine learning workflows by Esri Silver partner Impact Observatory and incredible compute resources from longtime partner Microsoft. The Sentinel-2 satellite carries a range of technologies including radar and multispectral imaging instruments for land, ocean, and atmospheres, enabling it to monitor vegetation, soil and water cover, inland waterways, and coastal areas.

“World leaders need to set and achieve ambitious targets for sustainable development and environmental restoration,” said Steve Brumby, Impact Observatory cofounder and CEO. “Impact Observatory [and] our partners Esri and Microsoft are once again first to deliver an annual set of global maps at unprecedented scale and speed. These maps of changing land use and land cover provide leaders in governments, industry, and finance with a new AI [artificial intelligence]-powered capability for timely, actionable geospatial insights on demand.”

Esri and Microsoft have released this 10-meter-resolution time-series map under a Creative Commons license to encourage broad adoption and ensure equitable access for planners working to create a more sustainable planet. Users can manipulate the map layers and other data layers with GIS software to create more dynamic visualizations. In addition to being freely available in ArcGIS Online as a map service, these resources are also available for download and viewing. To explore the new 2021 global land-use/land-cover map, visit livingatlas.arcgis.com/landcover.
  23. Dear Admin, requesting to reactivate my account. Thank you.