
All Activity


  1. Last week
  2. When two Finnair planes flying into Estonia recently had to divert in quick succession and return to Helsinki, the cause wasn’t a mechanical failure or inclement weather—it was GPS denial. GPS denial is the deliberate interference of the navigation signals used and relied on by commercial aircraft. It’s not a new phenomenon: The International Air Transport Association (IATA) has long provided maps of regions where GPS was routinely unavailable or untrusted. However, concern is growing rapidly as conflict spreads across Europe, the Middle East, and Asia, and GPS jamming and spoofing become weapons of economic and strategic influence. Several adversarial nations have been known to use false (spoofed) GPS signals to interfere with air transit, shipping, and trade, or to disrupt military logistics in conflict zones. And recent discussions of anti-satellite weapons renewed fears of deliberate actions designed to wreak economic havoc by knocking out GPS. GPS has become so ubiquitous in our daily lives that we hardly think about what happens when it’s not available. A GPS outage would result in many online services becoming unavailable (these rely on GPS-based network synchronization), failure of in-vehicle satnav, and no location-based services on your mobile phone. Analyses in the U.S. and U.K. have both estimated the economic cost of an outage at approximately $1 billion per day—but the strategic impacts can be even more significant, especially in a conflict. The saying is that infantry wins battles, but logistics wins wars. It’s almost unimaginable to operate military logistics supply chains without GPS, given the heavy reliance on synchronized communications networks, general command and control, and vehicle and materiel positioning and tracking. All of these centrally rely on GPS and all are vulnerable to disruption.
Most large military and commercial ships and aircraft carry special GPS backups for navigation because there was, in fact, a time before GPS. GPS is not available in all settings—underground, underwater, or at high latitudes. The GPS alternatives rely on signals that can be measured locally (for instance, motion or magnetic fields as used in a compass), so a vessel can navigate even when GPS is unavailable or untrusted. For example, inertial navigation uses special accelerometers that measure vehicle motion, much like the ones that help your mobile phone reorient when you rotate it. Measuring how the vehicle is moving and applying Newton’s laws allows you to calculate your likely position after some time. Other “alt-PNT” approaches leverage measurements of magnetic and gravitational fields to navigate against a known map of these variations near the Earth’s surface. Plus, ultrastable locally deployed clocks can ensure communications networks remain synchronized during GPS outages (comms networks typically rely on GPS timing signals to remain synchronized). Nonetheless, we rely on GPS because it’s simply much better than the backups. Focusing specifically on positioning and navigation, achieving good performance with conventional alternatives typically requires you to significantly increase system complexity, size, and cost, limiting deployment options on smaller vehicles. Those alternative approaches to navigation are also unfortunately prone to errors due to the instability of the measurement equipment in use—signals just gradually change over time, with varying environmental conditions, or with system age. We keep today’s alternatives in use to provide a backstop for critical military and commercial applications, but the search is on for something significantly better than what’s currently available. That something looks to be quantum-assured navigation, powered by quantum sensors.
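The dead-reckoning principle behind inertial navigation can be sketched in a few lines: integrate the measured acceleration once to get velocity, then again to get position. The values below are purely illustrative; real inertial systems work in three dimensions with gyroscopes and careful error modelling.

```python
import numpy as np

# Illustrative 1-D dead reckoning from accelerometer samples.
dt = 0.1                                             # seconds per sample
accel = np.array([1.0, 1.0, 0.0, 0.0, -1.0, -1.0])   # m/s^2 (made-up readings)

velocity = np.cumsum(accel * dt)     # first integral: velocity (m/s)
position = np.cumsum(velocity * dt)  # second integral: position (m)
print(position[-1])                  # final position estimate in metres
```

Because position is a double integral of acceleration, a small constant sensor bias b grows into a position error of roughly b·t²/2, which is why instrument stability dominates long GPS-denied missions and why the unchanging behavior of atoms is so attractive.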
Quantum sensors rely on the laws of nature to access signatures that were previously out of reach, delivering both extreme sensitivity and stability. As a result, quantum-assured navigation can deliver defense against GPS outages and enable transformational new missions. The most advanced quantum-assured navigation systems combine multiple sensors, each picking up unique environmental signals relevant to navigation, much the way autonomous vehicles combine lidar, cameras, ultrasonic detectors, and more to deliver the best performance. This starts with a new generation of improved quantum inertial navigation, but quantum sensing allows us to go further by accessing new signals that were previously largely inaccessible in real-world settings. While it may be surprising, Earth’s gravity and magnetic fields are not constant everywhere on the planet’s surface. We have maps of tiny variations in these quantities that have long been used for minerals prospecting and even underground water monitoring. We can now repurpose these maps for navigation. We’re building a new generation of quantum gravimeters, magnetometers, and accelerometers—powered by the quantum properties of atoms—to be sensitive and compact enough to measure these signals on real vehicles. The biggest improvements come from enhanced stability. Atoms and subatomic particles don’t change, age, or degrade—their behavior is always the same. That’s something we are now primed to exploit. Using a quantum-assured navigation system, a vehicle may be able to position itself precisely even when GPS is not available for very long periods. Not simply hours or days, as is achievable with the best military systems today, but weeks or months. In quantum sensing, we have already achieved quantum advantage—when a quantum solution decidedly beats its conventional counterparts. The task at hand is now to take these systems out of the lab and into the field in order to deliver true strategic advantage. That’s no mean feat.
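The gravity map-matching idea can be illustrated with a toy one-dimensional example: slide a short window of measured gravity anomalies along a stored map and pick the offset that fits best. The data here is synthetic; real systems match against 2-D maps with far more careful statistics.

```python
import numpy as np

# Toy 1-D gravity map matching by sliding-window least squares.
rng = np.random.default_rng(0)
gravity_map = rng.normal(0.0, 1.0, 200)   # stored anomaly map (arbitrary units)
true_offset = 57
measured = gravity_map[true_offset:true_offset + 20] + rng.normal(0.0, 0.05, 20)

errors = [np.sum((gravity_map[i:i + 20] - measured) ** 2)
          for i in range(len(gravity_map) - 20)]
best = int(np.argmin(errors))
print(best)  # recovered position along the map
```

The sensor only needs to measure the local field; the stored map supplies the absolute reference, which is why this approach needs no external signal to jam or spoof.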
Real platforms are subject to interference, harsh conditions, and vibrations that conspire to erase the benefits we know quantum sensors can provide. In recent cutting-edge research, new AI-powered software can be used to deliver the robustness needed to put quantum sensors onto real moving platforms. The right software can keep the systems functional even when they’re being shaken and subjected to interference on ships and aircraft. To prevent a repeat of the Finnair event, real quantum navigation systems are now starting to undergo field testing. Our peers at Vector Atomic recently ran maritime trials of a new quantum optical clock. The University of Birmingham published measurements with a portable gravity gradiometer in the field. At Q-CTRL, we recently announced the world’s first maritime trial of a mobile quantum dual gravimeter for gravity map matching at a conference in London. My team is excited to now work with Airbus, which is investigating software-ruggedized quantum sensors to provide the next generation of GPS backup on commercial aircraft. Our full quantum navigation solutions are about to commence flight safety testing with the first flights later in the year, following multiple maritime and terrestrial trials. With a new generation of quantum sensors in the field, we’ll be able to ensure the economy keeps functioning even in the event of a GPS outage. From autonomous vehicles to major shipping companies and commercial aviation, quantum-assured navigation is the essential ingredient in providing resilience for our entire technology-driven economy.
  3. Earlier
  4. A Falcon 9 successfully launched an Earth science mission for Europe and Japan May 28 as part of the European Space Agency’s ongoing, if temporary, reliance on SpaceX for space access. The Falcon 9 lifted off from Vandenberg Space Force Base in California at 6:20 p.m. Eastern. The payload, the Earth Cloud Aerosol and Radiation Explorer (EarthCARE) spacecraft, separated from the upper stage about 10 minutes after liftoff. Simonetta Cheli, director of Earth observation programs at ESA, said in a post-launch interview that controllers were in contact with the spacecraft. “It is all nominal and on track.” Spacecraft controllers will spend the weeks and months ahead checking out the spacecraft’s instruments and calibrating them, she said. That will allow the first release of science data from EarthCARE around the end of this year or early next year. EarthCARE is an 800-million-euro ($870 million) ESA-led mission to study clouds and aerosols in the atmosphere. The spacecraft carries four instruments, including a cloud profiling radar provided by the Japanese space agency JAXA at a cost of 8.3 billion yen ($53 million). JAXA dubbed the spacecraft Hakuryu or “White Dragon” because of the spacecraft’s appearance. The 2,200-kilogram spacecraft, flying in a sun-synchronous orbit at an altitude of 393 kilometers, will collect data on clouds and aerosols in the atmosphere, along with imagery and measurements of reflected sunlight and radiated heat. That information will be used for atmospheric science, including climate and weather models. “EarthCARE is there to study the effect of clouds and aerosols on the thermal balance of the Earth,” said Dirk Bernaerts, ESA’s EarthCARE project manager, at a pre-launch briefing May 21. “It’s very important to observe them all together at the same location at the same time. 
That is what is unique about this spacecraft.” Other spacecraft make similar measurements, including NASA’s Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) spacecraft launched in February. “The observation techniques are different,” he said. “We observe the same thing but observe slightly different aspects of the clouds and aerosols.” He added that EarthCARE would use PACE data to help with calibration and validation of its observations. Development of EarthCARE took about two decades and resulted in cost growth that Cheli estimated at the pre-launch briefing to be 30%. Maximilian Sauer, EarthCARE project manager at prime contractor Airbus, said several factors contributed to the delays and overruns, including technical issues with the instruments as well as effects of the pandemic. One lesson learned from EarthCARE, Cheli said in the post-launch interview, was the need for “strict management” of the project, which she said suffered from challenges of coordinating work between agencies and companies. The mission also underscored the importance of strong support from member states as it worked to overcome problems, she added. Another factor in EarthCARE’s delay was a change in launch vehicles. EarthCARE was originally slated to go on a Soyuz rocket, but ESA lost access to the vehicle after Russia’s invasion of Ukraine. The mission was first moved to Europe’s Vega C, but ESA decided last June to launch it instead on a Falcon 9, citing delays in returning that rocket to flight as well as modifications to the rocket’s payload fairing that would have been needed to accommodate EarthCARE. Technically, the shift in launch vehicles was not a major problem for the mission. “Throughout the changes in the launchers we did not have to change the design of the spacecraft,” said Bernaerts. He said that, during environmental tests, engineers put the spacecraft through conditions simulating different launch vehicles to prepare for the potential of changing vehicles.
“From the moment we knew that Soyuz was not available, we have been looking at how stringently we could test the spacecraft to envelope other candidate launchers. That’s what we did and that worked out in the end.” EarthCARE is the second ESA-led mission to launch on a Falcon 9, after the Euclid space telescope last July. Another Falcon 9 will launch ESA’s Hera asteroid mission this fall. “We had a good experience with Euclid last year,” said Josef Aschbacher, ESA director general, in a post-launch interview. “Our teams and the SpaceX teams are working together very well.” The use of the Falcon 9 is a stopgap until Ariane 6 enters service, with a first launch now scheduled for the first half of July, and Vega C returns to flight at the end of the year. “I hear lots of questions about why we’re launching with Falcon and not with Ariane, and it’s really good to see the Ariane 6 inaugural flight coming closer,” he said. Those involved with the mission were simply happy to finally get the spacecraft into orbit. “There is a feeling of relief and happiness,” Cheli said after the launch. “This is an emotional roller coaster,” said Thorsten Fehr, EarthCARE mission scientist at ESA, on the agency webcast of the launch shortly after payload separation. “This is one of the greatest moments in my professional life ever.”
  5. Maker Ilia Ovsiannikov is working on a friendly do-it-yourself robot kit — and as part of that work has released a library to make it easier to use a range of lidar sensors in your Arduino sketches. "I have combined support for various spinning lidar/LDS sensors into an Arduino LDS library with a single platform API [Application Programming Interface]," Ovsiannikov explains. "You can install this library from the Arduino Library Manager GUI. Why support many lidar/LDS sensors? The reason is to make the hardware — supported by [the] Kaia.ai platform — affordable to as many prospective users as possible. Some of the sensors [supported] are sold as used robot vacuum cleaner spare parts and cost as low as $16 or so (including shipping)." The library delivers support for a broad range of lidar sensors from a unified API, Ovsiannikov explains, meaning it's not only possible to get started quickly but to switch sensors mid-project — should existing sensors become unavailable, or pricing shift to favor a different model. It also adds a few neat features of its own, including pulse-width modulation (PWM) control of lidar motors that lack their own control system, using optional adapter boards. While the library is usable standalone, and even able to perform real-time angle and distance computation directly on an Arduino microcontroller, Ovsiannikov has also published a companion package to tie it in to the Robot Operating System 2 (ROS2). "[The] Kaia.ai robot firmware forwards LDS raw data — obtained from the Arduino LDS library — to a PC running ROS2 and micro-ROS," he explains. "The ROS2 PC kaiaai_telemetry package receives the raw LDS data, decodes that data and publishes it to the ROS2 /scan topic." More information on the library is available in Ovsiannikov's blog post, while the library itself is available on GitHub under the permissive Apache 2.0 license. More information: https://kaia.ai/blog/arduino-lidar-library/ https://github.com/kaiaai/LDS
  6. This has been discussed in their forum: https://www.agisoft.com/forum/index.php?topic=2420.0
  7. Thank you very much Lurker for keeping this community well informed. Do you know if Agisoft is going to implement this solution or if they already have it? Sincerely, Neo.
  8. Traditionally, surveying relied heavily on manual measurements and ground-based techniques, which were time-consuming, labour-intensive and often limited in scope. With the relatively recent emergence of photogrammetry, surveying professionals gained access to powerful tools that streamline processes, improve accuracy and unlock new possibilities in data collection and analysis. How videogrammetry works Videogrammetry is effectively an extension of photogrammetry; the mechanics are grounded in similar principles. Photogrammetry involves deriving accurate three-dimensional information from two-dimensional images by analysing their geometric properties and spatial relationships. The core of videogrammetry is to split the video footage into still images while ensuring sufficient overlap and image quality. The subsequent workflow is almost the same as in photogrammetry. The quality of the output depends heavily on image resolution, frames per second (FPS) and stabilization. Integration with hardware and software Central to videogrammetry is the camera system used to capture video footage, along with any supporting GPS devices. Ground-based videogrammetry may utilize smartphone cameras or handheld digital cameras with video capabilities. While many smartphones and digital cameras have built-in GPS capabilities that geotag photographs with location data, their accuracy is at best 2.5 m, which is not always sufficient for professional surveying. For this reason, when using a smartphone camera, many surveyors opt for an external real-time kinematic (RTK) antenna with 2 cm accuracy. Attaching it to their smartphones enables them to receive correction data from a local NTRIP provider. This is a far more convenient option than placing and measuring ground control points (GCPs) on the terrain in combination with a GNSS device.
Some of the well-known smartphone RTK products on the market today include REDCatch’s SmartphoneRTK, ArduSimple’s ZED-F9P RTK receiver, and Pix4D’s ViDoc RTK. It is important to note that while most smartphones can geotag photographs, not all can geotag video footage. Moreover, those smartphones that can geotag video can only geotag the first frame of the video. For this reason, users may require additional apps (e.g. PIX4Dcatch: 3D scanner) that can embed location data into the video’s metadata so that it can be used for surveying and mapping purposes. While non-geotagged videos can still be used for 3D model creation in various software solutions, it is recommended to opt for an RTK receiver as a minimum prerequisite for professional applications. At 3Dsurvey, utilization of Google’s Pixel 7a smartphone paired with REDcatch’s external RTK is the preferred option. Later this year, 3Dsurvey is set to release a ScanApp that it has developed to embed RTK correction data into video file metadata, enabling automatic georeferencing for videogrammetry projects. Examples of videogrammetry project approaches Non-geotagged (using any smartphone or camera) This can be ideal for the 3D documentation of cultural heritage projects. However, this approach lacks the spatial accuracy necessary for tasks such as outdoor mapping or infrastructure monitoring, which demand precise georeferencing. Accurately geotagged using external RTK (using a smartphone) Accurate geotagging using a smartphone equipped with an external RTK GNSS receiver ensures that the resulting 3D models maintain high spatial fidelity. Therefore, this approach is suitable for applications such as land surveying and small-scale construction monitoring projects where precise positioning is crucial. Examples include mapping manholes, pipes, low-level material piles, dig sites and cables for telecommunication or electricity. 
Within a larger photogrammetry/Lidar project For situations demanding the highest level of accuracy, videogrammetry can fill a gap or add another perspective to the aerial dataset obtained using other technologies, such as ground-level videogrammetry in combination with aerial Lidar (which lacks oblique views). Videogrammetry can also prove invaluable on site while drone mapping, such as when trees obstruct the flight path or if the project requires capturing details facing upwards. Similar to drone workflows, strategically placed and precisely measured GCPs can significantly improve the overall precision of the generated 3D model. Since videogrammetry usually involves capturing data from low angles, consider using AprilTag targets for superior oblique detection. Challenges and considerations Videogrammetry offers immense potential for various applications, yet its implementation comes with a set of challenges. Filming excessive footage can result in software inaccuracies, leading to duplicate surfaces in the 3D model. Therefore, it is important to carefully consider the path taken when filming. Some areas may be challenging to film, but if those areas are not captured in the video, they cannot be included when reconstructing the 3D model. Filming while navigating through obstacles, especially on construction sites, requires caution and precision on the part of the user. This sometimes gets in the way of creating a perfect video. Weather conditions such as puddles and sunlight can affect data accuracy by creating reflective surfaces and casting shadows, respectively. Filming areas obstructed by roofs, trees or walls can degrade the RTK signal, leading to inaccuracies in the final model. Tips for accurate data capture The quality of the output depends on a number of factors, including how the data is captured. 
The following basic principles can help users to obtain the necessary coordinates when filming so that the 3D model will be as realistic as possible: Move slowly and steadily: To obtain sharp images, maintain slow and smooth movements. This is especially crucial in poor light conditions when the shutter speed is low and video frames are more susceptible to blur. Rotate slowly and move while turning: Just like when towing a trailer, it is necessary to move back and forth rather than trying to turn on the spot. Don’t ‘wall paint’ when scanning vertical surfaces: Standing in one place while tilting the device up and down will generate a lot of images, but they will all have the same coordinates. Instead, move in a lateral direction while recording at different heights. Film in connected/closed loops: Try to ensure that the filming ends precisely back at the starting point. Advantages of videogrammetry Videogrammetry offers significant advantages in surveying, particularly when smartphones are leveraged as data capture devices. The portability and convenience of smartphones enables swift, efficient and accessible data collection in a wide range of situations, making it possible to document areas that are either small, rapidly changing, or require close-up details in hard-to-reach places. Moreover, unlike traditional methods requiring specialized equipment and expertise, smartphone videogrammetry empowers more professionals to capture and reconstruct 3D data. The standout feature of videogrammetry is that it eliminates overlap concerns, since the video is continuously shot at approximately 30FPS. This accessibility paves the way for even broader surveying applications. Integrating videogrammetry into a data collection toolkit promises to accelerate project timelines, streamline workflows and above all improve responsiveness, since a smartphone is always on site. This makes videogrammetry a cost-effective solution for surveying tasks. 
Videogrammetry in practice These three case studies illustrate the practical application of videogrammetry in various situations, ranging from a simple scenario to a mid-sized construction site. A Pixel smartphone with RTK and the 3Dsurvey ScanApp were used in all cases. 1. Pile volume calculation Photogrammetry is commonly used to calculate the volume of material piles, but the pre-flight preparation and planning can be time-consuming. Moreover, drones are bulky and less portable than a smartphone. Therefore, videogrammetry – with a handheld RTK antenna connected to a smart phone – can offer a much faster and simpler alternative. In this project to capture a small pile of material, the smartphone was simply held up high, tilted down and encircled the heap. During a five-minute site visit, a 75-second video was recorded, from which 152 frames were extracted. The processing time amounted to 30 minutes. 2. Preserving cultural heritage Archaeological excavation sites and statues that require 3D documentation are often located in crowded urban areas which may be subject to strict drone regulations. Some culturally significant items may be located indoors, such as in museums, where photogrammetry is challenging. Moreover, using ground surveying equipment like laser scanners requires highly technical knowledge. This can be an expensive option and therefore unsuitable for such projects. In a project to capture a complete and accurate 3D scan of a dragon statue, a total of 20 minutes were spent on site. 226 frames were extracted from 113 seconds of video. The subsequent processing time was one hour. 3. Underground infrastructure project Videogrammetry can be successfully used in the context of underground construction and engineering projects, such as when laying pipes into a trench. Compared with documenting the site with traditional equipment, using a smartphone equipped with RTK technology makes the process of documenting remarkably efficient. 
Just as with drone photogrammetry, multiple mappings can be performed to track progress. As another advantage, videogrammetry makes it possible to get really close and record details that may be hidden from the top-down aerial view. In support of a construction and engineering project for the installation of underground fuel tanks, videogrammetry was used to document and extract the exact measurements, monitor the width and the depth, calculate the volume and extract profile lines. A 15-minute site visit was sufficient to record 87 seconds of video, leading to 175 extracted frames. The processing time amounted to 45 minutes. Conclusion While it is not a replacement for established surveying techniques like photogrammetry and laser scanning, videogrammetry has emerged as a valuable addition to the surveyor’s toolkit. Overall, this technology offers professionals significant gains in convenience, efficiency and flexibility, because surveyors can capture data using just a smartphone. This allows for faster and more accessible data capture across versatile situations, and provides cost-effective solutions adaptable to specific needs thanks to seamless integration with existing surveying workflows. While filming with a smartphone currently has limitations that present challenges, continuous advancements in hardware, software and best practices are steadily improving the accuracy and reliability of data collection. This ongoing evolution means that videogrammetry has the potential to contribute to better-informed decision-making and become an indispensable tool for the modern construction professional. Source: gim-international
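The frame-extraction step at the heart of the videogrammetry workflow amounts to choosing a sampling interval from the video frame rate, the camera's ground footprint, and the desired overlap. A minimal sketch follows; the function and its figures are illustrative, not taken from the article.

```python
# Illustrative helper: how many video frames to advance between extracted
# images, given frame rate, travel speed, footprint, and desired overlap.
def frame_interval(fps, speed_m_s, footprint_m, overlap=0.8):
    """Return the number of frames to skip between extracted images."""
    spacing_m = footprint_m * (1.0 - overlap)   # ground distance per image
    seconds_per_image = spacing_m / speed_m_s
    return max(1, round(fps * seconds_per_image))

# Walking at 1 m/s with a 5 m footprint and 80% overlap at 30 FPS:
print(frame_interval(30, 1.0, 5.0))  # one extracted frame every 30 video frames
```

At 30 FPS this keeps roughly one frame per second of slow walking; the case studies above extracted around two frames per second (for example, 152 frames from 75 seconds of video), consistent with a slightly closer footprint or higher overlap target.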
  9. Lurker

    Cluster polygons

    something like this? python - Polygon clustering - Geographic Information Systems Stack Exchange
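A minimal sketch of the centroid-based approach from that thread: represent each polygon by its centroid and cluster the centroids with DBSCAN. This assumes shapely and scikit-learn are installed; the `eps` distance threshold is an arbitrary example value.

```python
import numpy as np
from shapely.geometry import Polygon
from sklearn.cluster import DBSCAN

# Toy example: three squares, the first two overlapping, the third far away.
polygons = [
    Polygon([(0, 0), (1, 0), (1, 1), (0, 1)]),
    Polygon([(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]),
    Polygon([(10, 10), (11, 10), (11, 11), (10, 11)]),
]

# Cluster polygons whose centroids lie within eps of each other.
centroids = np.array([[p.centroid.x, p.centroid.y] for p in polygons])
labels = DBSCAN(eps=2.0, min_samples=1).fit_predict(centroids)
print(labels)  # nearby polygons share a cluster label
```

Centroids are a crude proxy for elongated shapes; for those, clustering on pairwise polygon distances (e.g. `p.distance(q)` with a precomputed metric) may work better.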
  10. Pedro

    Cluster polygons

    How can I cluster polygons using Python?
  11. Skyline's SGS has a new version that adds TEF editing features. Will there be a license update soon?

    For Skyline licensing in mainland China, could a WeChat contact be added? WeChat ID: 10303124. Thanks.

    SGS 8.1 download link: https://pan.baidu.com/s/1HcqySeBkay312kQT_PT4Hw?pwd=re6s (extraction code: re6s)
    -- shared via Baidu Netdisk (Super Member v8)

    1. Lengkongqi

      Where do you usually hand out the licenses? 😃

    2. tsingkong

      I've added you on QQ.

    3. tsingkong

      I only hand them out on this forum.

  12. What version of QGIS do you use right now?
  13. When tackling the classification of polygons by shape, seeking homework help from services like MyAssignmentHelp proves invaluable. They offer structured guidance on identifying polygon types based on sides and angles, ensuring clarity and accuracy in assignments. This assistance not only enhances understanding but also fosters confidence in handling geometric concepts effectively, benefiting students aiming for academic excellence.
  14. Hello everyone, I'm Brazilian and I'm translating via Google, sorry for any writing errors. I would like to know alternatives to QGIS's "Atlas". I currently use QGIS and load one map on the left and another on the right, and I also plot some information from columns of the attribute table. However, when I have many images (say, 1,000 on the left side and 1,000 on the right side), QGIS becomes extremely slow. I also use my own .png images to style my reports. Note: all the reports I generate are formatted as ".pdf". I'm looking for Python libraries that would let me create the kind of atlas I build in QGIS. Could you help me please? Report layout example: logos (my_logo.png and logo.png) in the header; MAP1 (2020 RASTER.TIF) on the left and MAP2 (2024 RASTER.TIF) on the right; below them, four {attribute table information field} entries.
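As a starting point outside QGIS, a multi-page PDF with two maps per page and attribute text can be assembled with matplotlib's PdfPages. This is a rough sketch only: the random arrays stand in for your 2020/2024 rasters, and the `features` records stand in for attribute-table rows.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no GUI needed
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

# Placeholder records standing in for rows of the attribute table.
features = [
    {"name": "Area 1", "info": "field value A"},
    {"name": "Area 2", "info": "field value B"},
]

with PdfPages("atlas.pdf") as pdf:
    for feat in features:
        fig, (ax_l, ax_r) = plt.subplots(1, 2, figsize=(11.7, 8.3))  # A4 landscape
        for ax, title in ((ax_l, "2020 RASTER"), (ax_r, "2024 RASTER")):
            ax.imshow(np.random.rand(64, 64), cmap="gray")  # replace with real raster
            ax.set_title(title)
            ax.axis("off")
        fig.suptitle(feat["name"])
        fig.text(0.5, 0.05, feat["info"], ha="center")
        pdf.savefig(fig)   # one page per feature
        plt.close(fig)
```

Each loop iteration writes one page and closes its figure, so a 1,000-feature atlas streams to disk without holding figures in memory. Logos can be placed with `fig.figimage`, and real GeoTIFFs can be read into arrays with a library such as rasterio.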
  15. Yes, the errors occur in every dataset I have downloaded for my area of interest. It affects a couple of adjacent areas as well, so it is not a one-off thing.
  16. Did the error occur in all the historical data you've checked?
  17. Is anyone else finding problems with the 30 m gridded water level data? I am seeing large smears of incorrect or null data at right angles to the satellite track. The values look like some kind of instrument or processing failure and occur in more or less the same locations in every acquisition I have downloaded, although the actual values do vary. I haven't seen any mention of this problem in the literature so far.
  18. Thank you very much for keeping us informed about the advances of Artificial Intelligence....
  19. Generative AI and 'text to GIS' are coming to ArcGIS Pro. GenAI is likely to take over many small-scale and basic analysis tasks, probably within 2-3 years. Here is a video of the ArcGIS ecosystem using the GenAI Assistant. https://mediaspace.esri.com/media/t/1_opret32t https://highearthorbit.com/articles/announcing-ai-assistants-for-arcgis/ And here is an updated roadmap for ArcGIS Pro: https://community.esri.com/t5/arcgis-pro-documents/arcgis-pro-roadmap-may-2024/ta-p/1419528/redirect_from_archived_page/true
  20. My previous post addresses your first question, actually 😁 I saw that you already got the equation for the band values and the depth, so it should be okay to apply it directly to get the depth value.
  21. Can you please address my question? Are there any previous steps required before the calculation of the models?
  22. Researchers at the University of Science and Technology of China (USTC) have developed a compact and lightweight single-photon LiDAR system that can be deployed in the air to generate high-resolution three-dimensional images with a low-power laser. The technology could be used for terrain mapping, environmental monitoring, and object identification, according to a press release. LiDAR, which stands for Light Detection And Ranging, is extensively used to determine geospatial information. The system uses light emitted by pulse lasers and measures the time taken by the reflected light to be received to determine the range, creating digital twins of objects and examining the surface of the earth. A common application of the system has been to help autonomous driving systems or airborne drones determine their environments. However, this requires an extended setup of LIDAR sensors, which is power-intensive. To minimize such sensors’ energy consumption, USTC researchers devised a single-photon lidar system and tested it in an airborne configuration. The single-photon lidar The single-photon lidar system is made possible by detection systems that can measure the small amounts of light given out by the laser when it is reflected. The researchers had to shrink the entire LiDAR system to develop it. It works like a regular LiDAR system when sending light pulses toward its targets. To capture the small amounts of light reflected, the team used highly sensitive detectors called single-photon avalanche diode (SPAD) arrays, which can detect single photons. To reduce the overall system size, the team also used small telescopes with an optical aperture of 47 mm as receiving optics. The time-of-flight of the photons makes it possible to determine the distance to the ground, and advanced computer algorithms help generate detailed three-dimensional images of the terrain from the sensor. 
“A key part of the new system is the special scanning mirrors that perform continuous fine scanning, capturing sub-pixel information of the ground targets,” said Feihu Xu, a member of the research team at USTC. “Also, a new photon-efficient computational algorithm extracts this sub-pixel information from a small number of raw photon detections, enabling the reconstruction of super-resolution 3D images despite the challenges posed by weak signals and strong solar noise.” Testing in real-world scenario To validate the new system, the researchers conducted daytime tests onboard a small airplane in Yiwu City, Zhejiang Province. In pre-flight ground tests, the LiDAR demonstrated a resolution of nearly six inches (15 cm) from nearly a mile (1.5 km). The team then implemented sub-pixel scanning and 3D deconvolution and found the resolution improved to 2.3 inches (six cm) from the same distance. “We were able to incorporate recent technology developments into a system that, in comparison to other state-of-the-art airborne LiDAR systems, employs the lowest laser power and the smallest optical aperture while still maintaining good performance in detection range and imaging resolution,” added Xu. The team is now working to improve the system’s performance and integration so that a small satellite can be equipped with such tech in the future. “Ultimately, our work has the potential to enhance our understanding of the world around us and contribute to a more sustainable and informed future for all,” Xu said in the press release. “For example, our system could be deployed on drones or small satellites to monitor changes in forest landscapes, such as deforestation or other impacts on forest health. It could also be used after earthquakes to generate 3D terrain maps that could help assess the extent of damage and guide rescue teams, potentially saving lives.”
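The ranging principle the article describes is simple to state in code: distance is half the round-trip time of the detected photon multiplied by the speed of light. An illustrative sketch:

```python
# Time-of-flight ranging, the core of any pulsed LiDAR system.
C = 299_792_458.0  # speed of light, m/s

def tof_to_range(t_round_trip_s):
    """Distance to target from the photon's round-trip time."""
    return C * t_round_trip_s / 2.0

# A photon detected 10 microseconds after the pulse left corresponds to ~1.5 km,
# comparable to the test range quoted above:
print(tof_to_range(10e-6))  # ≈ 1498.96 m
```

Single-photon systems apply exactly this relation, but build each range estimate statistically from many sparse photon detections, which is where the photon-efficient reconstruction algorithms mentioned above come in.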
  23. comparison of shallow water depth algorithm https://ejournal2.undip.ac.id/index.php/jkt/article/view/16050
  24. Hello folks! I am trying to use the Lyzenga algorithm to estimate water depth in shallow areas, probably depths under 8-10 meters, in lakes. First of all, how accurate is this algorithm in practice? Secondly, let's say I have the band values: can someone explain how to retrieve those depths? I am following the "Lyzenga Algorithm for Shallow Water Mapping Using Multispectral Sentinel-2 Imageries in Gili Noko Waters" paper, but there are three steps before the depth calculation: computing NDWI, computing NDCI, and, after filtering, the sun glint correction. Only then does it arrive at the following formula:

z = 28.32 * X1 - 36.25 * X2 + 9.42 * X3 + 16.35

where X1, X2, X3 are the RGB values respectively. What do you guys think? Can I just apply this formula? If not, what is the purpose of all the previous steps, and will they change the RGB values anyway?
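Not an authoritative answer, but a sketch of why the earlier steps matter: in Lyzenga-type methods the regression inputs are usually log-transformed, deep-water-corrected radiances, not raw RGB digital numbers, and the NDWI/NDCI masking plus sun-glint correction exist precisely to produce those corrected bands. The coefficients were also fit for Gili Noko, so for a lake they would need re-fitting against local soundings. A minimal sketch of the final step under those assumptions (function name, variable names, and all numeric inputs are mine, purely illustrative):

```python
import math

# Hedged sketch of the final Lyzenga-style regression. Assumes the bands
# have ALREADY been masked (NDWI/NDCI) and sun-glint corrected; the
# deep-water subtraction and log transform below are the usual Lyzenga
# linearization. Coefficients are the ones quoted from the Gili Noko
# paper and are site-specific.

def lyzenga_depth(r, g, b, deep=(0.02, 0.02, 0.02)):
    """Estimated depth in meters from corrected R, G, B band radiances.

    `deep` holds mean radiances over optically deep water for each band,
    subtracted before the log so only the bottom-reflected signal remains.
    """
    x1 = math.log(max(r - deep[0], 1e-6))
    x2 = math.log(max(g - deep[1], 1e-6))
    x3 = math.log(max(b - deep[2], 1e-6))
    return 28.32 * x1 - 36.25 * x2 + 9.42 * x3 + 16.35

# Illustrative values only, not real Sentinel-2 radiances:
print(round(lyzenga_depth(0.10, 0.08, 0.05), 2))  # ~13.78 m
```

So the formula itself is just a linear regression, but its X1-X3 are the corrected, log-transformed bands; skipping the preprocessing changes the values the regression sees and therefore the depths it returns.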
  25. Understanding GIS Mapping

GIS mapping is a technology and process used to capture, store, analyze, manage, and visualize geographic or spatial data. It combines geographical information, such as locations and terrain features, with various types of data, like environmental, social, economic, and demographic information, to create detailed and layered maps. These maps are powerful tools for understanding and interpreting spatial relationships, patterns, and trends.

Components of GIS Mapping

Key components of GIS mapping include:

1. Hardware. The hardware is the tangible aspect of GIS mapping technology. This includes computers, GPS devices, drones, and other equipment used to collect, process, and analyze geographic data.
2. Software. GIS mapping software provides a platform for creating maps, conducting spatial analyses, and sharing geographic information.
3. Data. Spatial data is the core of GIS mapping. It encompasses information about specific locations, attributes, and relationships. This data can come from various sources, such as satellite imagery, surveys, government databases, or user-generated content.
4. People. Skilled individuals, such as GIS analysts, cartographers, geographers, and geospatial scientists, are essential for using GIS technology effectively. They design, develop, and apply GIS solutions to address specific problems or research questions.

GIS mapping allows users to perform a wide range of spatial analyses, like measuring distances, determining optimal routes, assessing environmental changes, and identifying patterns within data. It therefore has a significant impact on humanitarian assistance and disaster preparedness and response. Now, what does this transformative impact look like?

How GIS Mapping Transforms Humanitarian Assistance

It Enhances Disaster Response

When disasters strike (and they usually do), whether they take the form of a natural catastrophe or a man-made crisis, every second counts.
Key decision-makers therefore need adequate data and spatial information to respond proactively. This is where GIS mapping technology shines. Real-time data on the location and extent of a disaster, along with intricate details about affected areas and population distribution, enables aid agencies to make well-informed decisions, coordinate efforts, and manage resources effectively. Crucially, the ability to visualize and analyze information on a map empowers responders to prioritize their actions based on the most pressing needs. This ultimately saves lives.

GIS Technology Helps Map Vulnerable Populations

In humanitarian work, the overarching goal is to help those who are most in need. Humanitarian assistance therefore relies heavily on the ability to identify and map vulnerable populations, and this is where GIS technologies play a crucial role. GIS mapping provides a powerful tool for identifying vulnerable populations, whether they are refugees fleeing conflict, communities at risk from disease outbreaks, or marginalized groups living in impoverished regions. By overlaying geographic data with information on poverty rates, access to healthcare, and food security, aid workers can make informed decisions about where and how to allocate resources effectively. This targeted approach ensures that aid reaches the individuals and communities that need it most.

GIS Mapping Provides Real-time Data

One of the most remarkable features of GIS mapping in humanitarian aid is its ability to provide real-time data, usually in the form of satellite imagery. This capability is particularly crucial in disaster management, where timely and accurate information is of paramount importance. For example, during a hurricane, GIS technology can track the storm’s path, predict areas likely to be impacted, and facilitate evacuation planning. It can also assess damage immediately after the event, allowing for a rapid and well-coordinated response.
This ‘bird’s-eye view’ of disaster-affected areas equips humanitarian workers with the data needed to make informed decisions and deploy resources efficiently. Additionally, real-time data gives them the flexibility to manage situations on the go.

GIS Mapping Helps Track and Monitor Epidemics and Disease Outbreaks

GIS mapping plays a pivotal role in monitoring and controlling disease outbreaks. During epidemics such as the Ebola crisis in West Africa, GIS technology tracked the spread of the disease, identified hotspots of infection, and helped health workers isolate cases and trace contacts. These insights were crucial in containment efforts and ultimately contributed to the control of the epidemic. By visualizing the geographic spread of the disease, humanitarian organizations could direct resources to the areas that needed them most, effectively limiting the outbreak’s reach.

It Enhances Disaster Risk Reduction and Management

In the field of disaster management, preparedness is often the best form of defense. GIS mapping aids in identifying disaster-prone regions, allowing communities to plan for potential crises. By creating detailed hazard maps, which include flood risk assessments, earthquake-prone areas, and other environmental hazards, this technology helps in developing preparedness plans and mitigating the impact of disasters. The ability to visualize potential risks empowers communities to take proactive measures, such as reinforcing infrastructure, developing evacuation plans, and building resilient shelters.

Crowdsourced Mapping

Crowdsourced mapping has proven to be a remarkable addition to humanitarian aid. It is a collaborative approach to creating and updating maps and geographic information using contributions from the general public. This method relies on the collective efforts of volunteers who provide geographic data, typically using digital tools.
Initiatives like OpenStreetMap have harnessed these efforts to contribute data on roads, buildings, and infrastructure in disaster-affected areas. This grassroots approach has been instrumental in improving the accuracy and completeness of maps in areas that were previously unmapped. Humanitarian organizations can then use this data in response efforts, making it a remarkable example of how technology and global collaboration can save lives. This collective action not only aids immediate response but also contributes to the resilience of affected communities.
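The spatial analyses mentioned earlier (measuring distances, determining routes, assessing proximity to affected populations) all build on basic geodesic arithmetic. A minimal sketch of one such primitive, great-circle distance via the haversine formula; the coordinates and function name are illustrative, and real GIS software uses ellipsoidal models for higher precision:

```python
import math

# Sketch of a basic GIS spatial-analysis primitive: great-circle distance
# between two lat/lon points, using a spherical-Earth approximation.

def haversine_km(lat1, lon1, lat2, lon2):
    """Distance in kilometers between two lat/lon points (degrees)."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# One degree of longitude at the equator is about 111.2 km:
print(round(haversine_km(0.0, 0.0, 0.0, 1.0), 1))  # 111.2
```

Chained over road networks or population grids, this is the kind of primitive that powers the routing and resource-allocation decisions described above.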