Lurker

Moderators
  • Posts

    5,545
  • Joined

  • Last visited

  • Days Won

    860

Lurker last won the day on June 11

Lurker had the most liked content!

About Lurker

  • Birthday 02/13/1983

Profile Information

  • Gender
    Male
  • Location
    INDONESIA
  • Interests
    GIS and Remote Sensing


Lurker's Achievements

  1. Does the error occur in all of the historical data you've checked?
  2. My previous post actually addresses your first question 😁 I saw that you already have the equation relating the band value and depth, so you should be fine applying it directly to get depth values.
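For readers following along: band-to-depth relations of the kind discussed in this thread are often fitted log-ratio models in the style of Stumpf et al. Below is a minimal sketch, assuming hypothetical reflectance values and regression coefficients; the function name, the coefficients `m1`/`m0`, and the inputs are illustrative, not taken from the original thread.

```python
import math

def stumpf_depth(band_blue, band_green, m1, m0, n=1000.0):
    """Estimate water depth from a two-band log ratio (Stumpf-style model).

    band_blue, band_green: water-surface reflectance values (hypothetical);
    m1, m0: coefficients fitted by regressing the ratio against known depths.
    """
    ratio = math.log(n * band_blue) / math.log(n * band_green)
    return m1 * ratio - m0

# Example with made-up coefficients from a hypothetical regression fit:
depth = stumpf_depth(band_blue=0.12, band_green=0.08, m1=55.0, m0=50.0)
```

In practice the coefficients come from calibrating against known depth points (e.g. echo-sounder soundings), so they are scene-specific and must be re-fitted per image.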
  3. Researchers at the University of Science and Technology of China (USTC) have developed a compact, lightweight single-photon LiDAR system that can be deployed in the air to generate high-resolution three-dimensional images with a low-power laser. According to a press release, the technology could be used for terrain mapping, environmental monitoring, and object identification.

LiDAR, which stands for Light Detection And Ranging, is widely used to capture geospatial information. The system emits pulses of laser light and measures the time taken for the reflected light to return, determining range, building digital twins of objects, and examining the Earth's surface. A common application is helping autonomous driving systems or airborne drones perceive their environment. However, this typically requires an extensive setup of LiDAR sensors, which is power-intensive. To reduce that energy consumption, USTC researchers devised a single-photon LiDAR system and tested it in an airborne configuration.

The single-photon LiDAR

The system is made possible by detectors that can register the tiny amount of laser light that returns after reflection. To build it, the researchers had to shrink the entire LiDAR system. It works like a regular LiDAR when sending light pulses toward its targets; to capture the faint reflections, the team used highly sensitive detectors called single-photon avalanche diode (SPAD) arrays, which can detect individual photons. To keep the overall system small, the team also used compact telescopes with an optical aperture of 47 mm as receiving optics. The time-of-flight of the photons makes it possible to determine the distance to the ground, and advanced computational algorithms reconstruct detailed three-dimensional images of the terrain from the sensor data.

"A key part of the new system is the special scanning mirrors that perform continuous fine scanning, capturing sub-pixel information of the ground targets," said Feihu Xu, a member of the research team at USTC. "Also, a new photon-efficient computational algorithm extracts this sub-pixel information from a small number of raw photon detections, enabling the reconstruction of super-resolution 3D images despite the challenges posed by weak signals and strong solar noise."

Testing in a real-world scenario

To validate the new system, the researchers conducted daytime tests aboard a small airplane in Yiwu City, Zhejiang Province. In pre-flight ground tests, the LiDAR demonstrated a resolution of nearly six inches (15 cm) from a distance of nearly a mile (1.5 km). The team then applied sub-pixel scanning and 3D deconvolution and found the resolution improved to 2.3 inches (6 cm) at the same distance.

"We were able to incorporate recent technology developments into a system that, in comparison to other state-of-the-art airborne LiDAR systems, employs the lowest laser power and the smallest optical aperture while still maintaining good performance in detection range and imaging resolution," added Xu. The team is now working to improve the system's performance and integration so that a small satellite can carry such technology in the future.

"Ultimately, our work has the potential to enhance our understanding of the world around us and contribute to a more sustainable and informed future for all," Xu said in the press release. "For example, our system could be deployed on drones or small satellites to monitor changes in forest landscapes, such as deforestation or other impacts on forest health. It could also be used after earthquakes to generate 3D terrain maps that could help assess the extent of damage and guide rescue teams, potentially saving lives."
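The time-of-flight ranging principle behind the article reduces to a one-line computation. This sketch (function names and the example numbers are illustrative, not from the USTC system) also shows why centimetre-level range resolution demands sub-nanosecond timing:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_tof(round_trip_seconds):
    """Convert a photon's round-trip time of flight to one-way distance."""
    return C * round_trip_seconds / 2.0

def timing_for_resolution(dz_meters):
    """Round-trip timing precision needed to resolve a height step dz."""
    return 2.0 * dz_meters / C

d = range_from_tof(10e-6)        # a 10 µs echo corresponds to roughly 1.5 km
t = timing_for_resolution(0.06)  # resolving 6 cm needs ~0.4 ns timing
```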
  4. Comparison of shallow-water depth algorithms: https://ejournal2.undip.ac.id/index.php/jkt/article/view/16050
  5. Multimodal machine learning models have been surging in popularity, marking a significant evolution in artificial intelligence (AI) research and development. These models, capable of processing and integrating data from multiple modalities such as text, images, and audio, matter because they can tackle complex real-world problems that traditional unimodal models struggle with. Fusing diverse data types lets these models extract richer insights, enhance decision-making, and ultimately drive innovation.

Among the burgeoning applications of multimodal machine learning, Visual Question Answering (VQA) models are particularly noteworthy. VQA models comprehend both an image and an accompanying textual query, providing answers or relevant information based on the content of the visual input. This opens up avenues for interactive systems, letting users engage with AI in a more intuitive and natural manner.

Despite their potential, however, deploying VQA models in critical scenarios such as disaster recovery presents unique challenges. Where internet connectivity is unreliable or unavailable, the models must run on tiny hardware platforms, yet the deep neural networks that power VQA demand substantial computational resources, rendering traditional edge computing hardware impractical.

Inspired by optimizations that have enabled powerful unimodal models to run on tinyML hardware, a team led by researchers at the University of Maryland has developed a novel multimodal model called TinyVQA that allows extremely resource-limited hardware to run VQA workloads. Using some clever techniques, the researchers compressed the model to the point that it could run inferences in a few tens of milliseconds on a common low-power processor found onboard a drone, while maintaining acceptable accuracy.

To achieve this, the team first created a deep learning VQA model similar to previously described state-of-the-art algorithms. This model was far too large for tinyML applications, but it contained a wealth of knowledge, so it was used as a teacher for a smaller student model. This practice, called knowledge distillation, captures many of the important associations found in the teacher model and encodes them in a more compact form in the student. Beyond having fewer layers and fewer parameters, the student model also used 8-bit quantization, which reduces both the memory footprint and the computational resources required when running inferences. Another optimization swapped regular convolution layers for depthwise separable convolution layers, further reducing model size with minimal impact on accuracy.

Having designed and trained TinyVQA, the researchers evaluated it on the FloodNet-VQA dataset, which contains thousands of images of flooded areas captured by a drone after a major storm. Questions were asked about the images to determine how well the model understood the scenes. The teacher model, which weighs in at 479 megabytes, achieved 81 percent accuracy. The much smaller TinyVQA model, only 339 kilobytes in size, achieved a very respectable 79.5 percent. Despite being over 1,000 times smaller, TinyVQA lost only 1.5 percent accuracy on average: not a bad trade-off at all!

In a practical trial, the model was deployed on the GAP8 microprocessor onboard a Crazyflie 2.0 drone. With inference times averaging 56 milliseconds on this platform, TinyVQA could realistically be used to assist first responders in emergency situations. And of course, many other autonomous, intelligent systems could be enabled by this technology.

source: hackster.io
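Knowledge distillation, as described in the article above, trains the student to match the teacher's softened output distribution. Here is a toy sketch in plain Python; the temperature and the example logits are illustrative, and this is not the actual TinyVQA training code (which the article does not show):

```python
import math

def softmax(logits, t=1.0):
    """Temperature-scaled softmax; a higher t softens the distribution."""
    exps = [math.exp(x / t) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, t=2.0):
    """Cross-entropy of the student against the teacher's softened outputs.

    In practice this term is blended with the ordinary hard-label loss;
    the logits here stand in for real model outputs.
    """
    p_teacher = softmax(teacher_logits, t)
    p_student = softmax(student_logits, t)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))

loss = distillation_loss([2.0, 0.5, -1.0], [2.5, 0.3, -0.8])
```

The loss is minimized when the student reproduces the teacher's distribution, which is how the compact student inherits the teacher's learned associations.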
  6. A new machine learning system can create height maps of urban environments from a single synthetic aperture radar (SAR) image, potentially accelerating disaster planning and response. Aerospace engineers at the University of the Bundeswehr in Munich claim their SAR2Height framework is the first to provide complete, if not perfect, three-dimensional city maps from a single SAR satellite.

When an earthquake devastates a city, information can be in short supply. With basic services disrupted, it can be difficult to assess how much damage occurred or where the need for humanitarian aid is greatest. Aerial surveys using laser-ranging lidar systems provide the gold standard for 3D mapping, but such systems are expensive to buy and operate, even without the added logistical difficulties of a major disaster. Remote sensing is another option, but optical satellite images are next to useless if the area is obscured by clouds or smoke.

Synthetic aperture radar, on the other hand, works day or night, whatever the weather. SAR is an active sensor that uses the reflections of signals beamed from a satellite toward the Earth's surface; the "synthetic aperture" comes from the radar using the satellite's own motion to mimic a larger antenna, capturing reflected signals with relatively long wavelengths. There are dozens of governmental and commercial SAR satellites orbiting the planet, and many can be tasked to image new locations in a matter of hours. However, SAR imagery is still inherently two-dimensional, and it can be even trickier to interpret than photographs, partly due to an effect called radar layover in which undamaged buildings appear to be toppling toward the sensor.

"Height is a super complex topic in itself," says Michael Schmitt, a professor at the University of the Bundeswehr. "There are a million definitions of what height is, and turning a satellite image into a meaningful height in a meaningful world geometry is a very complicated endeavor."

Schmitt and his colleague Michael Recla started by sourcing SAR images of 51 cities from the TerraSAR-X satellite, a partnership between the public German Aerospace Center and the private contractor Airbus Defence and Space. The researchers then obtained high-quality height maps for the same cities, mostly generated by lidar surveys but some by planes or drones carrying stereo cameras. The next step was to make a one-to-one, pixel-to-pixel mapping between the height maps and the SAR images on which they could train a deep neural network.

The results were amazing, says Schmitt. "We trained our model purely on TerraSAR-X imagery but, out of the box, it works quite well on imagery from other commercial satellites." He says the model, which takes only minutes to run, can predict the height of buildings in SAR images with an accuracy of around three meters, the height of a single story in a typical building. That means the system should be able to spot almost every building across a city that has suffered significant damage.

Pietro Milillo, a professor of geosensing systems engineering at the University of Houston, hopes to use Schmitt and Recla's model in an ongoing NASA-funded project on earthquake recovery. "We can go from a map of building heights to a map of probability of collapse of buildings," he says. Later this month, Milillo intends to validate his application by visiting the site of an earthquake in Morocco last year that killed over 2,900 people.

But the AI model is still far from perfect, warns Schmitt. It struggles to accurately predict the height of skyscrapers and is biased toward North American and European cities, because many cities in developing nations have not had regular lidar mapping flights to provide representative training data. And the longer the gap between the lidar flight and the SAR images, the more buildings will have been built or replaced, and the less reliable the model's predictions become.

Even in richer countries, "we're really dependent on the slow revisit cycles of governments flying lidar missions and making the data publicly available," says Carl Pucci, founder of EO59, a Virginia Beach, Va.-based company specializing in SAR software. "It just sucks. Being able to produce 3D from SAR alone would really be a revolution."

Schmitt says the SAR2Height model now incorporates data from 177 cities and is getting better all the time. "We are very close to reconstructing actual building models from single SAR images," he says. "But you have to keep in mind that our method will never be as accurate as classic stereo or lidar. It will always remain a form of best guess instead of a high-precision measurement."

source: ieee
  7. Satellite images analyzed by AI are emerging as a new tool for finding unmapped roads that bring environmental destruction to wilderness areas. James Cook University's Distinguished Professor Bill Laurance was co-author of a study analyzing the reliability of an automated approach to large-scale road mapping that uses convolutional neural networks trained on road data from satellite images.

He said the Earth is experiencing an unprecedented wave of road building, with some 25 million kilometers of new paved roads expected by mid-century. "Roughly 90% of all road construction is occurring in developing nations, including many tropical and subtropical regions of exceptional biodiversity. By sharply increasing access to formerly remote natural areas, poorly regulated road development triggers dramatic increases in environmental disruption due to activities such as logging, mining and land clearing," said Professor Laurance.

He said many roads in such regions, both legal and illegal, are unmapped, with road-mapping studies in the Brazilian Amazon, Asia-Pacific and elsewhere regularly finding up to 13 times more road length than reported in government or road databases. "Traditionally, road mapping meant tracing road features by hand, using satellite imagery. This is incredibly slow, making it almost impossible to stay on top of the global road tsunami," said Professor Laurance.

The researchers trained three machine-learning models to automatically map road features from high-resolution satellite imagery covering rural, generally remote and often forested areas of Papua New Guinea, Indonesia and Malaysia. "This study shows the remarkable potential of AI for large-scale tasks like global road-mapping. We're not there yet, but we're making good progress," said Professor Laurance. "Proliferating roads are probably the most important direct threat to tropical forests globally. In a few more years, AI might give us the means to map and monitor roads across the world's most environmentally critical areas."

journal: https://www.mdpi.com/2072-4292/16/5/839
  8. The European Space Agency (ESA) has greenlit the development of the NanoMagSat constellation, marking a significant advancement in the use of small satellites for scientific missions. NanoMagSat, a flagship mission spearheaded by Open Cosmos together with IPGP (Université Paris Cité, Institut de physique du globe de Paris, CNRS) and CEA-Léti, aims to revolutionise our understanding of Earth's magnetic field and ionospheric environment.

As a follow-on from ESA's successful Earth Explorer Swarm mission, NanoMagSat will use a constellation of three 16U satellites equipped with state-of-the-art instruments to monitor magnetic fields and ionospheric phenomena. The mission joins the Scout family, an ESA programme to deliver scientific small-satellite missions within a budget of less than €35 million. The decision to proceed with NanoMagSat follows the successful completion of Risk Retirement Activities, including the development of a 3 m-long deployable boom and a satellite platform with exceptional magnetic cleanliness, key to ensuring state-of-the-art magnetic accuracy.

ESA's Director of Earth Observation Programmes, Simonetta Cheli, said of this news: "We are very pleased to add two new Scouts to our Earth observation mission portfolio. These small science missions perfectly complement our more traditional existing and future Earth Explorer missions, and will bring exciting benefits to Earth."
  9. Leica Geosystems, part of Hexagon, introduces the Leica TerrainMapper-3 airborne LiDAR sensor, featuring new scan-pattern configurability to support the widest variety of applications and requirements in a single system.

Building upon Leica Geosystems' legacy of LiDAR efficiency, the TerrainMapper-3 provides three scan patterns for superior productivity, letting operators tailor the sensor's performance to specific applications. Circle scan patterns enhance 3D modelling of urban areas or steep terrain, while ellipse scan patterns optimise data capture for more traditional mapping applications. Skewed-ellipse scan patterns improve point density for infrastructure and corridor-mapping applications. The sensor's higher scan speed allows customers to fly the aircraft faster while maintaining the highest data quality, and the 60-degree adjustable field of view maximises data collection with fewer flight lines. The TerrainMapper-3 is further complemented by the Leica MFC150 4-band camera, operating with the same 60-degree field-of-view coverage as the LiDAR for exact data consistency.

Thanks to reduced beam divergence, the TerrainMapper-3 provides improved planimetric accuracy, while new MPiA (Multiple Pulses in Air) handling guarantees more consistent data acquisition, even in steep terrain, providing users with unparalleled reliability and precision. The new system introduces real-time full-waveform recording at the maximum pulse rate, opening new opportunities for advanced and automated point classification. The TerrainMapper-3 integrates seamlessly with the Leica HxMap end-to-end processing workflow, supporting users from mission planning to product generation to extract the greatest value from the collected data.
  10. The 9.0 release adds several new features, including a Google Maps source (finally!), improved WebGL line rendering, and a new symbol and text decluttering implementation. We also improved and broadened flat-style support for both the WebGL and Canvas 2D renderers. For a better developer experience, we made more types generic and fixed some issues with types.

Backwards-incompatible changes

Improved render order of decluttered items: decluttered items in Vector and VectorTile layers now maintain the render order of the layers, and within a layer; they no longer get lifted to a higher place in the stack. For most use cases, this is the desired behavior. If, however, you have been relying on the previous behavior, you now have to create separate layers above the layer stack with just the styles for the decluttered items.

Removal of Map#flushDeclutterItems(): it is no longer necessary to call this function to put layers above decluttered symbols and text, because decluttering no longer lifts elements above the layer stack. To upgrade, simply remove the code that calls flushDeclutterItems().

Changes in ol/style: removed ol/style/RegularShape's radius1 property; use radius for regular polygons, or radius and radius2 for stars. Removed the shape-radius1 property from ol/style/flat~FlatShape; use shape-radius instead.

GeometryCollection constructor: ol/geom/GeometryCollection can no longer be created without providing a Geometry array. Empty arrays are still valid.

ol/interaction/Draw: the finishDrawing() method now returns the drawn feature, or null if no drawing could be finished. Previously it returned undefined.

page: https://github.com/openlayers/openlayers/releases/tag/v9.0.0
  11. The Association for Geographic Information (AGI) and the Government Geography Profession (GGP) have agreed to work together, combining their experience, expertise and outreach to further the impact of geospatial data and technology within the public sector. Working together, they will help grow the geospatial community and build on recent activities such as the AGI's Skills Roundtable.

"The UK is at the forefront of geospatial. Now more than ever, geographers are combining increasing quantities of geospatial information with advances in technology, such as AI and ML, to drive new insights on our place in the world," commented David Wood, Head of the Government Geography Profession. "The profession is leading the way in government and the public sector, recognising and encouraging the use of geography and geographical sciences within and across government. By working with the AGI, we can increase awareness of, and therefore engagement with, geographers across government and align our ambitions and activities with the wider geospatial community."

"Many of government's greatest challenges are time- and place-related, and therefore the data and technology that will help address and resolve them must also have location at their heart," added Adam Burke, Past Chair of the Association for Geographic Information. "By partnering with GGP, we can help ensure the geospatial ecosystem continues to grow sustainably, both within government and beyond, and is utilised across diverse industry sectors and multiple applications to deliver positive outcomes."

AGI is the UK's geospatial membership organisation, leading, connecting and developing a community of members who use and benefit from geographic information. An independent and impartial organisation, the AGI works with members and the wider community alongside government policy makers, delivers professional development and provides a lead for best practice across the industry. Its mission is to nurture, create and support a thriving community that actively supports a sustainable future, and it aims to achieve this by nurturing and connecting active GI communities, supporting career and skills development, and providing thought leadership to inspire future generations.

The GGP, established in 2018, is made up of around 1,500 professional geographers in roles across the public sector. The profession is working 'to create and grow a high-profile, proud and effective geography profession that attracts fresh talent and has a secure place at the heart of decision making'. This is being achieved by creating the environment for geographers to have maximum impact, professionalising and progressing the use and applications of geography, and growing a diverse and inclusive community within government and the wider public sector.

page: https://www.directionsmag.com/pressrelease/12860
  12. The Copernicus Open Access Hub is closing at the end of October 2023; Copernicus Sentinel data are now fully available in the Copernicus Data Space Ecosystem.

As previously announced in January, the Copernicus Open Access Hub continued full operations until the end of June 2023, followed by a gradual ramp-down phase until September 2023. The Hub has been exceptionally extended for another month and will cease operations at the end of October 2023. To continue accessing Copernicus Sentinel data, users will need to self-register on the new Copernicus Data Space Ecosystem, which provides a migration guide. The new service offers access to a wide range of Earth observation data and services, as well as new tools, a GUI and APIs to help users explore and analyse satellite imagery. Discover more at https://dataspace.copernicus.eu .

A system of platforms to access EO data

The Copernicus Data Space Ecosystem will be the main distribution platform for data from the EU Copernicus missions. Instant access to full and always up-to-date Earth observation data archives is supported by a new, more intuitive browser interface, the Copernicus Browser. Since 2015, the Copernicus Open Access Hub has supported direct download of Sentinel satellite data for a wide range of operational applications by hundreds of thousands of users. However, technology has moved on, and the Copernicus Data Space Ecosystem was recently launched as a new system of platforms for accessing Sentinel data. As part of this process, the current access point will be gradually wound down from July 2023 and will no longer operate from the end of October 2023.

This post demonstrates how to migrate your workflow from the Copernicus Open Access Hub to the APIs of the Copernicus Data Space Ecosystem. In this post, we will show you how to:

  • set up your credentials
  • use OData to search the catalog and download Sentinel-2 L2A granules in .SAFE format
  • search, discover and download gridded Sentinel-2 L2A data using the Process API

Increase in data quality, quantity and accessibility

With the glut of free and open data in recent years, and the improvements in revisit times and spatial and temporal resolution, applications using Earth observation data have blossomed. For example, before 2013 you would likely have used Landsat 8 data for land cover mapping, with a revisit time of 16 days at 30 m spatial resolution. In 2023, we have access to Sentinel-2, with a revisit time of 3-5 days at 10 m resolution, enabling you not just to map land cover but to monitor changes at higher spatial and temporal resolutions. So while it was feasible to download, process and analyse individual acquisitions in the past, this approach is no longer effective, and it makes more sense to process data in the cloud. This is where the new APIs provided by the Copernicus Data Space Ecosystem come in.

official page: https://dataspace.copernicus.eu/
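As a concrete illustration of the OData search step mentioned above, here is a minimal sketch that assembles a catalogue query for Sentinel-2 L2A products. The endpoint and $filter grammar follow the Copernicus Data Space Ecosystem OData documentation as best I recall it, so verify the attribute names against the official guide; the AOI polygon, dates and cloud-cover threshold are made up:

```python
from urllib.parse import quote

ODATA_PRODUCTS = "https://catalogue.dataspace.copernicus.eu/odata/v1/Products"

def build_s2l2a_query(aoi_wkt, start_iso, end_iso, max_cloud=20.0):
    """Build an OData $filter URL for Sentinel-2 L2A products over an AOI.

    aoi_wkt: area of interest as WKT (WGS84); start_iso/end_iso: ISO-8601
    timestamps bounding the acquisition dates. Values here are illustrative.
    """
    flt = (
        "Collection/Name eq 'SENTINEL-2'"
        " and Attributes/OData.CSC.StringAttribute/any("
        "att:att/Name eq 'productType'"
        " and att/OData.CSC.StringAttribute/Value eq 'S2MSI2A')"
        " and Attributes/OData.CSC.DoubleAttribute/any("
        "att:att/Name eq 'cloudCover'"
        f" and att/OData.CSC.DoubleAttribute/Value le {max_cloud})"
        f" and OData.CSC.Intersects(area=geography'SRID=4326;{aoi_wkt}')"
        f" and ContentDate/Start gt {start_iso}"
        f" and ContentDate/Start lt {end_iso}"
    )
    return f"{ODATA_PRODUCTS}?$filter={quote(flt)}"

url = build_s2l2a_query(
    "POLYGON((110.0 -8.0, 110.5 -8.0, 110.5 -7.5, 110.0 -7.5, 110.0 -8.0))",
    "2023-06-01T00:00:00.000Z",
    "2023-06-30T00:00:00.000Z",
)
```

The resulting URL can be fetched with any HTTP client to list matching products; actually downloading them additionally requires an access token obtained via the self-registration described above.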
  13. This is the Indonesian-language sub-forum, and you may notice this topic is five years old.
  14. Any rough guess based on your experience?