
Lurker

Moderators
  • Posts

    5,522
  • Joined

  • Last visited

  • Days Won

    848

Lurker last won the day on April 21

Lurker had the most liked content!

About Lurker

  • Birthday 02/13/1983

Profile Information

  • Gender
    Male
  • Location
    INDONESIA
  • Interests
    GIS and Remote Sensing

Recent Profile Visitors


Lurker's Achievements

  1. Multimodal machine learning models have been surging in popularity, marking a significant evolution in artificial intelligence (AI) research and development. These models, capable of processing and integrating data from multiple modalities such as text, images, and audio, matter because they can tackle complex real-world problems that traditional unimodal models struggle with. The fusion of diverse data types enables these models to extract richer insights, enhance decision-making, and ultimately drive innovation. Among the burgeoning applications of multimodal machine learning, Visual Question Answering (VQA) models have emerged as particularly noteworthy. VQA models comprehend both an image and an accompanying textual query, providing answers or relevant information based on the content of the visual input. This capability opens up avenues for interactive systems, enabling users to engage with AI in a more intuitive and natural manner.

However, despite their immense potential, deploying VQA models, especially in critical scenarios such as disaster recovery efforts, presents unique challenges. In situations where internet connectivity is unreliable or unavailable, deploying these models on tiny hardware platforms becomes essential. Yet the deep neural networks that power VQA models demand substantial computational resources, rendering traditional edge computing hardware impractical. Inspired by optimizations that have enabled powerful unimodal models to run on tinyML hardware, a team led by researchers at the University of Maryland has developed a novel multimodal model called TinyVQA that allows extremely resource-limited hardware to run VQA models. Using some clever techniques, the researchers compressed the model to the point that it could run inferences in a few tens of milliseconds on a common low-power processor found onboard a drone, while maintaining acceptable accuracy.

To achieve this, the team first created a deep learning VQA model similar to other state-of-the-art algorithms that have been previously described. This model was far too large for tinyML applications, but it contained a wealth of knowledge. Accordingly, it was used as a teacher for a smaller student model. This practice, called knowledge distillation, captures many of the important associations found in the teacher model and encodes them in a more compact form in the student model. In addition to having fewer layers and fewer parameters, the student model also used 8-bit quantization, which reduces both the memory footprint and the computational resources required when running inferences. Another optimization swapped regular convolution layers for depthwise separable convolution layers, which further reduced model size with minimal impact on accuracy.

Having designed and trained TinyVQA, the researchers evaluated it on the FloodNet-VQA dataset, which contains thousands of images of flooded areas captured by a drone after a major storm. Questions were asked about the images to determine how well the model understood the scenes. The teacher model, which weighs in at 479 megabytes, achieved an accuracy of 81 percent. The much smaller TinyVQA model, only 339 kilobytes in size, achieved a very impressive 79.5 percent.

Despite being over 1,000 times smaller, TinyVQA lost only 1.5 percentage points of accuracy, not a bad trade-off at all! In a practical trial, the model was deployed on the GAP8 microprocessor onboard a Crazyflie 2.0 drone. With inference times averaging 56 milliseconds on this platform, the researchers demonstrated that TinyVQA could realistically assist first responders in emergency situations. And of course, this technology could also enable many other autonomous, intelligent systems. source: hackster.io
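To make the two compression techniques mentioned above concrete, here is a minimal PyTorch sketch of a depthwise separable convolution block and a knowledge distillation loss. The layer sizes, temperature, and loss weighting are illustrative assumptions, not the authors' actual TinyVQA code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthwiseSeparableConv(nn.Module):
    """A depthwise convolution followed by a pointwise (1x1) convolution:
    far fewer parameters than a regular Conv2d with the same channels."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend cross-entropy on ground-truth labels with a KL term that pulls
    the student toward the teacher's softened answer distribution."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

During training, a frozen teacher's logits for each image-question pair would be fed to this loss alongside the student's logits; 8-bit quantization would then typically be applied with a framework's post-training quantization tools.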
  2. A new machine learning system can create height maps of urban environments from a single synthetic aperture radar (SAR) image, potentially accelerating disaster planning and response. Aerospace engineers at the University of the Bundeswehr in Munich claim their SAR2Height framework is the first to provide complete, if not perfect, three-dimensional city maps from a single SAR satellite.

When an earthquake devastates a city, information can be in short supply. With basic services disrupted, it can be difficult to assess how much damage occurred or where the need for humanitarian aid is greatest. Aerial surveys using laser-ranging lidar systems provide the gold standard for 3D mapping, but such systems are expensive to buy and operate, even without the added logistical difficulties of a major disaster. Remote sensing is another option, but optical satellite images are next to useless if the area is obscured by clouds or smoke.

Synthetic aperture radar, on the other hand, works day or night, whatever the weather. SAR is an active sensor that uses the reflections of signals beamed from a satellite towards the Earth's surface; the "synthetic aperture" part comes from the radar using the satellite's own motion to mimic a larger antenna, to capture reflected signals with relatively long wavelengths. There are dozens of governmental and commercial SAR satellites orbiting the planet, and many can be tasked to image new locations in a matter of hours. However, SAR imagery is still inherently two-dimensional and can be even trickier to interpret than photographs. This is partly due to an effect called radar layover, where undamaged buildings appear to be toppling towards the sensor. "Height is a super complex topic in itself," says Michael Schmitt, a professor at the University of the Bundeswehr. "There are a million definitions of what height is, and turning a satellite image into a meaningful height in a meaningful world geometry is a very complicated endeavor."

Schmitt and his colleague Michael Recla started by sourcing SAR images for 51 cities from the TerraSAR-X satellite, a partnership between the public German Aerospace Center and the private contractor Airbus Defence and Space. The researchers then obtained high-quality height maps for the same cities, mostly generated by lidar surveys but some by planes or drones carrying stereo cameras. The next step was to make a one-to-one, pixel-to-pixel mapping between the height maps and the SAR images, on which they could train a deep neural network.

The results were amazing, says Schmitt. "We trained our model purely on TerraSAR-X imagery, but out of the box it works quite well on imagery from other commercial satellites." He says the model, which takes only minutes to run, can predict the height of buildings in SAR images with an accuracy of around three meters, the height of a single story in a typical building. That means the system should be able to spot almost every building across a city that has suffered significant damage.

Pietro Milillo, a professor of geosensing systems engineering at the University of Houston, hopes to use Schmitt and Recla's model in an ongoing NASA-funded project on earthquake recovery. "We can go from a map of building heights to a map of probability of collapse of buildings," he says. Later this month, Milillo intends to validate his application by visiting the site of an earthquake in Morocco last year that killed over 2,900 people.

But the AI model is still far from perfect, warns Schmitt. It struggles to accurately predict the height of skyscrapers and is biased towards North American and European cities, because many cities in developing nations have not had regular lidar mapping flights to provide representative training data. The longer the gap between the lidar flight and the SAR images, the more buildings will have been built or replaced, and the less reliable the model's predictions. Even in richer countries, "we're really dependent on the slow revisit cycles of governments flying lidar missions and making the data publicly available," says Carl Pucci, founder of EO59, a Virginia Beach, Va.-based company specializing in SAR software. "It just sucks. Being able to produce 3D from SAR alone would really be a revolution."

Schmitt says the SAR2Height model now incorporates data from 177 cities and is getting better all the time. "We are very close to reconstructing actual building models from single SAR images," he says. "But you have to keep in mind that our method will never be as accurate as classic stereo or lidar. It will always remain a form of best guess instead of a high-precision measurement." source: IEEE
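As a rough illustration of the training setup such work implies, here is a minimal sketch of pixel-wise height regression from single-channel SAR tiles, assuming co-registered SAR/lidar training pairs. The tiny encoder-decoder and L1 loss are assumptions for illustration only; this is not the published SAR2Height network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHeightNet(nn.Module):
    """Small encoder-decoder mapping a 1-channel SAR tile to a height map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyHeightNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch standing in for co-registered SAR tiles and lidar heights (m).
sar = torch.randn(8, 1, 256, 256)
lidar_height = torch.rand(8, 1, 256, 256) * 30.0

optimizer.zero_grad()
pred = model(sar)
loss = F.l1_loss(pred, lidar_height)  # mean absolute error, in meters
loss.backward()
optimizer.step()
```

An L1 loss directly optimizes mean absolute height error, the natural counterpart of the roughly three-meter accuracy figure quoted above.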
  3. Satellite images analyzed by AI are emerging as a new tool for finding unmapped roads that bring environmental destruction to wilderness areas. James Cook University's Distinguished Professor Bill Laurance was co-author of a study analyzing the reliability of an automated approach to large-scale road mapping, using convolutional neural networks trained on road data from satellite images.

He said the Earth is experiencing an unprecedented wave of road building, with some 25 million kilometers of new paved roads expected by mid-century. "Roughly 90% of all road construction is occurring in developing nations, including many tropical and subtropical regions of exceptional biodiversity. By sharply increasing access to formerly remote natural areas, poorly regulated road development triggers dramatic increases in environmental disruption due to activities such as logging, mining and land clearing," said Professor Laurance.

He said many roads in such regions, both legal and illegal, are unmapped, with road-mapping studies in the Brazilian Amazon, Asia-Pacific and elsewhere regularly finding up to 13 times more road length than reported in government or road databases. "Traditionally, road mapping meant tracing road features by hand, using satellite imagery. This is incredibly slow, making it almost impossible to stay on top of the global road tsunami," said Professor Laurance.

The researchers trained three machine-learning models to automatically map road features from high-resolution satellite imagery covering rural, generally remote and often forested areas of Papua New Guinea, Indonesia and Malaysia. "This study shows the remarkable potential of AI for large-scale tasks like global road-mapping. We're not there yet, but we're making good progress," said Professor Laurance. "Proliferating roads are probably the most important direct threat to tropical forests globally. In a few more years, AI might give us the means to map and monitor roads across the world's most environmentally critical areas." journal: https://www.mdpi.com/2072-4292/16/5/839
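To make the road-length comparison concrete, here is a small, hypothetical Python sketch of one common post-processing step: skeletonize a model's binary road mask and convert the centreline pixel count into kilometres. The function and the 10 m ground sample distance are assumptions, and diagonal pixels are undercounted, so treat this as a rough estimate rather than the study's actual pipeline.

```python
import numpy as np
from skimage.morphology import skeletonize

def road_length_km(road_mask: np.ndarray, gsd_m: float = 10.0) -> float:
    """Approximate total road length from a binary road mask.

    road_mask: 2D boolean array, True where the model predicts road.
    gsd_m: ground sample distance in meters per pixel.
    """
    skeleton = skeletonize(road_mask)       # thin roads to 1-pixel centrelines
    return skeleton.sum() * gsd_m / 1000.0  # pixels -> meters -> kilometres

# Example with a hypothetical saved prediction mask:
predicted = road_length_km(np.load("road_mask.npy"))
print(f"Mapped road length: {predicted:.1f} km")
```

Summing mapped kilometres per region this way is how a "13 times more road length than reported" style comparison against official databases can be made.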
  4. The European Space Agency (ESA) has greenlit the development of the NanoMagSat constellation, marking a significant advancement in the use of small satellites for scientific missions. NanoMagSat, a flagship mission spearheaded by Open Cosmos together with IPGP (Université Paris Cité, Institut de physique du globe de Paris, CNRS) and CEA-Léti, aims to revolutionise our understanding of Earth's magnetic field and ionospheric environment.

As a follow-on from ESA's successful Earth Explorer Swarm mission, NanoMagSat will use a constellation of three 16U satellites equipped with state-of-the-art instruments to monitor magnetic fields and ionospheric phenomena. The mission joins the Scout family, an ESA programme to deliver scientific small satellite missions within a budget of less than €35 million. The decision to proceed with NanoMagSat follows the successful completion of Risk Retirement Activities, including the development of a 3 m-long deployable boom and a satellite platform with exceptional magnetic cleanliness, key to ensuring state-of-the-art magnetic accuracy.

ESA's Director of Earth Observation Programmes, Simonetta Cheli, said of the news: "We are very pleased to add two new Scouts to our Earth observation mission portfolio. These small science missions perfectly complement our more traditional existing and future Earth Explorer missions, and will bring exciting benefits to Earth."
  5. Leica Geosystems, part of Hexagon, introduces the Leica TerrainMapper-3 airborne LiDAR sensor, featuring new scan pattern configurability to support the widest variety of applications and requirements in a single system.

Building upon Leica Geosystems' legacy of LiDAR efficiency, the TerrainMapper-3 provides three scan patterns for superior productivity, letting users tailor the sensor's performance to specific applications. Circle scan patterns enhance 3D modelling of urban areas or steep terrain, while ellipse scan patterns optimise data capture for more traditional mapping applications. Skew ellipse scan patterns improve point density for infrastructure and corridor mapping applications. The sensor's higher scan speed allows customers to fly the aircraft faster while maintaining the highest data quality, and the 60-degree adjustable field of view maximises data collection with fewer flight lines. The TerrainMapper-3 is further complemented by the Leica MFC150 4-band camera, operating with the same 60-degree field-of-view coverage as the LiDAR for exact data consistency.

Thanks to reduced beam divergence, the TerrainMapper-3 provides improved planimetric accuracy, while new MPiA (Multiple Pulses in Air) handling guarantees more consistent data acquisition, even in steep terrain, providing users with unparalleled reliability and precision. The new system introduces possibilities for real-time full-waveform recording at maximum pulse rate, opening new opportunities for advanced and automated point classification. The TerrainMapper-3 seamlessly integrates with the Leica HxMap end-to-end processing workflow, supporting users from mission planning to product generation to extract the greatest value from the collected data.
  6. The 9.0 release adds several new features, including a Google Maps source (finally!), improved WebGL line rendering, and a new symbol and text decluttering implementation. We also improved and broadened flat styles support for both the WebGL and Canvas 2D renderers. For a better developer experience, we made more types generic and fixed some issues with types.

Backwards incompatible changes:

Improved render order of decluttered items: Decluttered items in Vector and VectorTile layers now maintain the render order of the layers and the order within a layer. They no longer get lifted to a higher place in the stack. For most use cases, this is the desired behavior. If, however, you have been relying on the previous behavior, you now have to create separate layers above the layer stack, with just the styles for the decluttered items.

Removal of Map#flushDeclutterItems(): It is no longer necessary to call this function to put layers above decluttered symbols and text, because decluttering no longer lifts elements above the layer stack. To upgrade, simply remove the code where you use the flushDeclutterItems() method.

Changes in ol/style: Removed ol/style/RegularShape's radius1 property; use radius for regular polygons, or radius and radius2 for stars. Removed the shape-radius1 property from ol/style/flat~FlatShape; use shape-radius instead.

GeometryCollection constructor: ol/geom/GeometryCollection can no longer be created without providing a Geometry array. Empty arrays are still valid.

ol/interaction/Draw: The finishDrawing() method now returns the drawn feature, or null if no drawing could be finished. Previously it returned undefined. page: https://github.com/openlayers/openlayers/releases/tag/v9.0.0
  7. The Association for Geographic Information (AGI) and the Government Geography Profession (GGP) have agreed to work together, combining their experience, expertise and outreach to further the impact of geospatial data and technology within the public sector. By working together, they will help grow the geospatial community, building on recent activities such as the AGI's Skills Roundtable.

"The UK is at the forefront of geospatial. Now more than ever, geographers are combining increasing quantities of geospatial information with advances in technology, such as AI and ML, to drive new insights on our place in the world," commented David Wood, Head of the Government Geography Profession. "The profession is leading the way in government and the public sector, recognising and encouraging the use of geography and geographical sciences within and across government. By working with the AGI, we can increase awareness of, and therefore engagement with, geographers across government, and align our ambitions and activities with the wider geospatial community."

"Many of government's greatest challenges are time- and place-related, and therefore the data and technology that will help address and resolve them must also have location at their heart," added Adam Burke, Past Chair of the Association for Geographic Information. "By partnering with the GGP, we can help ensure the geospatial ecosystem continues to grow sustainably, both within government and beyond, and is utilised across diverse industry sectors and multiple applications to deliver positive outcomes."

The AGI is the UK's geospatial membership organisation, leading, connecting and developing a community of members who use and benefit from geographic information. An independent and impartial organisation, the AGI works with members and the wider community alongside government policy makers, delivers professional development, and provides a lead for best practice across the industry. Its mission is to nurture, create and support a thriving community that actively supports a sustainable future, which it aims to achieve by nurturing and connecting active GI communities, supporting career and skills development, and providing thought leadership to inspire future generations.

The GGP, established in 2018, is made up of around 1,500 professional geographers in roles across the public sector. The profession is working 'to create and grow a high-profile, proud and effective geography profession that attracts fresh talent and has a secure place at the heart of decision making'. This is being achieved by creating the environment for geographers to have maximum impact, professionalising and progressing the applications of geography, and growing a diverse and inclusive community within government and the wider public sector. page: https://www.directionsmag.com/pressrelease/12860
  8. The Copernicus Open Access Hub is closing at the end of October 2023. Copernicus Sentinel data are now fully available in the Copernicus Data Space Ecosystem.

As previously announced in January, the Copernicus Open Access Hub service continued full operations until the end of June 2023, followed by a gradual ramp-down phase until September 2023. The Copernicus Open Access Hub has been exceptionally extended for another month and will cease operations at the end of October 2023. To continue accessing Copernicus Sentinel data, users will need to self-register on the new Copernicus Data Space Ecosystem. A guide for migration is available here. The new service offers access to a wide range of Earth observation data and services, as well as new tools, a GUI and APIs to help users explore and analyse satellite imagery. Discover more about the Copernicus Data Space Ecosystem at https://dataspace.copernicus.eu .

A system of platforms to access EO data: The Copernicus Data Space Ecosystem will be the main distribution platform for data from the EU Copernicus missions. Instant access to full and always up-to-date Earth observation data archives is supported by a new, more intuitive browser interface, the Copernicus Browser. Since 2015, the Copernicus Open Access Hub has supported direct download of Sentinel satellite data for a wide range of operational applications, serving hundreds of thousands of users over the last decade. However, technology has moved on, and the Copernicus Data Space Ecosystem was recently launched as a new system of platforms for accessing Sentinel data. As part of this process, the current access point has been gradually wound down from July 2023 and will no longer operate from the end of October 2023.

This post demonstrates how to migrate your workflow from the Copernicus Open Access Hub to the APIs of the Copernicus Data Space Ecosystem. In this post, we will show you how to:
  • set up your credentials
  • use OData to search the catalogue and download Sentinel-2 L2A granules in .SAFE format
  • search, discover and download gridded Sentinel-2 L2A data using the Process API

Increase in data quality, quantity and accessibility: With the wealth of free and open data in recent years, shorter revisit times, and higher spatial and temporal resolutions, applications using Earth observation data have blossomed. For example, before 2013 you would likely have used Landsat 8 data for land cover mapping, with a revisit time of 16 days at 30 m spatial resolution. In 2023, we have access to Sentinel-2, with a revisit time of 3-5 days at 10 m resolution, enabling you not just to map land cover but to monitor changes at higher spatial and temporal resolutions. While it was feasible to download, process and analyse individual acquisitions in the past, this approach is no longer effective today, and it makes more sense to process data in the cloud. This is where the new APIs provided by the Copernicus Data Space Ecosystem come in; a sketch of the credentials and OData steps follows below. official page: https://dataspace.copernicus.eu/
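As a sketch of the credentials and OData steps listed above, the following Python uses the token and catalogue endpoints published in the Copernicus Data Space Ecosystem documentation; endpoint paths and parameters may change, so check https://dataspace.copernicus.eu/ for current details.

```python
import requests

# 1. Exchange your registered CDSE username/password for an access token.
token = requests.post(
    "https://identity.dataspace.copernicus.eu/auth/realms/CDSE"
    "/protocol/openid-connect/token",
    data={
        "grant_type": "password",
        "client_id": "cdse-public",
        "username": "user@example.com",   # your CDSE account
        "password": "your-password",
    },
).json()["access_token"]

# 2. Search the OData catalogue for recent Sentinel-2 L2A products.
resp = requests.get(
    "https://catalogue.dataspace.copernicus.eu/odata/v1/Products",
    params={
        "$filter": "Collection/Name eq 'SENTINEL-2'"
                   " and contains(Name,'L2A')"
                   " and ContentDate/Start gt 2023-09-01T00:00:00.000Z",
        "$top": "5",
    },
)
products = resp.json()["value"]
for p in products:
    print(p["Id"], p["Name"])

# 3. Download the first product as a zipped .SAFE archive. Redirects are
# followed manually with a session so the bearer token is not dropped.
session = requests.Session()
session.headers["Authorization"] = f"Bearer {token}"
pid = products[0]["Id"]
url = f"https://catalogue.dataspace.copernicus.eu/odata/v1/Products({pid})/$value"
dl = session.get(url, allow_redirects=False)
while dl.status_code in (301, 302, 303, 307):
    dl = session.get(dl.headers["Location"], allow_redirects=False)
with open(products[0]["Name"] + ".zip", "wb") as f:
    f.write(dl.content)
```

The Process API mentioned above uses a different, Sentinel Hub-style request format and is documented separately on the same site.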
  9. This is the Indonesian-language subforum, and you may notice this topic is five years old.
  10. Any rough guess based on your experience?
  11. Does anyone know the exact price of an original ENVI license? How much in USD?
  12. Added support for new data types:
  • GRUS L1C, L2A: Axelspace micro earth-observation satellite
  • ISIS3: USGS Astrogeology ISIS Cube, Version 3
  • PDS4: NASA Planetary Data System, Version 4

New Spectral Hourglass Workflow and N-Dimensional Visualizer.

New Target Detection Workflow: Use the Target Detection Workflow to locate objects within hyperspectral or multispectral images that match the signatures of in-scene regions. The targets may be a material or mineral of interest, or man-made objects.

New Dynamic Band Selection tool, new Material Identification tool, and an updated and improved Endmember Collection tool.

New and updated ENVI Toolbox tools. The following tools have been updated to use new ENVI Tasks:
  • Adaptive Coherence Estimator Classification: A classification method derived from the Generalized Likelihood Ratio (GLR) approach. ACE is invariant to relative scaling of input spectra and has a Constant False Alarm Rate (CFAR) with respect to such scaling.
  • Constrained Energy Minimization Classification: A classification method that uses a specific constraint; CEM uses a finite impulse response (FIR) filter to pass the desired target while minimizing the output energy resulting from backgrounds other than the desired targets.
  • Classification Smoothing: Removes speckling noise from a classification image. It uses majority analysis to change spurious pixels within a large single class to that class.
  • Forward Minimum Noise Fraction: Performs a minimum noise fraction (MNF) transform to determine the inherent dimensionality of image data, to segregate noise in the data, and to reduce the computational requirements for subsequent processing.
  • Inverse Minimum Noise Fraction: Transforms the bands from a previous Forward Minimum Noise Fraction back to their original data space.
  • Orthogonal Subspace Projection Classification: This classification method first designs an orthogonal subspace projector to eliminate the response of non-targets, then applies a Matched Filter to match the desired target in the data.
  • Parallelepiped Classification: Performs a parallelepiped supervised classification, which uses a simple decision rule to classify multispectral data.
  • Spectral Information Divergence Classification: A spectral classification method that uses a divergence measure to match pixels to reference spectra.

New and updated ENVI Tasks. You can use these new ENVI Tasks to perform data-processing operations in your own ENVI+IDL programs:
  • ConstrainedEnergyMinimization: Performs the Constrained Energy Minimization (CEM) target analysis.
  • InverseMNFTransform: Transforms the bands from a previous Forward Minimum Noise Fraction back to their original data space.
  • MixtureTunedRuleRasterClassification: Applies threshold and infeasibility values and performs classification on a mixture tuned rule raster.
  • MixtureTunedTargetConstrainedInterferenceMinimizedFilter: Performs the Mixture Tuned Target-Constrained Interference-Minimized Filter (MTTCIMF) target analysis.
  • NormalizedEuclideanDistanceClassification: Performs a Normalized Euclidean Distance (NED) supervised classification.
  • OrthogonalSubspaceProjection: Performs the Orthogonal Subspace Projection (OSP) target analysis.
  • ParallelepipedClassification: Performs a parallelepiped supervised classification, which uses a simple decision rule to classify multispectral data.
  • RuleRasterClassification: Creates a classification raster by thresholding each band of the raster.
  • SpectralInformationDivergenceClassification: Performs the Spectral Information Divergence (SID) classification.
  • SpectralSimilarityMapperClassification: Performs a Spectral Similarity Mapper (SSM) supervised classification.
  • TargetConstrainedInterferenceMinimizedFilter: Performs the Target-Constrained Interference-Minimized Filter (TCIMF) target analysis.

Other changes: ENVI performance improvements, NITF updates, the ENVI Crop Science Module merged into ENVI, and enhanced support for ENVI Connect.

You may also check this presentation: https://www.nv5geospatialsoftware.com/Portals/0/pdfs/envi-6.0-idl-9.0-redefining-image-analysis-webinar.pdf
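Since several of the tools above are built around the Spectral Information Divergence measure, here is a minimal Python sketch of the measure itself: each spectrum is normalised into a probability-like vector and compared with a symmetric relative entropy. This illustrates the underlying math only, not ENVI's implementation (the ENVI Tasks above are driven from ENVI+IDL).

```python
import numpy as np

def spectral_information_divergence(x: np.ndarray, y: np.ndarray) -> float:
    """SID between two spectra (1D arrays of band values)."""
    eps = 1e-12
    p = x / (x.sum() + eps) + eps  # normalise spectra into
    q = y / (y.sum() + eps) + eps  # probability-like vectors
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def sid_classify(pixels: np.ndarray, references: np.ndarray) -> np.ndarray:
    """Assign each pixel spectrum (n_pixels x n_bands) to the reference
    spectrum (n_classes x n_bands) with the smallest divergence."""
    scores = np.array([[spectral_information_divergence(px, ref)
                        for ref in references] for px in pixels])
    return scores.argmin(axis=1)
```

A lower SID means the pixel and reference distribute their energy across bands more similarly, which is why it works as a spectral matching score.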
  13. Are you a post-doctoral researcher looking for an exciting opportunity in advanced Earth Observation (EO) for Earth science? ESA is offering a two-year research fellowship in the Directorate of Earth Observation Programmes. The fellowship covers a wide range of innovative topics, from the development and validation of novel methods, algorithms and EO products to innovative Earth system and climate research. The successful candidate will be responsible for undertaking advanced research addressing major observational gaps and scientific priorities in EO and Earth system science. The fellowship is open to all qualified candidates irrespective of gender, sexual orientation, ethnicity, beliefs, age, disability or other characteristics. Applications from women are encouraged. Apply by October 3, 2023. For more information, please visit: http://geospatialsight.com/post-doctoral-research-fellowship-in-advanced-eo-for-earth-science/
  14. No, geomatics and engineering people could do that too; a lot of software already has deep learning functions. For example, QGIS already has a deep learning plugin; you can search for it on Google. There are also many paid software packages that can do that.