    Geemap: A Python package to join forces of Google Earth Engine, ipyleaflet, and ipywidgets

    rahmansunbeam
    By rahmansunbeam,
    geemap is a Python package for interactive mapping with Google Earth Engine (GEE), which is a cloud computing platform with a multi-petabyte catalog of satellite imagery and geospatial datasets. During the past few years, GEE has become very popular in the geospatial community and it has empowered numerous environmental applications at local, regional, and global scales. GEE provides both JavaScript and Python APIs for making computational requests to the Earth Engine servers. Compared with the

    CHC Navigation introduces AT661 GNSS geodetic antenna

    Lurker
    By Lurker,
CHC Navigation has released the AT661 geodetic antenna for GNSS networks and monitoring applications. The AT GNSS antenna series is the result of years of expertise in GNSS technologies. The compact geodetic GNSS antenna offers performance rivaling that of high-cost and bulky conventional GNSS choke-ring antennas, according to CHC Navigation. The AT661 supports all current and future GNSS signals, including GPS, GLONASS, BeiDou, Galileo, QZSS, IRNSS, SBAS and L-band. The antenna features

    We will need to perform GNSS occupations as part of a leveling project after 2022

    Lurker
    By Lurker,
    This column will address why users will be required to perform GNSS occupations when submitting a leveling project to the National Geodetic Survey (NGS) after 2022. It will highlight a section of NGS Blueprint for 2022, Part 3, “Working in the Modernized NSRS,” that discusses the process of performing leveling projects after 2022. My October 2017 column briefly discussed NGS’ preliminary plans for incorporating geodetic leveling data into the North American-Pacific Geopotential Datum of 2022 (NA

    10 New ArcGIS Pro Lesson Activities, Learn Paths, and Migration Reflections

    Lurker
    By Lurker,
A new set of 10 ArcGIS Pro lessons empowers GIS practitioners, instructors, and students with essential skills to find, acquire, format, and analyze public domain spatial data to make decisions. Described in this video, this set was created for three reasons: (1) to provide a set of analytical lessons that can be immediately used, (2) to update the original 10 lessons created by my colleague Jill Clark and me to provide a practical component to our Esri Press book The GIS Guide to Public Domain Dat

    NASADEM: Creating a New NASA Digital Elevation Model and Associated Products

    Lurker
    By Lurker,
Our objective is to provide the scientific and civil communities with a state-of-the-art global digital elevation model (DEM) derived from a combination of Shuttle Radar Topography Mission (SRTM) processing improvements, elevation control, void-filling, and merging with data unavailable at the time of the original SRTM production: NASA SRTM DEMs created with processing improvements at full resolution; NASA's Ice, Cloud, and land Elevation Satellite (ICESat)/Geoscience Laser Altimeter

Portal by DevFuse · Based on IP.Board Portal by IPS
  • Forum Statistics

    8.7k Total Topics
    43.2k Total Posts
  • Latest Posts

    • Hello folks! I am trying to use the Lyzenga algorithm for estimating water depth in shallower areas, probably depths under 8-10 meters, in lakes. First of all, how accurate is this algorithm in practice? Secondly, let's say I have the band values; can someone explain to me how to retrieve those depths? I am following the "Lyzenga Algorithm for Shallow Water Mapping Using Multispectral Sentinel-2 Imageries in Gili Noko Waters" paper, but there are three steps of computing NDWIs and NDCIs and filtering with the sun-glint correction before it finally comes to the depth calculation. It ends up in the following formula: z = 28.32 * X1 - 36.25 * X2 + 9.42 * X3 + 16.35, where X1, X2, X3 are the RGB values respectively. What do you guys think? Can I just apply this formula? If not, what is the purpose of all the previous steps, and will they change the RGB values anyway?
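A minimal sketch of how that regression is usually applied. Note the hedge: in Lyzenga's original formulation the predictors X_i are not raw RGB pixel values but logarithms of glint-corrected reflectances with the deep-water signal subtracted, which is exactly what the paper's earlier NDWI masking and sun-glint correction steps are meant to produce. The reflectance values and deep-water terms below are hypothetical placeholders, and the coefficients are the site-specific ones quoted in the post.

```python
import math

# Coefficients quoted in the post (from the Gili Noko paper). They were
# regressed against known depths at that site, so they are site-specific
# and should not be expected to transfer to another lake unchanged.
A0, A1, A2, A3 = 16.35, 28.32, -36.25, 9.42

def lyzenga_depth(r, g, b, deep_water=(0.0, 0.0, 0.0)):
    """Estimate depth (m) from glint-corrected band reflectances.

    In Lyzenga's method the predictors are X_i = ln(R_i - R_deep_i):
    the log of the water-leaving reflectance after subtracting the
    deep-water signal, not the raw pixel value. That is why the
    preprocessing steps matter: they change the values fed in here.
    """
    x1 = math.log(r - deep_water[0])
    x2 = math.log(g - deep_water[1])
    x3 = math.log(b - deep_water[2])
    return A0 + A1 * x1 + A2 * x2 + A3 * x3

# Hypothetical corrected reflectances for a single shallow-water pixel:
print(lyzenga_depth(0.08, 0.10, 0.05))
```

Whether the result is trustworthy at a different site still depends on re-fitting the four coefficients against local depth soundings.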
    • Understanding GIS Mapping

GIS mapping is a technology and process used to capture, store, analyze, manage, and visualize geographic or spatial data. It combines geographical information, such as locations and terrain features, with various types of data, like environmental, social, economic, and demographic information, to create detailed and layered maps. These maps are powerful tools for understanding and interpreting spatial relationships, patterns, and trends.

Components of GIS Mapping

Key components of GIS mapping include:

1. Hardware. The hardware is the tangible aspect of GIS mapping technology. This includes computers, GPS devices, drones, and other equipment used to collect, process, and analyze geographic data.
2. Software. GIS software provides a platform for creating maps, conducting spatial analyses, and sharing geographic information.
3. Data. Spatial data is the core of GIS mapping. It encompasses information about specific locations, attributes, and relationships. This data can come from various sources, such as satellite imagery, surveys, government databases, or user-generated content.
4. People. Skilled individuals, such as GIS analysts, cartographers, geographers, and geospatial scientists, are essential for using GIS technology effectively. They design, develop, and apply GIS solutions to address specific problems or research questions.

GIS mapping allows users to perform a wide range of spatial analyses, like measuring distances, determining optimal routes, assessing environmental changes, and identifying patterns within data. Therefore, it has a significant impact on humanitarian assistance and disaster preparedness and response. Now, what does this transformative impact look like?

How GIS Mapping Transforms Humanitarian Assistance

It Enhances Disaster Response

When disasters strike (and they usually do), whether they take the form of a natural catastrophe or a man-made crisis, every second counts.
Key decision-makers therefore need adequate data and spatial information to respond proactively. This is where GIS mapping technology shines. Real-time data on the location and extent of a disaster, along with intricate details about affected areas and population distribution, enable aid agencies to make well-informed decisions, coordinate efforts, and manage resources effectively. Crucially, the ability to visualize and analyze information on a map empowers responders to prioritize their actions based on the most pressing needs. This ultimately saves lives.

GIS Technology Helps Map Vulnerable Populations

In humanitarian work, the overarching goal is to help those who are most in need. Humanitarian assistance, therefore, relies heavily on the ability to identify and map 'vulnerable' populations. This is where GIS technologies play a crucial role. GIS mapping provides a powerful tool for identifying vulnerable populations, whether they are refugees fleeing conflict, communities at risk from disease outbreaks, or marginalized groups living in impoverished regions.

By overlaying geographic data with information on poverty rates, access to healthcare, and food security, aid workers can make informed decisions about where and how to allocate resources effectively. This targeted approach ensures that aid reaches the individuals and communities that require it the most.

GIS Mapping Provides Real-time Data

One of the most remarkable features of GIS mapping in humanitarian aid is its ability to provide real-time data, usually in the form of satellite imagery. This capability is particularly crucial in disaster management, where timely and accurate information is of paramount importance. For example, during a hurricane, GIS technology can track the storm's path, predict areas likely to be impacted, and facilitate evacuation planning. It can also assess damage immediately after the event, allowing for a rapid and well-coordinated response. This 'bird's-eye view' of disaster-affected areas equips humanitarian workers with the data needed to make informed decisions and deploy resources efficiently. Additionally, real-time data gives responders the flexibility to manage situations on the go.

GIS Mapping Helps Track and Monitor Epidemics and Disease Outbreaks

GIS mapping plays a pivotal role in monitoring and controlling disease outbreaks. During epidemics such as the Ebola crisis in West Africa, GIS technology tracked the spread of the disease, identified hotspots of infection, and helped health workers isolate cases and trace contacts. These insights were crucial in containment efforts and ultimately contributed to the control of the epidemic. By visualizing the geographic spread of the disease, humanitarian organizations could direct resources to the areas that needed them most, effectively limiting the outbreak's reach.

Enroll in: GIS in Monitoring and Evaluation Course

It Enhances Disaster Risk Reduction and Management

In the field of disaster management, preparedness is often the best form of defense. GIS mapping aids in identifying disaster-prone regions, allowing communities to plan for potential crises. By creating detailed hazard maps, which include flood risk assessments, earthquake-prone areas, and other environmental hazards, this technology helps in developing preparedness plans and mitigating the impact of disasters. The ability to visualize potential risks empowers communities to take proactive measures, such as reinforcing infrastructure, developing evacuation plans, and building resilient shelters.

Enroll in: GIS For WASH Programmes Course

Crowdsourced Mapping

Crowdsourced mapping has proven to be a remarkable revelation for humanitarian aid. It is a collaborative approach to creating and updating maps and geographic information using contributions from the general public. This method relies on the collective efforts of volunteers who provide geographic data, typically using digital tools. Initiatives like OpenStreetMap have harnessed these efforts to contribute data on roads, buildings, and infrastructure in disaster-affected areas. This grassroots approach has been instrumental in improving the accuracy and completeness of maps in areas that were previously unmapped. Crucially, humanitarian organizations can then use this data for response efforts, making it a remarkable example of how technology and global collaboration can save lives. This collective action not only aids in immediate response but also contributes to the resilience of affected communities.

Click HERE to read more.
    • Multimodal machine learning models have been surging in popularity, marking a significant evolution in artificial intelligence (AI) research and development. These models, capable of processing and integrating data from multiple modalities such as text, images, and audio, matter because they can tackle complex real-world problems that traditional unimodal models struggle with. The fusion of diverse data types enables these models to extract richer insights, enhance decision-making, and ultimately drive innovation. Among the burgeoning applications of multimodal machine learning, Visual Question Answering (VQA) models have emerged as particularly noteworthy. VQA models comprehend both an image and an accompanying textual query, providing answers or relevant information based on the content of the visual input. This capability opens up avenues for interactive systems, enabling users to engage with AI in a more intuitive and natural manner.

However, despite their immense potential, deploying VQA models in critical scenarios such as disaster recovery presents unique challenges. In situations where internet connectivity is unreliable or unavailable, these models must run on tiny hardware platforms. Yet the deep neural networks that power VQA models demand substantial computational resources, rendering traditional edge computing hardware solutions impractical. Inspired by optimizations that have enabled powerful unimodal models to run on tinyML hardware, a team led by researchers at the University of Maryland has developed a novel multimodal model called TinyVQA that allows extremely resource-limited hardware to run VQA models. Using some clever techniques, the researchers compressed the model to the point that it could run inferences in a few tens of milliseconds on a common low-power processor found onboard a drone, while maintaining acceptable accuracy.

To achieve this goal, the team first created a deep learning VQA model similar to previously described state-of-the-art algorithms. This model was far too large for tinyML applications, but it contained a wealth of knowledge, so it was used as a teacher for a smaller student model. This practice, called knowledge distillation, captures much of the important associations found in the teacher model and encodes them in a more compact form in the student model. In addition to having fewer layers and fewer parameters, the student model also used 8-bit quantization, which reduces both the memory footprint and the computational resources required when running inferences. Another optimization swapped regular convolution layers for depthwise separable convolution layers, further reducing model size with minimal impact on accuracy.

Having designed and trained TinyVQA, the researchers evaluated it on the FloodNet-VQA dataset, which contains thousands of images of flooded areas captured by a drone after a major storm. Questions were asked about the images to determine how well the model understood the scenes. The teacher model, which weighs in at 479 megabytes, achieved 81 percent accuracy. The much smaller TinyVQA model, only 339 kilobytes in size, achieved a very impressive 79.5 percent accuracy. Despite being over 1,000 times smaller, TinyVQA lost only 1.5 percent accuracy on average, not a bad trade-off at all!

In a practical trial of the system, the model was deployed on the GAP8 microprocessor onboard a Crazyflie 2.0 drone. With inference times averaging 56 milliseconds on this platform, it was demonstrated that TinyVQA could realistically be used to assist first responders in emergency situations. And of course, many other opportunities to build autonomous, intelligent systems could also be enabled by this technology.

source: hackster.io
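The depthwise separable trick mentioned above can be understood with a quick parameter count: a standard convolution needs one k x k x C_in kernel per output channel, while the separable version factors that into a per-channel k x k depthwise step plus a 1x1 pointwise mixing step. The layer shape below is a hypothetical example, not one from the TinyVQA paper.

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in kernel per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k kernel per input channel,
    # then a 1x1 pointwise convolution mixes the channels.
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3x3 kernels, 64 -> 128 channels (biases ignored).
std = conv_params(3, 64, 128)                 # 73,728 weights
sep = depthwise_separable_params(3, 64, 128)  # 8,768 weights
print(std, sep, round(std / sep, 1))          # ~8.4x fewer parameters
```

Stacking savings like this across layers, plus 8-bit quantization (4x smaller than 32-bit floats) and a distilled architecture with fewer layers, is how a 479 MB teacher can shrink toward a few hundred kilobytes.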
  • Recent Achievements

    • zysisal earned a badge
      Dedicated
    • qwerty earned a badge
      Week One Done
    • qwerty earned a badge
      One Month Later
    • qwerty earned a badge
      One Year In
    • Agha earned a badge
      Conversation Starter