All Activity

  1. Last week
  2. Understanding GIS Mapping

GIS mapping is a technology and process used to capture, store, analyze, manage, and visualize geographic or spatial data. It combines geographical information, such as locations and terrain features, with various types of data, like environmental, social, economic, and demographic information, to create detailed and layered maps. These maps are powerful tools for understanding and interpreting spatial relationships, patterns, and trends.

Components of GIS Mapping

Key components of GIS mapping include:

1. Hardware. The hardware is the tangible aspect of GIS mapping technology. This includes computers, GPS devices, drones, and other equipment used to collect, process, and analyze geographic data.
2. Software. GIS software provides a platform for creating maps, conducting spatial analyses, and sharing geographic information.
3. Data. Spatial data is the core of GIS mapping. It encompasses information about specific locations, attributes, and relationships. This data can come from various sources, such as satellite imagery, surveys, government databases, or user-generated content.
4. People. Skilled individuals, such as GIS analysts, cartographers, geographers, and geospatial scientists, are essential for using GIS technology effectively. They design, develop, and apply GIS solutions to address specific problems or research questions.

GIS mapping allows users to perform a wide range of spatial analyses, such as measuring distances, determining optimal routes, assessing environmental changes, and identifying patterns within data. It therefore has a significant impact on humanitarian assistance and disaster preparedness and response. So what does this transformative impact look like?

How GIS Mapping Transforms Humanitarian Assistance

It Enhances Disaster Response

When disasters strike (and they usually do), whether they take the form of a natural catastrophe or a man-made crisis, every second counts. Key decision-makers therefore need adequate data and spatial information to respond proactively. This is where GIS mapping technology shines. Real-time data on the location and extent of a disaster, along with intricate details about affected areas and population distribution, enables aid agencies to make well-informed decisions, coordinate efforts, and manage resources effectively. Crucially, the ability to visualize and analyze information on a map empowers responders to prioritize their actions based on the most pressing needs. This ultimately saves lives.

GIS Technology Helps Map Vulnerable Populations

In humanitarian work, the overarching goal is to help those who are most in need. Humanitarian assistance therefore relies heavily on the ability to identify and map vulnerable populations, and this is where GIS technologies play a crucial role. GIS mapping provides a powerful tool for identifying vulnerable populations, whether they are refugees fleeing conflict, communities at risk from disease outbreaks, or marginalized groups living in impoverished regions. By overlaying geographic data with information on poverty rates, access to healthcare, and food security, aid workers can make informed decisions about where and how to allocate resources effectively. This targeted approach ensures that aid reaches the individuals and communities that require it the most.

GIS Mapping Provides Real-time Data

One of the most remarkable features of GIS mapping in humanitarian aid is its ability to provide real-time data, usually in the form of satellite imagery. This capability is particularly crucial in disaster management, where timely and accurate information is of paramount importance. For example, during a hurricane, GIS technology can track the storm's path, predict areas likely to be impacted, and facilitate evacuation planning. It can also assess damage immediately after the event, allowing for a rapid and well-coordinated response. This 'bird's-eye view' of disaster-affected areas equips humanitarian workers with the data needed to make informed decisions and deploy resources efficiently. Additionally, with real-time data there is flexibility in managing situations on the go.

GIS Mapping Helps Track and Monitor Epidemics and Disease Outbreaks

GIS mapping plays a pivotal role in monitoring and controlling disease outbreaks. During epidemics such as the Ebola crisis in West Africa, GIS technology tracked the spread of the disease, identified hotspots of infection, and helped health workers isolate cases and trace contacts. These insights were crucial in containment efforts and ultimately contributed to the control of the epidemic. By visualizing the geographic spread of the disease, humanitarian organizations could direct resources to the areas that needed them most, effectively limiting the outbreak's reach.

Enroll in: GIS in Monitoring and Evaluation Course

It Enhances Disaster Risk Reduction and Management

In the field of disaster management, preparedness is often the best form of defense. GIS mapping aids in identifying disaster-prone regions, allowing communities to plan for potential crises. By creating detailed hazard maps, which include flood risk assessments, earthquake-prone areas, and other environmental hazards, this technology helps in developing preparedness plans and mitigating the impact of disasters. The ability to visualize potential risks empowers communities to take proactive measures, such as reinforcing infrastructure, developing evacuation plans, and building resilient shelters.

Enroll in: GIS For WASH Programmes Course

Crowdsourced Mapping

Crowdsourced mapping has proven to be remarkably valuable for humanitarian aid. It is a collaborative approach to creating and updating maps and geographic information using contributions from the general public. This method relies on the collective efforts of volunteers who provide geographic data, typically using digital tools. Initiatives like OpenStreetMap have harnessed these efforts to contribute data on roads, buildings, and infrastructure in disaster-affected areas. This grassroots approach has been instrumental in improving the accuracy and completeness of maps in areas that were previously unmapped. Crucially, humanitarian organizations can then use this data in their response efforts, making it a remarkable example of how technology and global collaboration can save lives. This collective action not only aids in immediate response but also contributes to the resilience of affected communities.

Click HERE to read more.
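To make the overlay idea in the post above concrete, here is a minimal sketch, not taken from the article, of how an analyst might combine a hazard layer with a vulnerability layer using open-source tooling. The file names, attribute columns, and ranking criteria are hypothetical placeholders.

```python
# Minimal sketch: join settlement points to flood-hazard polygons, then rank the
# exposed settlements by a poverty indicator so responders can prioritise them.
# All layer names and attribute columns below are hypothetical.
import geopandas as gpd

settlements = gpd.read_file("settlements.geojson")   # points with "name", "population", "poverty_rate"
flood_zones = gpd.read_file("flood_hazard.geojson")  # polygons of modelled flood extent

# Keep only the settlements that fall inside a hazard polygon.
at_risk = gpd.sjoin(settlements, flood_zones, predicate="within")

# Rank by vulnerability so limited aid resources go to the most pressing needs first.
priority = at_risk.sort_values(["poverty_rate", "population"], ascending=False)
print(priority[["name", "population", "poverty_rate"]].head(10))
```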
  3. Earlier
  4. Multimodal machine learning models have been surging in popularity, marking a significant evolution in artificial intelligence (AI) research and development. These models, capable of processing and integrating data from multiple modalities such as text, images, and audio, are of great importance due to their ability to tackle complex real-world problems that traditional unimodal models struggle with. The fusion of diverse data types enables these models to extract richer insights, enhance decision-making processes, and ultimately drive innovation. Among the burgeoning applications of multimodal machine learning, Visual Question Answering (VQA) models have emerged as particularly noteworthy. VQA models possess the capability to comprehend both images and accompanying textual queries, providing answers or relevant information based on the content of the visual input. This capability opens up avenues for interactive systems, enabling users to engage with AI in a more intuitive and natural manner. However, despite their immense potential, the deployment of VQA models, especially in critical scenarios such as disaster recovery efforts, presents unique challenges. In situations where internet connectivity is unreliable or unavailable, deploying these models on tiny hardware platforms becomes essential. Yet the deep neural networks that power VQA models demand substantial computational resources, rendering traditional edge computing hardware solutions impractical. Inspired by optimizations that have enabled powerful unimodal models to run on tinyML hardware, a team led by researchers at the University of Maryland has developed a novel multimodal model called TinyVQA that allows extremely resource-limited hardware to run VQA models. Using some clever techniques, the researchers were able to compress the model to the point that it could run inferences in a few tens of milliseconds on a common low-power processor found onboard a drone. In spite of this substantial compression, the model was able to maintain acceptable levels of accuracy. To achieve this goal, the team first created a deep learning VQA model that is similar to other state of the art algorithms that have been previously described. This model was far too large to use for tinyML applications, but it contained a wealth of knowledge. Accordingly, the model was used as a teacher for a smaller student model. This practice, called knowledge distillation, captures much of the important associations found in the teacher model, and encodes them in a more compact form in the student model. In addition to having fewer layers and fewer parameters, the student model also made use of 8-bit quantization. This reduces both the memory footprint and the amount of computational resources that are required when running inferences. Another optimization involved swapping regular convolution layers out in favor of depthwise separable convolution layers — this further reduced model size while having a minimal impact on accuracy. Having designed and trained TinyVQA, the researchers evaluated it by using the FloodNet-VQA dataset. This dataset contains thousands of images of flooded areas captured by a drone after a major storm. Questions were asked about the images to determine how well the model understood the scenes. The teacher model, which weighs in at 479 megabytes, was found to have an accuracy of 81 percent. The much smaller TinyVQA model, only 339 kilobytes in size, achieved a very impressive 79.5 percent accuracy. 
Despite being over 1,000 times smaller, TinyVQA only lost 1.5 percent accuracy on average — not a bad trade-off at all! In a practical trial of the system, the model was deployed on the GAP8 microprocessor onboard a Crazyflie 2.0 drone. With inference times averaging 56 milliseconds on this platform, it was demonstrated that TinyVQA could realistically be used to assist first responders in emergency situations. And of course, many other opportunities to build autonomous, intelligent systems could also be enabled by this technology. source: hackster.io
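The article names three compression techniques: knowledge distillation, 8-bit quantization, and depthwise separable convolutions. The PyTorch sketch below is not the TinyVQA code; it is a hedged illustration of the first and third ideas (a distillation loss and a depthwise separable block), with quantization left to a framework's post-training tooling. All layer sizes and hyperparameters are illustrative.

```python
# Illustrative only: a depthwise-separable convolution block and a knowledge-
# distillation loss for training a small "student" against a large "teacher".
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv followed by a 1x1 pointwise conv: far fewer parameters
    than a standard convolution with the same input/output channels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return F.relu(self.pointwise(self.depthwise(x)))

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft-target KL divergence (teacher knowledge) with ordinary
    cross-entropy on the ground-truth labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Example usage with random tensors standing in for real VQA answer logits:
student_out = torch.randn(8, 10, requires_grad=True)
teacher_out = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_out, teacher_out, labels)
loss.backward()
```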
  5. A new machine learning system can create height maps of urban environments from a single synthetic aperture radar (SAR) image, potentially accelerating disaster planning and response. Aerospace engineers at the University of the Bundeswehr in Munich claim their SAR2Height framework is the first to provide complete—if not perfect—three-dimensional city maps from a single SAR satellite.

When an earthquake devastates a city, information can be in short supply. With basic services disrupted, it can be difficult to assess how much damage occurred or where the need for humanitarian aid is greatest. Aerial surveys using laser ranging lidar systems provide the gold standard for 3D mapping, but such systems are expensive to buy and operate, even without the added logistical difficulties of a major disaster. Remote sensing is another option, but optical satellite images are next to useless if the area is obscured by clouds or smoke.

Synthetic aperture radar, on the other hand, works day or night, whatever the weather. SAR is an active sensor that uses the reflections of signals beamed from a satellite towards the Earth's surface—the "synthetic aperture" part comes from the radar using the satellite's own motion to mimic a larger antenna, to capture reflected signals with relatively long wavelengths. There are dozens of governmental and commercial SAR satellites orbiting the planet, and many can be tasked to image new locations in a matter of hours. However, SAR imagery is still inherently two-dimensional, and can be even trickier to interpret than photographs. This is partly due to an effect called radar layover, where undamaged buildings appear to be toppling towards the sensor.

"Height is a super complex topic in itself," says Michael Schmitt, a professor at the University of the Bundeswehr. "There are a million definitions of what height is, and turning a satellite image into a meaningful height in a meaningful world geometry is a very complicated endeavor."

Schmitt and his colleague Michael Recla started by sourcing SAR images for 51 cities from the TerraSAR-X satellite, a partnership between the public German Aerospace Center and the private contractor Airbus Defence and Space. The researchers then obtained high-quality height maps for the same cities, mostly generated by lidar surveys but some by planes or drones carrying stereo cameras. The next step was to make a one-to-one, pixel-to-pixel mapping between the height maps and the SAR images on which they could train a deep neural network.

The results were amazing, says Schmitt. "We trained our model purely on TerraSAR-X imagery but out of the box, it works quite well on imagery from other commercial satellites." He says the model, which takes only minutes to run, can predict the height of buildings in SAR images with an accuracy of around three meters—the height of a single story in a typical building. That means the system should be able to spot almost every building across a city that has suffered significant damage.

Pietro Milillo, a professor of geosensing systems engineering at the University of Houston, hopes to use Schmitt and Recla's model in an ongoing NASA-funded project on earthquake recovery. "We can go from a map of building heights to a map of probability of collapse of buildings," he says. Later this month, Milillo intends to validate his application by visiting the site of an earthquake in Morocco last year that killed over 2,900 people.

But the AI model is still far from perfect, warns Schmitt. It struggles to accurately predict the height of skyscrapers and is biased towards North American and European cities. This is because many cities in developing nations did not have regular lidar mapping flights to provide representative training data. The longer the gap between the lidar flight and the SAR images, the more buildings would have been built or replaced, and the less reliable the model's predictions.

Even in richer countries, "we're really dependent on the slow revisit cycles of governments flying lidar missions and making the data publicly available," says Carl Pucci, founder of EO59, a Virginia Beach, Va.-based company specializing in SAR software. "It just sucks. Being able to produce 3D from SAR alone would really be a revolution."

Schmitt says the SAR2Height model now incorporates data from 177 cities and is getting better all the time. "We are very close to reconstructing actual building models from single SAR images," he says. "But you have to keep in mind that our method will never be as accurate as classic stereo or lidar. It will always remain a form of best guess instead of high-precision measurement."

source: ieee
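For readers curious what training on "a one-to-one, pixel-to-pixel mapping between the height maps and the SAR images" can look like in code, here is a deliberately tiny, hypothetical encoder-decoder that regresses a height value per pixel from a single-channel SAR tile. It illustrates the training setup only; it is not the SAR2Height architecture, and the tensors below are random placeholders.

```python
# Illustrative only: a tiny encoder-decoder regressing a per-pixel height map
# from a single-channel SAR tile, trained with an L1 loss against lidar heights.
import torch
import torch.nn as nn

class TinyHeightNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # metres per pixel
        )

    def forward(self, sar):
        return self.decoder(self.encoder(sar))

model = TinyHeightNet()
sar_batch = torch.randn(4, 1, 256, 256)         # placeholder SAR tiles
height_truth = torch.rand(4, 1, 256, 256) * 50  # placeholder lidar heights (m)
loss = nn.functional.l1_loss(model(sar_batch), height_truth)
loss.backward()
```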
  6. Satellite images analyzed by AI are emerging as a new tool for finding unmapped roads that bring environmental destruction to wilderness areas. James Cook University's Distinguished Professor Bill Laurance was co-author of a study analyzing the reliability of an automated approach to large-scale road mapping that uses convolutional neural networks trained on road data from satellite images. He said the Earth is experiencing an unprecedented wave of road building, with some 25 million kilometers of new paved roads expected by mid-century.

"Roughly 90% of all road construction is occurring in developing nations, including many tropical and subtropical regions of exceptional biodiversity. By sharply increasing access to formerly remote natural areas, poorly regulated road development triggers dramatic increases in environmental disruption due to activities such as logging, mining and land clearing," said Professor Laurance.

He said many roads in such regions, both legal and illegal, are unmapped, with road-mapping studies in the Brazilian Amazon, Asia-Pacific and elsewhere regularly finding up to 13 times more road length than reported in government or road databases. "Traditionally, road mapping meant tracing road features by hand, using satellite imagery. This is incredibly slow, making it almost impossible to stay on top of the global road tsunami," said Professor Laurance.

The researchers trained three machine-learning models to automatically map road features from high-resolution satellite imagery covering rural, generally remote and often forested areas of Papua New Guinea, Indonesia and Malaysia. "This study shows the remarkable potential of AI for large-scale tasks like global road-mapping. We're not there yet, but we're making good progress," said Professor Laurance. "Proliferating roads are probably the most important direct threat to tropical forests globally. In a few more years, AI might give us the means to map and monitor roads across the world's most environmentally critical areas."

journal: https://www.mdpi.com/2072-4292/16/5/839
  7. INTRODUCTION

Hi members, welcome to our training on the GIS Mapping and Spatial Analysis in Early Warning Systems Course. This course has been developed to enhance the capacity of government officials, development partners, and stakeholders involved in development planning to mainstream disaster risk reduction into development activities and practices.

DURATION
5 days.

WHO SHOULD ATTEND
This course is intended for members of disaster management teams in government agencies, NGOs, funding agencies, research and non-governmental organizations, and community groups working in emergency response and other development programs.

COURSE OBJECTIVES
Upon completion of the course, participants will be able to:
· Understand operational mechanisms and procedures for the prediction, forecasting, and monitoring of hazards and the response to warnings
· Examine the kinds of tools and products that are available, or could be developed, to integrate information into the forms most useful for decision-making at various levels, and set up appropriate contingency plans or options to guide members of their organization against various hazards over different timescales
· Design end-to-end early warning systems for several types of hazards, including action planning for disaster preparedness, emergency management, and social response concerning early warning
· Develop tools for early warning audits, identify current gaps in existing early warning systems, and put in place enhanced people-centered early warning systems
· Harmonize early warning systems and disaster mitigation for effective disaster risk reduction
· Undertake risk assessment and design multi-hazard, end-to-end early warning systems for disaster risk reduction
· Develop strategies to institutionalize early warning systems into the process cycle of disaster risk reduction and development planning, emergency response, and preparedness activities
· Interpret and translate scientific information products into user-friendly formats, and prepare and communicate tailor-made early warning information products to elicit a response from at-risk communities
· Design and implement community-based early warning systems that are people-centered and that can effectively contribute to the risk management and risk reduction process
· Evaluate and introduce public education and training programs for community-based early warning systems
· Apply emerging new-generation climate prediction technologies for anticipating and managing disaster risks associated with climate change and variability

ACCREDITATION
Upon successful completion of this training, participants will be issued an Indepth Research Institute (IRES) certificate certified by the National Industrial Training Authority (NITA).

TRAINING VENUE
The training will be held at the IRES Training Centre. The course fee covers tuition, training materials, two break refreshments, and lunch. Participants will cater for their own travel expenses, visa application, insurance, and other personal expenses.

Email: [email protected] or [email protected]
Mob: +254 715 077 817 / +250789621067
Register Today: GIS Mapping and Spatial Analysis in Early Warning Systems Course
  8. BAE Systems is celebrating alongside its customers at the Environmental Defense Fund (EDF) following the successful launch of the MethaneSAT satellite from Vandenberg Space Force Base in California today. The satellite will provide the public with reliable scientific data about the sources and scale of methane emissions globally, with the ultimate goal of driving reductions in the near future.

MethaneSAT's primary instrument includes a BAE Systems-built spectrometer that will identify and quantify methane emissions by measuring the narrow part of the infrared spectrum where the gas absorbs light reflected off the Earth. The satellite will monitor emissions from the oil and gas sector, which accounts for about 40% of all human-caused methane emissions, and it will be able to revisit the same sites daily in most instances. MethaneSAT will also fill a gap in existing remote methane monitoring capabilities, offering high-precision emissions mapping over a broad 200km by 200km field of view. This satellite will further complement existing methane-monitoring satellites that focus on either larger scales or detecting point sources.

"MethaneSAT will make a critical difference in helping us better understand and remedy global greenhouse gas emissions," said Dr. Alberto Conti, vice president and general manager of Civil Space for BAE Systems Space & Mission Systems. "MethaneSAT will advance our ability to identify and track emissions from their source, empowering stakeholders and the public with actionable data to enable reductions. We are thankful to our customers at the Environmental Defense Fund for developing this crucial mission, and we look forward to seeing all the change it will bring."

BAE Systems worked alongside scientists from EDF and MethaneSAT, LLC, to design and build the primary instrument. The company also led spacecraft integration, environmental testing, and will provide commissioning services. Once commissioning is complete, EDF will launch a cloud-based platform in partnership with Google to distribute MethaneSAT data publicly and free of charge, ensuring the data will be easily accessible for all.

"MethaneSAT is a unique instrument with demanding specifications," said Peter Vedder, senior director for mission systems at MethaneSAT. "It's designed to see methane emissions that other satellites can't, with unprecedented precision. BAE Systems helped us push the envelope to deliver a powerful new tool for protecting the climate."

MethaneSAT launched on a SpaceX Falcon 9 rocket.
  9. The National Research and Innovation Agency (BRIN) uses remote sensing technology to assess the hazard posed by faults in an effort to mitigate the threat of earthquakes to the public. A researcher from BRIN's Geological Disaster Research Center, Nurani Rahma Hanifa, stated that her team is collaborating with the British Geological Survey (BGS) on this technology, in research that has appeared in a joint scientific publication. "We hope this effort can reduce the fatalities caused by earthquakes with the scientific data we have through remote sensing," she noted in a statement from her office on Thursday.

Meanwhile, Ekbal Hussain, a BGS geologist specializing in multi-hazard and remote sensing, stated that the technology, currently owned by BGS, can use remote sensing to measure, from space, ground movement patterns and the details of fault ruptures after an earthquake has occurred. "Through detailed modeling, this technology can help us to understand that earthquakes release energy, but there is also energy stored in the earth," Hussain stated. Regarding the Lembang Fault, he explained that remote sensing can estimate the hazard of the Lembang Fault by monitoring the energy stored in the fault and how much of it would be released when an earthquake occurs. He expressed hope that the use of remote sensing technology would save lives, considering that earthquake vulnerability is dynamic.

The head of BRIN's Geological Disaster Research Center, Adrin Tohari, stated that the Cianjur earthquake that struck in 2022 deserves deeper study. He noted that the location of the fault has still not been identified, even though the damage it caused is quite extensive. In 2023, BRIN conducted a study to determine the location of the main earthquake. However, the agency could not find the trace of the causative fault due to thick volcanic deposits. "Activity is difficult to detect," Tohari stated. He is optimistic that the implementation of remote sensing technology will improve scientists' ability to understand the potential and risks of the Lembang Fault in the Greater Bandung area.
  10. The European Space Agency (ESA) has greenlit the development of the NanoMagSat constellation, marking a significant advancement in the use of small satellites for scientific missions. NanoMagSat, a flagship mission spearheaded by Open Cosmos together with IPGP (Université Paris Cité, Institut de physique du globe de Paris, CNRS) and CEA-Léti, aims to revolutionise our understanding of Earth's magnetic field and ionospheric environment. As a follow-on to ESA's successful Earth Explorer Swarm mission, NanoMagSat will use a constellation of three 16U satellites equipped with state-of-the-art instruments to monitor magnetic fields and ionospheric phenomena. This mission joins the Scout family, an ESA programme to deliver scientific small satellite missions within a budget of less than €35 million. The decision to proceed with NanoMagSat follows the successful completion of Risk Retirement Activities, including the development of a 3m-long deployable boom and a satellite platform with exceptional magnetic cleanliness, key to ensuring state-of-the-art magnetic accuracy. ESA's Director of Earth Observation Programmes, Simonetta Cheli, said of this news: "We are very pleased to add two new Scouts to our Earth observation mission portfolio. These small science missions perfectly complement our more traditional existing and future Earth Explorer missions, and will bring exciting benefits to Earth."
  11. Introduction

The real estate industry encompasses a broad range of activities related to the buying, selling, renting, and development of properties, including land and residential, commercial, and industrial buildings. It is therefore a significant industry that plays a crucial role in both the economy and society. It comprises key stakeholders such as real estate developers, agents, brokers, investors, property managers, and construction companies, all working together to meet the diverse needs of property owners.

The real estate industry has long relied on location as a critical factor in property evaluation and investment decisions. This is not surprising, considering that location is one of the most critical factors influencing the value of and demand for a property. It is therefore not uncommon within the industry to come across the phrase "location, location, location". But how do real estate planners and developers scout locations? After all, the world is too big a place to visit every prospective site in person. This is where GIS comes in. So what exactly is GIS, and how is it revolutionizing the real estate industry? Find that out in this guide.

Understanding GIS

Geographic Information Systems (GIS) are a powerful technology for capturing, analyzing, storing, and managing geographical or spatial data. They allow users to visualize, interpret, and understand patterns and relationships in a geographic context. GIS integrates various data sources, such as maps, satellite imagery, and aerial photographs, to create intelligent and interactive visualizations. In short, it is a mapping and spatial analysis tool. Because of these mapping and spatial capabilities, and with real estate so closely tied to location analysis, GIS is proving invaluable for real estate. It has unlocked a treasure trove of spatial data and analytical tools that enable real estate professionals to make data-driven decisions and streamline operations.

How GIS is Revolutionizing the Real Estate Industry

1. It Aids Market Analysis with Precision
For the most part, real estate is an investment. Like all investments, an analysis of the market is crucial. Information such as location, demographic analysis, and access to infrastructure is critical in making a buying decision. Crucially, GIS can provide this information. By overlaying multiple layers of spatial data, including property values, demographics, and amenities, stakeholders can gain a comprehensive understanding of market trends and identify emerging opportunities. This spatial analysis allows investors to make informed decisions based on accurate data rather than relying solely on intuition. This information benefits not only property buyers but also developers.

2. GIS Enables Smart Property Search
When you think about property search, an image of moving around built-up areas comes to mind. While this is a familiar part of property searches, it is time-consuming and cumbersome. Luckily, GIS redefines the way search is conducted. The traditional way of searching for properties has been replaced by GIS-powered property search platforms. Potential buyers can now explore properties based on specific location preferences, such as proximity to schools or public transport. Additionally, GIS allows real estate agents to provide interactive and visually engaging property maps, offering potential buyers a better understanding of the neighbourhood and its amenities.

3. Urban Planning and Smart Cities
GIS plays a pivotal role in urban planning and development, promoting the concept of smart cities. By integrating GIS data with urban infrastructure, key stakeholders can optimize land usage, design sustainable neighborhoods, and develop efficient transportation networks. This integration fosters sustainable urban spaces that cater to the needs of residents and businesses alike.

4. GIS Aids Risk Assessment and Mitigation
Understanding and mitigating risks is paramount in real estate investments, and ultimately crucial when making a buying decision. It is therefore important that real estate investors and developers have the requisite tools to help them evaluate their risk appetites. GIS technology allows real estate professionals to assess potential hazards and environmental risks related to a property's location, including flood zones, wildfire-prone areas, and seismic risks. Armed with this knowledge, investors can make better-informed decisions and implement necessary risk mitigation measures.

5. It Enhances Property Valuation
GIS offers a data-centric approach to property valuation, moving beyond basic comparisons of similar properties in the vicinity. By factoring in various spatial data points, such as crime rates and proximity to essential services, real estate professionals can provide more accurate and fair property valuations. This transparency instils confidence in stakeholders, leading to more successful transactions.

6. GIS Promotes Sustainability
Worldwide, sustainability has become a core element of business operations, and real estate is no different. As sustainability becomes a focal point in real estate development, GIS is instrumental in identifying environmentally sensitive areas and optimizing green building initiatives. It can determine the best locations for renewable energy projects and monitor the ecological impact of real estate projects. These insights help key decision-makers implement sustainable practices that benefit both the environment and the industry.

7. It Improves Property Management
GIS streamlines property management operations by providing valuable insights into tenant demographics, maintenance schedules, and occupancy patterns. These insights help real estate managers optimize maintenance routines and identify tenant preferences, leading to increased tenant satisfaction and better asset management.

Key Take-Aways

Geographic Information Systems (GIS) have revolutionized the real estate industry by providing an unprecedented level of spatial intelligence. From market analysis and smart property search to risk assessment and sustainable development, they have become an indispensable tool for real estate professionals looking to make data-driven decisions. As technology continues to evolve, GIS is expected to play an even more significant role in reshaping the future of the real estate sector. Embracing GIS is no longer an option but a necessity for real estate professionals aiming to stay competitive, innovate, and provide exceptional services in the dynamic landscape of the real estate industry.

How to Harness the Power of GIS With IRES

Are you involved in real estate and looking to stay ahead in a rapidly changing industry? Indepth Research Institute (IRES) is committed to empowering real estate practitioners like you with the knowledge and tools necessary to thrive in today's dynamic property landscape. Our comprehensive upskilling programs on GIS are designed specifically to help real estate practitioners harness the power of GIS for effective and efficient property development and management. Don't let the rapidly changing real estate landscape leave you behind. Register and be the best version of yourself!

The source of this document: GIS and Remote Sensing Short Courses
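As a hedged illustration of the "smart property search" idea in point 2 of the post above, the snippet below keeps only listings within 800 m of a school using geopandas. The file names, columns, and the 800 m threshold are hypothetical; a local projected CRS should be preferred over Web Mercator when distance accuracy matters.

```python
# Hypothetical example: filter property listings to those within 800 m of a school.
import geopandas as gpd

properties = gpd.read_file("listings.geojson").to_crs(epsg=3857)  # project to metres (approximate)
schools = gpd.read_file("schools.geojson").to_crs(epsg=3857)

school_zone = schools.buffer(800).unary_union          # union of 800 m catchments
nearby = properties[properties.intersects(school_zone)]
print(f"{len(nearby)} of {len(properties)} listings lie within 800 m of a school")
```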
  12. Leica Geosystems, part of Hexagon, introduces the Leica TerrainMapper-3 airborne LiDAR sensor, featuring new scan pattern configurability to support the widest variety of applications and requirements in a single system.

Building upon Leica Geosystems' legacy of LiDAR efficiency, the TerrainMapper-3 provides three scan patterns for superior productivity and to customise the sensor's performance to specific applications. Circle scan patterns enhance 3D modelling of urban areas or steep terrains, while ellipse scan patterns optimise data capture for more traditional mapping applications. Skew ellipse scan patterns improve point density for infrastructure and corridor mapping applications. The sensor's higher scan speed rate allows customers to fly the aircraft faster while maintaining the highest data quality, and the 60-degree adjustable field of view maximises data collection with fewer flight lines. The TerrainMapper-3 is further complemented by the Leica MFC150 4-band camera, operating with the same 60-degree field of view coverage as the LiDAR for exact data consistency.

Thanks to reduced beam divergence, the TerrainMapper-3 provides improved planimetric accuracy, while new MPiA (Multiple Pulses in Air) handling guarantees more consistent data acquisition, even in steep terrain, providing users with unparalleled reliability and precision. The new system introduces possibilities for real-time full waveform recording at maximum pulse rate, opening new opportunities for advanced and automated point classification. The TerrainMapper-3 seamlessly integrates with the Leica HxMap end-to-end processing workflow, supporting users from mission planning to product generation to extract the greatest value from the collected data.
  13. How Upskilling in GIS Aids Educational Policy Research

Understanding the distribution of student demographics is crucial for making informed decisions. This is where Geographic Information Systems (GIS) and remote sensing technologies play a vital role. By upskilling in GIS, researchers can harness the power of spatial analysis and mapping to gain valuable insights into student populations and educational disparities.

GIS is a powerful tool that allows researchers to visualize, analyze, and interpret data in a spatial context. By integrating demographic data with geographic information, researchers can create detailed maps that highlight patterns and trends in student populations. Mapping student demographics enables policymakers and educators to identify areas with high concentrations of specific demographic groups, such as low-income students, English language learners, or students with disabilities. This information can inform targeted interventions and resource allocation to address educational inequities.

Remote sensing, on the other hand, involves the collection of data from a distance, typically using satellite imagery or aerial photography. This technology provides researchers with a wealth of information about the physical characteristics of an area, such as land cover, vegetation density, and infrastructure. By combining remote sensing data with demographic information, researchers can gain insights into the relationship between the physical environment and educational outcomes. For example, they can examine how proximity to green spaces or access to transportation infrastructure affects student performance and attendance.

Furthermore, GIS and remote sensing can help researchers analyze the spatial distribution of educational resources and facilities. By mapping school locations, transportation routes, and student residences, researchers can identify areas that lack access to quality education or suffer from transportation barriers. This information can guide the development of policies that promote educational equity and improve school planning.

To effectively utilize GIS and remote sensing in educational policy research, upskilling is essential. Researchers should acquire proficiency in GIS software, such as ArcGIS or QGIS, to manipulate and analyze spatial data. They should also learn how to integrate remote sensing data into their analyses, using tools like Google Earth Engine or ENVI. Additionally, understanding spatial statistics and geospatial modeling techniques can enhance the depth and accuracy of research findings.

In conclusion, upskilling in GIS and remote sensing offers significant benefits to educational policy research, particularly in mapping student demographics. By leveraging these technologies, researchers can gain valuable insights into the spatial distribution of student populations, educational disparities, and the impact of the physical environment on educational outcomes. With this information, policymakers and educators can make evidence-based decisions to promote educational equity and improve the quality of education for all students.
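As a small, hypothetical illustration of the school-accessibility analysis described in the post above, the snippet below attaches the straight-line distance to the nearest school to each student residence point. It assumes geopandas 0.10 or newer for sjoin_nearest; the file names, columns, CRS, and the 5 km threshold are placeholders.

```python
# Hypothetical example: distance from each student residence to the nearest school.
import geopandas as gpd

students = gpd.read_file("student_residences.geojson").to_crs(epsg=32637)  # a projected CRS in metres
schools = gpd.read_file("schools.geojson").to_crs(epsg=32637)

# Attach the nearest school and the straight-line distance (metres) to each residence.
joined = gpd.sjoin_nearest(students, schools, how="left", distance_col="dist_to_school_m")

# Flag residences farther than 5 km from any school as potentially under-served.
underserved = joined[joined["dist_to_school_m"] > 5000]
print(f"{len(underserved)} of {len(students)} residences are more than 5 km from a school")
```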
  14. Due to their ability to collect tree phenotypic trait data in large quantities, unmanned aerial vehicles, or UAVs, have completely changed the forestry industry. Even with the progress made in object detection and remote sensing, precise identification and extraction of spectral data for individual trees continue to be major obstacles, frequently necessitating tedious manual annotation. For better tree detection, current research focuses on developing segmentation algorithms and convolutional neural networks; however, the requirement for precise manual labeling prevents these technologies from being widely adopted. This emphasizes how critical it is to create a higher-throughput, more effective technique for automatically extracting spectral information for individual trees.

The open-source tool ExtSpecR, which offers an intuitive interactive web application, is presented in this paper as a means of achieving single-tree spectral extraction in forestry using UAV-based imagery. It optimizes the process of spectral and spatial feature extraction by speeding up the identification and annotation of individual trees. Users can calculate vegetation indices and view outputs as false-color and VI-specific images by uploading TIFF-formatted spectral images through the ExtSpecR user interface. Users upload point cloud data and multispectral images to the interactive dashboard, which then defines the region of interest (ROI) for tree identification and segmentation, enabling the system's core phenotyping capabilities. This procedure produces 3D visualizations of the segmented trees by utilizing the lidR package's "locate_trees" function.

Evaluation of ExtSpecR's performance against ground truth in tree plantations with different canopy densities shows that it can detect individual trees with accuracy ranging from 91% to 97%. A comparison of ExtSpecR's functionality with that of other tools highlights its distinct approach of fusing point cloud data and multispectral imagery with existing algorithms for an optimal user experience and thorough tree analysis. For better outcomes, the recommendations include segmenting point cloud data and defining specific target areas, even though the tool faces difficulties with large input data sizes and complex environments with overlapping canopies. Further improvements, according to the paper, ought to focus on improving point cloud quality and assessing effectiveness using hyperspectral imagery and LiDAR point clouds.

page: GitHub - Yanjie-Li/ExtSpecR: Tree detection, segementation and spectral extraction
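ExtSpecR itself is an R tool built around the lidR package, so the snippet below is only a hedged Python illustration of the vegetation-index step the post mentions: computing NDVI from a multispectral GeoTIFF. The file name and band order are assumptions and will differ between sensors.

```python
# Illustration only: NDVI from the red and near-infrared bands of a multispectral GeoTIFF.
import numpy as np
import rasterio

with rasterio.open("uav_multispectral.tif") as src:
    red = src.read(3).astype("float32")   # assumed band order: 3 = red
    nir = src.read(4).astype("float32")   # assumed band order: 4 = near-infrared

ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)   # clip denominator to avoid divide-by-zero
print("mean NDVI over the scene:", float(np.nanmean(ndvi)))
```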
  15. The 9.0 release adds several new features, including a Google Maps source (finally!), improved WebGL line rendering, and a new symbol and text decluttering implementation. We also improved and broadened flat styles support for both WebGL and Canvas 2D renderers. For a better developer experience, we made more types generic and fixed some issues with types.

Backwards incompatible changes

Improved render order of decluttered items
Decluttered items in Vector and VectorTile layers now maintain the render order of the layers and within a layer. They no longer get lifted to a higher place in the stack. For most use cases, this is the desired behavior. If, however, you've been relying on the previous behavior, you now have to create separate layers above the layer stack, with just the styles for the declutter items.

Removal of Map#flushDeclutterItems()
It is no longer necessary to call this function to put layers above decluttered symbols and text, because decluttering no longer lifts elements above the layer stack. To upgrade, simply remove the code where you use the flushDeclutterItems() method.

Changes in ol/style
Removed the ol/style/RegularShape's radius1 property. Use radius for regular polygons, or radius and radius2 for stars. Removed the shape-radius1 property from ol/style/flat~FlatShape. Use shape-radius instead.

GeometryCollection constructor
ol/geom/GeometryCollection can no longer be created without providing a Geometry array. Empty arrays are still valid.

ol/interaction/Draw
The finishDrawing() method now returns the drawn feature, or null if no drawing could be finished. Previously it returned undefined.

page: https://github.com/openlayers/openlayers/releases/tag/v9.0.0
  16. The Association for Geographic Information (AGI) and the Government Geography Profession (GGP) have agreed to work together to combine their experience, expertise and outreach to further the impact of geospatial data and technology within the public sector. By working together, they will help grow the geospatial community, and will build on recent activities such as the AGI's Skills Roundtable.

"The UK is at the forefront of geospatial. Now more than ever, geographers are combining increasing quantities of geospatial information with advances in technology, such as AI and ML, to drive new insights on our place in the world," commented David Wood, Head of the Government Geography Profession. "The profession is leading the way in government and the public sector, recognising and encouraging the use of geography and geographical sciences within and across government. By working with the AGI, we can increase awareness and therefore engagement with geographers across government and align our ambitions and activities with the wider geospatial community."

"Many of government's greatest challenges are time and place related, and therefore the data and technology that will help address and resolve them must also have location at its heart," added Adam Burke, Past Chair of the Association for Geographic Information. "By partnering with GGP, we can help ensure the geospatial ecosystem continues to grow sustainably, both within government and beyond, and is utilised across diverse industry sectors and across multiple applications to impact positive outputs."

AGI is the UK's geospatial membership organisation, leading, connecting and developing a community of members who use and benefit from geographic information. An independent and impartial organisation, the AGI works with members and the wider community alongside government policy makers, delivers professional development and provides a lead for best practice across the industry. Its mission is to nurture, create and support a thriving community, actively supporting a sustainable future, and it aims to achieve this by nurturing and connecting active GI communities, supporting career and skills development and providing thought leadership to inspire future generations.

The GGP, established in 2018, is made up of around 1,500 professional geographers in roles across the public sector. The profession is working 'to create and grow a high-profile, proud and effective geography profession that attracts fresh talent and has a secure place at the heart of decision making'. This is being achieved by creating the environment for geographers to have maximum impact, professionalising and progressing the use and applications of geography, and growing a diverse and inclusive community within government and the wider public sector.

page: https://www.directionsmag.com/pressrelease/12860
  17. As technology advances and AI becomes more sophisticated, there is a growing concern that GIS analysts might be replaced by AI algorithms. What are your thoughts on this potential shift? Will AI be able to match the expertise and intuition of human analysts in the field of Geographic Information Science?
  18. Copernicus Open Access Hub is closing at the end of October 2023. Copernicus Sentinel data are now fully available in the Copernicus Data Space Ecosystem.

As previously announced in January, the Copernicus Open Access Hub service continued its full operations until the end of June 2023, followed by a gradual ramp-down phase until September 2023. The Copernicus Open Access Hub has been exceptionally extended for another month and will cease operations at the end of October 2023. To continue accessing Copernicus Sentinel data, users will need to self-register on the new Copernicus Data Space Ecosystem. A guide for migration is available here. The new service offers access to a wide range of Earth observation data and services, as well as new tools, GUI and APIs to help users explore and analyse satellite imagery. Discover more about the Copernicus Data Space Ecosystem at https://dataspace.copernicus.eu .

A system of platforms to access EO data
The Copernicus Data Space Ecosystem will be the main distribution platform for data from the EU Copernicus missions. Instant access to full and always up-to-date Earth observation data archives is supported by a new, more intuitive browser interface, the Copernicus Browser. Since 2015, the Copernicus Open Access Hub has supported direct download of Sentinel satellite data for a wide range of operational applications by hundreds of thousands of users. However, technology has moved on, and the Copernicus Data Space Ecosystem was recently launched as a new system of platforms for accessing Sentinel data. As part of this process, the current access point will be gradually wound down from July 2023 and will no longer operate from the end of October 2023. This post demonstrates how to migrate your workflow from accessing data through the Copernicus Open Access Hub to using APIs via the Copernicus Data Space Ecosystem. In this post, we will show you how to:
· set up your credentials
· use OData to search the Catalogue and download Sentinel-2 L2A granules in .SAFE format
· search, discover and download gridded Sentinel-2 L2A data using the Process API

Increase in data quality, quantity and accessibility
With the abundance of free and open data in recent years, shorter revisit times, and higher spatial and temporal resolutions, applications using Earth observation data have blossomed. For example, before Sentinel-2, you would likely have used Landsat 8 data for land cover mapping, with a revisit time of 16 days at 30 m spatial resolution. In 2023, we have access to Sentinel-2 with a revisit time of 3-5 days at 10 m resolution, enabling you not just to map land cover but to monitor changes at higher spatial and temporal resolutions. While it was feasible to download, process and analyse individual acquisitions in the past, this approach is no longer effective today, and it makes more sense to process data in the cloud. This is where the new APIs provided by the Copernicus Data Space Ecosystem come in.

official page: https://dataspace.copernicus.eu/
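A hedged sketch of the OData catalogue search described in the post above is shown below. The endpoint and filter syntax follow the Copernicus Data Space Ecosystem documentation at the time of writing and should be checked against the current docs; the product type, date range, and result limit are placeholders, and authentication is only needed for downloads, not for catalogue search.

```python
# Hedged example: search the Copernicus Data Space Ecosystem OData catalogue
# for Sentinel-2 L2A products acquired in June 2023 (placeholder dates).
import requests

ODATA_URL = "https://catalogue.dataspace.copernicus.eu/odata/v1/Products"
query = (
    "Collection/Name eq 'SENTINEL-2' "
    "and contains(Name,'L2A') "
    "and ContentDate/Start gt 2023-06-01T00:00:00.000Z "
    "and ContentDate/Start lt 2023-06-30T00:00:00.000Z"
)

resp = requests.get(ODATA_URL, params={"$filter": query, "$top": "5"}, timeout=60)
resp.raise_for_status()
for product in resp.json().get("value", []):
    print(product["Name"], product["Id"])
```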
  19. This is the Indonesian-language sub-forum, and you may notice this topic is 5 years old.
  20. Can you please tell me what kind of bands you're making, what materials you're using, and what specific colors you're trying to achieve? With more information, I can provide more specific and helpful instructions. In the meantime, here are a few general tips for making natural-colored bands: Use natural materials like wool, hemp, or cotton. Dye the materials with natural dyes like plants, berries, or minerals. Use different shades of the same color to create a more natural look. Blend different colors together to create a more subtle effect. I hope this helps!
  21. any rough guess based on your experience?
  22. Interestingly, ENVI has been wildly successful for many people. After its release in 1994, it was acquired by ITT Corp in 2011, then handed to Exelis Inc, then Harris Corp, then L3 Technologies as L3Harris, and finally NV5 Global. The core is IDL, which has remained unchanged, but everything around it has been refined and fine-tuned along the way. This depends on the vendor and license type.
  23. Does anyone know the exact price for an original ENVI license? How much in USD?
  24. Interesting update! I see they introduced IDL for VSCode as well, also an 'IDL Notebook'. I will give these new tools some time to mature before a proper test drive.
  25. Medicine of Envi 6 please!!!! 😇
  26. Added support for data types:
- GRUS L1C, L2A - Axelspace micro Earth observation satellite
- ISIS3 - USGS Astrogeology ISIS Cube, Version 3
- PDS4 - NASA Planetary Data System, Version 4

New Spectral Hourglass Workflow and N-Dimensional Visualizer

New Target Detection Workflow
The Target Detection Workflow has been added to this release. Use the Target Detection Workflow to locate objects within hyperspectral or multispectral images that match the signatures of in-scene regions. The targets may be a material or mineral of interest, or man-made objects.

New Dynamic Band Selection tool

New Material Identification tool

Updated and improved Endmember Collection tool

New and updated ENVI Toolbox tools
The following tools have been updated to use new ENVI Tasks:
- Adaptive Coherence Estimator Classification: A classification method derived from the Generalized Likelihood Ratio (GLR) approach. The ACE is invariant to relative scaling of input spectra and has a Constant False Alarm Rate (CFAR) with respect to such scaling.
- Constrained Energy Minimization Classification: A classification method that uses a specific constraint; CEM uses a finite impulse response (FIR) filter to pass through the desired target while minimizing its output energy resulting from a background other than the desired targets.
- Classification Smoothing: Removes speckling noise from a classification image. It uses majority analysis to change spurious pixels within a large single class to that class.
- Forward Minimum Noise Fraction: Performs a minimum noise fraction (MNF) transform to determine the inherent dimensionality of image data, to segregate noise in the data, and to reduce the computational requirements for subsequent processing.
- Inverse Minimum Noise Fraction: Transforms the bands from a previous Forward Minimum Noise Fraction to their original data space.
- Orthogonal Subspace Projection Classification: This classification method first designs an orthogonal subspace projector to eliminate the response of non-targets, then Matched Filter is applied to match the desired target from the data.
- Parallelepiped Classification: Performs a parallelepiped supervised classification, which uses a simple decision rule to classify multispectral data.
- Spectral Information Divergence Classification: A spectral classification method that uses a divergence measure to match pixels to reference spectra.

New and updated ENVI Tasks
You can use these new ENVI Tasks to perform data-processing operations in your own ENVI+IDL programs:
- ConstrainedEnergyMinimization: Performs the Constrained Energy Minimization (CEM) target analysis.
- InverseMNFTransform: Transforms the bands from a previous Forward Minimum Noise Fraction to their original data space.
- MixtureTunedRuleRasterClassification: Applies threshold and infeasibility values and performs classification on a mixture tuned rule raster.
- MixtureTunedTargetConstrainedInterferenceMinimizedFilter: Performs the Mixture Tuned Target-Constrained Interference-Minimized Filter (MTTCIMF) target analysis.
- NormalizedEuclideanDistanceClassification: Performs a Normalized Euclidean Distance (NED) supervised classification.
- OrthogonalSubspaceProjection: Performs the Orthogonal Subspace Projection (OSP) target analysis.
- ParallelepipedClassification: Performs a parallelepiped supervised classification, which uses a simple decision rule to classify multispectral data.
- RuleRasterClassification: Creates a classification raster by thresholding on each band of the raster.
- SpectralInformationDivergenceClassification: Performs the Spectral Information Divergence (SID) classification.
- SpectralSimilarityMapperClassification: Performs a Spectral Similarity Mapper (SSM) supervised classification.
- TargetConstrainedInterferenceMinimizedFilter: Performs the Target-Constrained Interference-Minimized Filter (TCIMF) target analysis.

ENVI performance improvements
NITF updates
Merged ENVI Crop Science Module into ENVI
Enhanced support for ENVI Connect

Also, you may check this presentation: https://www.nv5geospatialsoftware.com/Portals/0/pdfs/envi-6.0-idl-9.0-redefining-image-analysis-webinar.pdf