Showing results for tags 'Remote Sensing'.
Found 37 results

  1. Hello members of GIS AREA! Drones, also called UAVs or UASs, are becoming increasingly popular for many applications. I am interested in the field of drone applications for remote sensing, specifically aerial mapping, to produce orthophotos, Digital Terrain Models (DTMs) and other products that are very useful for research or private business. I would like to learn more about how to build a personal drone from scratch, how to select the best platform for aerial mapping, how to fly the drone, and how to process the datasets obtained from it. I know that some people on this forum are using drones, and it would be great to share knowledge and practical information with others interested in UAVs. Let's see what we can do. Is anyone interested in participating? My best regards, Arhanghelul
  2. This is a very interesting piece by Ned Horning, Center for Biodiversity and Conservation at the American Museum of Natural History. Take a look:

***************************************************************************************************************************************

There is no doubt that satellite remote sensing has greatly improved our ability to conduct land cover mapping. There are limitations, however, and these are often understated or overlooked in the professional literature. The following statements are based on some common misconceptions related to using remote sensing for land cover mapping. The list is based on the author's first-hand experience, remote sensing literature, and presentations on remote sensing over the last 20 years. It is meant primarily to alert new and inexperienced users of satellite image products to some of their limitations.

Myth and Misconception #1: Satellites measure the reflectance of features on the ground.

A satellite sensor measures radiance at the sensor itself, not surface reflectance from the target. In other words, the sensor is only measuring the intensity of light when it hits the detector surface. The units for this measurement are typically watts per square meter per steradian (W m-2 sr-1). Surface reflectance is the ratio of the intensity of light reflected from a surface to the intensity of incident light. To measure reflectance we need to know the intensity of the light just before it hits the surface target and just as it is reflected from the target. Unfortunately, the orientation of the surface feature (usually due to slope and aspect) and atmospheric scattering and absorption complicate our ability to accurately measure surface reflectance (Figure 1). More information about these effects is presented in Myth and Misconception #2. It is important to understand this common misconception if one is to grasp one of the fundamental limitations of remotely sensed data.
If satellite instruments could indeed measure surface reflectance, our job would be a lot easier, and our mapped land cover products would be much more accurate.

Myth and Misconception #2: Land cover classification is a simple process that involves grouping an image's pixels based on the reflectance properties of the land cover feature being classified.

Land cover mapping and, in fact, remote sensing interpretation in general are based on the assumption that features on the Earth have unique spectral signatures. The job of an image analyst is to use specialized software to group image pixels into appropriate land cover categories. These categories are based on the pixel values that we often assume to be directly related to the feature's reflectance. For example, all of the pixels that have values similar to pixels that we know are water would be classified as water, all pixels that have values similar to pixels that we know are forest would be classified as forest, and so on, until the entire image is classified into one land cover type or another. This sounds pretty straightforward, but in practice it can be very difficult, because we cannot easily determine the actual reflectance of a surface feature on the ground from a satellite image. One way to understand this problem is to contrast satellite remote sensing with reflectance measurements made in a laboratory. Important factors in determining reflectance are: the intensity of the incoming radiation (i.e., the intensity of the light energy as it hits the target), the intensity of the reflected radiation (i.e., the intensity of the light energy just after it leaves the target), and the orientation of the light source and detector relative to the target. In a laboratory setting it is relatively easy to determine the reflectance properties of a material because one can easily measure the intensity of the light energy when it hits an object (Figure 2).
The light path is controlled, so not much energy is lost between the light source and the target or between the target and the detector. Also, the illumination and target orientation are known with high accuracy and precision. In the world of satellite remote sensing, the situation is very different. We know the intensity of the light before it enters the Earth's atmosphere, but as it passes through the atmosphere it interacts with particulates (e.g., water vapor, dust, smoke) that significantly alter the signal both before and after it interacts with the target. A good deal of progress has been made in removing these atmospheric effects from an image, but we are still unable to easily and consistently remove them. As far as illumination and detector orientation are concerned, we can easily calculate the position of the sun and the satellite when the image was acquired. It is much more difficult, however, to know the orientation of the target (its slope and aspect). We can use digital elevation models to estimate these parameters, but this usually provides only a rough estimate of the target orientation. Consider, for example, an image with the same vegetation found in two locations, one in the shadow of a mountain and the other oriented so that the maximum amount of energy is reflected to the sensor (Figure 3). Under these circumstances, it can be very difficult to process the image in such a way that the two locations would have the same reflectance values. The bottom line is that similar land cover features can appear very different on a satellite image, and there can be a good deal of confusion between land cover classes that do not have drastically different reflectance signatures, such as different types of forests. This concept is discussed further in Myth and Misconception #5 below.
Myth and Misconception #3: Spectral resolution is the number of bands (image layers) available from a particular sensor.

Spectral resolution refers to the wavelength range that is measured by a particular image channel (Figure 4). For example, channel 1 of the ETM+ sensor on board the Landsat 7 satellite detects wavelengths from 0.45 µm to 0.52 µm. The pan band on the ETM+ sensor detects wavelengths from 0.50 µm to 0.90 µm. Therefore, one could say that on the ETM+ sensor channel 1 has a finer spectral resolution than the pan band. Although sensors with many image bands (called hyperspectral sensors) usually have a fine spectral resolution, the number of channels on a sensor is not necessarily linked to spectral resolution. To see how band widths for different sensors compare, go to the spectral curve interactive tool.

Myth and Misconception #4: Measuring the differences between two land cover maps created from imagery acquired at different times will provide the most reliable estimate of land cover change.

A common way to calculate land cover change is to compare the differences between two land cover maps that have been created with remotely sensed images from different dates. This is often called post-classification change detection. Although this method seems logical and is commonly used, it is rarely the most appropriate method for determining land cover change over time. The problem is that there are errors associated with each of the two land cover maps, and when these are overlaid, the errors are cumulative. As a result, the error of the land cover change map is significantly worse than that of either of the input land cover maps. This concept is presented in the land cover change methods guide, which includes examples of more accurate ways to determine changes in land cover over time. One way to illustrate these errors is to have an image analyst classify an image and then, a few days later, have the same analyst classify the same image again (Figure 5).
Even a skilled analyst will produce different results. If you overlay these two results you will see perceived areas of change even though the same image was used for both dates. In other words, overlaying the two images produces a map illustrating the errors associated with the post-classification approach. Using images from the same area that were acquired on different dates or with a different sensor would likely increase the differences in the land cover classification even more.

Myth and Misconception #5: Landsat TM imagery is suitable to consistently classify natural vegetation genus and species information with a high level of accuracy.

This is a frequently debated topic in the remote sensing community, but the fact is that the more detailed you want your thematic classification to be, the less accurate your results will be for individual classes. In other words, if you create a land cover map with just forest and non-forest classes, the accuracy of each of those classes will be higher than if you try to break the forest and non-forest classes into several other classes. This is an important concept to grasp when you are developing a classification system for a land cover map. Ross Nelson, from the NASA Goddard Space Flight Center, has developed a rule of thumb based on careful examination of the accuracy of several studies published in the remote sensing literature. Of course exceptions can be found, but these are very useful guidelines. The underlying concept is that the more precise the class definitions are, the lower the accuracy will be for the individual classes. Classification accuracy:
  • Forest/non-forest, water/no water, soil/vegetated: accuracies in the high 90s (%)
  • Conifer/hardwood: 80-90%
  • Genus: 60-70%
  • Species: 40-60%
Note: if a Digital Elevation Model (DEM) is included in the classification, add 10%.

Myth and Misconception #6: A satellite with a 30 meter resolution will allow you to identify 30 m features.
In remote sensing, spatial resolution is often defined as the size of an image pixel as it is projected on the ground. For example, Landsat ETM+ imagery is often processed to have a 30 meter pixel size. This means that a Landsat ETM+ pixel represents a square area of 30 m x 30 m on the ground. Spatial resolution, however, is only one factor that allows us to identify features on an image. Other factors must be considered in order to understand which features can be identified on imagery of a particular resolution. These include: the contrast between adjacent features, the heterogeneity of the landscape, and the level of information sought. The contrast between a feature we would like to detect and the background cover type greatly influences the size of a feature we can detect. For example, if a 2-meter-wide trail is cut through a dense forest, there is a good chance that we would be able to detect it with a 30-meter pixel because of the stark contrast between the bright soil and the dark forest (Figure 6). On the other hand, if there were a 1-hectare patch covered with small trees and shrubs, it might look too similar to the background forest for us to differentiate it from the more mature forest. The heterogeneity of the landscape can also influence how well we can detect a feature. If the landscape is very heterogeneous due to terrain or land cover variations, it is more difficult to detect a feature than if the background is very homogeneous (Figure 7). Another issue related to the ability to resolve features on a landscape is the level of information that we can interpret in a given situation. There are different levels of information that we can gather about features on an image: detection, identification, and contextual information. At the most basic level, we are able to detect that a feature is present that is different from its surrounding environment. The next level is to be able to identify that object.
The final level is to be able to derive contextual information about a feature. These different levels of information can be illustrated with an example. If we see a few bright pixels in an image that is largely forested, we can say that we have detected something in the forest that doesn't appear to be the same as the surrounding forest. If those bright pixels are connected in a line, we might come to the conclusion that this is a road. If we see that the bright line through the forest runs from one town to another, we could conclude that this is a main road between these two towns. As the level of information we want to extract from an image increases, so does the number of pixels necessary to reach that level of information. In our example, we could have detected that something was different from the forest by seeing just a few pixels (fewer than 10). To conclude that it was a road we would need more pixels (roughly 20-40). To understand the context of the road we would need hundreds of pixels. In general, therefore, the more information one wants to glean from an image, the greater the number of pixels necessary to reach that level. The degree of the increase varies depending on the feature being monitored and the type of imagery used.

Myth and Misconception #7: Using ortho-rectified images is always a better choice than using non-ortho-rectified images.

There are three common geometric corrections that are applied to satellite images in an attempt to correct for geometric errors inherent in remotely sensed imagery. The simplest are systematic corrections, in which distortions caused by the sensor and by the movement of the satellite are removed. An intermediate process is often called geo-referencing.
This involves taking a systematically corrected image and then improving the geometric quality using ground control points (known locations that can be matched between the image and the ground) to increase the accuracy of the image's coordinate system. The most advanced process is called ortho-rectification, which corrects for distortions caused by terrain relief in addition to the systematic corrections. In areas with level terrain, a geo-corrected image is effectively the same as an ortho-corrected image. An ortho-rectified image is effectively an image map and can be used as a map base. In order to create an ortho-rectified image, it is necessary to use a digital elevation model (DEM) so the elevation of each pixel can be determined. For many regions of the world, DEMs with sufficient detail to ortho-rectify moderate- to high-resolution satellite imagery are not available to the general public. As a mapping product, an ortho-rectified image is superior to an image that has not been ortho-rectified because it has the best absolute accuracy. A problem can occur, however, when one wants to compare an ortho-rectified image to a non-ortho-rectified image. When comparing images, as one would do when determining changes in land cover over time, the relative positional accuracy between images (how well the images are aligned) is more important than the absolute accuracy (how well the image matches a map base). Trying to match a geo-corrected or systematically corrected image to an ortho-corrected one can be a very difficult and often impractical task because of the complex distortions in an ortho-corrected image. When trying to achieve high relative accuracy between two or more satellite images, it is often easiest to geo-reference systematically corrected images using one of the images as the reference image.
In other words, if you had three systematically corrected images to superimpose, you would select one of them to be the reference and then, using standard remote sensing software, geo-reference the other two images to the reference image. Depending on the accuracy of the ortho-corrected images, it may even be impractical to try to align two or more ortho-corrected images. If different methods were used to create the ortho-corrected images, the distortions between the two images could be quite different, making the alignment process very difficult. The best scenario is to have all of the images ortho-corrected using the same process so that they have high absolute and relative accuracy, but that is often not practical when working with limited resources. The bottom line is that ortho-corrected images can have higher absolute accuracy, but when relative accuracy is needed it may be better to use only systematically corrected images rather than mix systematically corrected images with ortho-corrected imagery. There is a tremendous archive of ortho-rectified Landsat TM images available for free from the University of Maryland's Global Land Cover Facility (GLCF), but care must be taken when using these data with other imagery.

Myth and Misconception #8: Square pixels in an image accurately represent a square area on the Earth's surface.

When we zoom in on a satellite image on the computer, we can clearly see square pixels, and it is easy to think that this is an accurate representation of what the sensor sees as the image is recorded. This isn't the case, however, because the sensor element detecting energy for an individual pixel actually views a circle or ellipse on the ground. In addition, 50% or more of the energy recorded for a given pixel can come from the surface area surrounding that pixel (Figure 8). This tends to produce fuzzy borders between objects rather than the crisp borders you might expect.
To visualize this effect, picture using the beam from a flashlight to represent what is seen by an individual sensor element (the part of a sensor that records the energy for an individual pixel). As the beam leaves the flashlight, it spreads out, so that by the time it hits a target (a wall, for instance) the end of the beam is larger than the reflector inside the flashlight. If we tried to illuminate a square picture hanging on a wall, much of the light from the flashlight would actually fall outside of the picture. If the flashlight were recording light energy (rather than emitting it), it would record energy from both the picture and the wall. It is also interesting to note that the beam is brighter toward the center. Again, if we think of the flashlight as a sensor, it would record more energy toward the center of the field of view than toward the edges. This concept is illustrated in the "What a sensor sees" interactive tool. The blurring effect of recording energy from adjacent pixels is amplified if the atmosphere is hazy over the feature being sensed. This blurring is due to particulates in the air that cause the photons (the light) to be bounced around instead of traveling in a straight line. This bouncing of photons causes the sensor to record energy from an object that is not in the line of sight of an individual sensor element. This is the same effect that causes objects to appear a bit fuzzy when you look at them on a hazy or foggy day.

**********************************************************************************************************************************************

Source with images: http://biodiversityinformatics.amnh.org/content.php?content_id=121
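The cumulative-error point in Myth #4 can be made concrete with a back-of-the-envelope sketch. Under the simplifying assumption (mine, not the article's) that classification errors in the two maps are independent, a change-map pixel is labeled correctly only when the pixel is classified correctly in both input maps:

```python
# Rough illustration of Myth #4's cumulative error, assuming the two maps'
# classification errors are independent (a simplification for intuition only).
def change_map_accuracy(acc_map1, acc_map2):
    # A change-map pixel is correct only if BOTH input maps classified
    # that pixel correctly.
    return acc_map1 * acc_map2

# Two maps that are each 85% accurate yield, at best, a change map that
# is right about 72% of the time.
print(change_map_accuracy(0.85, 0.85))  # 0.7225
```

In practice errors are often spatially correlated (e.g., both maps confuse the same forest types), so the real figure can differ, but the direction of the effect is the same: the change map is less accurate than either input.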
  3. Hi all, I'd like to share a tool I created to automate the calculation of spectral indices for remotely sensed imagery. Currently, 22 spectral indices and the following sensors are supported: Landsat 1-5 MSS, Landsat 4-5 TM, Landsat 7 ETM+, Landsat 8 OLI, Worldview-2, and MODIS Terra and Aqua. Just run the Python script (arcpy or GDAL) and use the GUI to select your sensor, the indices to calculate, the stacked image, and the output directory. I haven't shared the tool before, so it has not been extensively tested. My personal tests have worked fine, but be sure to validate your outputs just in case! Please let me know if you run into any issues or bugs; I'm happy to address any questions. More detailed information and download here: Remote Sensing Indices Derivation Tool. Happy remote sensing, folks! Ryan
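The core idea behind a multi-sensor index tool like the one above can be sketched in a few lines: one index formula, with per-sensor band numbers looked up from a mapping. This is my own sketch, not the tool's actual code; the band assignments below are the standard red/NIR band numbers for these sensors, but verify them before use.

```python
import numpy as np

# Sketch: one NDVI formula serving several sensors via a band-number lookup.
# Band numbers are the conventional red/NIR assignments (verify per sensor).
BANDS = {
    "Landsat 4-5 TM": {"red": 3, "nir": 4},
    "Landsat 7 ETM+": {"red": 3, "nir": 4},
    "Landsat 8 OLI":  {"red": 4, "nir": 5},
}

def ndvi(stack, sensor):
    """stack: dict mapping band number -> numpy array for one scene."""
    b = BANDS[sensor]
    red = stack[b["red"]].astype(float)
    nir = stack[b["nir"]].astype(float)
    return (nir - red) / (nir + red)

# Toy single-pixel "scene" with reflectance values:
scene = {4: np.array([0.05]), 5: np.array([0.40])}
print(ndvi(scene, "Landsat 8 OLI"))  # approx. [0.7778]
```

The same lookup pattern extends to any index (NDWI, SAVI, etc.): each formula only names roles like "red" and "nir", and the sensor table resolves them to actual band numbers.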
  4. Dear all, I want to use the supervised image classification method in ENVI 5.3, and I want to link the image in ENVI with Google Earth for accurate training-data selection. I can't find this option in ENVI. In ERDAS Imagine it is really easy to link an image with GE, but in ENVI I can't find it!
  5. Hi all, I hold a doctorate in Remote Sensing and GIS. I have created an online tutorial series on Geospatial Technologies, concentrating primarily on practical aspects for both beginners and advanced learners. I am sure it will be useful for the members. Please have a look at the following link: https://www.youtube.com/channel/UCK-8Ky7ZiohkOrHpe6EM1Lw The tutorials are totally free of cost and created purely out of a passion to help students and professionals in the field of Geospatial Technologies.
  6. Dear all, I'm new to remote sensing and I have problems when trying to extract built-up area from an image during image classification. I am using Landsat 7 ETM+ and TM images; when I use supervised or unsupervised classification methods, urban (built-up) areas are not distinguishable from bare soil. I have read several papers about built-up indices and tried some of them, but there is no tangible improvement in the results. Please guide me through a step-by-step process. Regards
  7. Hello, I'm having a hard time finding the difference between spectral unmixing methods and radiative transfer models. Are they both used to retrieve canopy biophysical parameters? Are they both used with airborne imagery?
  8. Hi all, does anyone know of a software package (or alternatively a website) where you can search multiple commercial satellite imagery catalogs at once? For example, if I need a satellite image of an area for a particular date, how can I search Airbus, DigitalGlobe, Deimos, KARI, and other commercial imagery providers in a single step? Thanks
  9. Hello, I'm currently working with the MODIS product MOD13Q1 (Vegetation Indices 16-Day L3 Global 250m) and I ran into some difficulties with the interpretation of the quality assessment information. I already posted my question on Stack Exchange ( http://gis.stackexchange.com/questions/171792/how-to-correctly-parse-modis-qa-layers ) but got no answers. Maybe somebody here has an idea? If I receive an answer here, I will update the question on Stack Exchange (and vice versa). Here's the text of the question (identical to Stack Exchange): I'd like to correctly parse the MOD13Q1 VI Quality layer (see Table 2 in "layers" here). For this, I'm trying to follow Example 1 from this tutorial (page 10). I'll illustrate my problem with a pixel value of "VI Quality" from my real data set: 35038. First, I transform it to the binary value 1000100011011110. Then, I separate this binary value into different bit words (from right to left) according to Table 2: 1 0 001 0 0 0 11 0111 10. Following Example 1 ("Please bear in mind that the binary bit-string is parsed from right to left, and the individual bits within a bit-word are read from left to right."), I would then assume that I'd get the following bit words for the different categories: MODLAND_QA: 10, VI usefulness: 0111, Aerosol quantity: 11, and so on. However, the problem is that the value 0111 doesn't exist in Table 2. I guess there are two possible explanations: 1. I made a mistake trying to apply the tutorial. In that case I'd be very thankful if somebody could point me in the right direction. 2. I applied the tutorial correctly, but Table 2 is incorrect or incomplete. Could anybody confirm whether this might really be the case, and if so, where to find information about the missing values?
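The bit-word extraction described in the question can be done directly with shifts and masks, which avoids string manipulation and right-to-left confusion entirely. The bit positions below follow the MOD13Q1 VI Quality layout as commonly documented (bits 0-1 MODLAND_QA, bits 2-5 VI usefulness, bits 6-7 aerosol quantity, etc.); verify them against Table 2 of your copy of the product guide before relying on them.

```python
# Sketch: unpack a MOD13Q1 "VI Quality" 16-bit value into its bit fields.
# Bit positions assumed from the commonly documented layout -- verify
# against Table 2 of the MOD13 user guide.
def parse_vi_quality(value):
    def field(start, width):
        # Shift the field down to bit 0, then mask off `width` bits.
        return (value >> start) & ((1 << width) - 1)
    return {
        "MODLAND_QA":       field(0, 2),
        "VI_usefulness":    field(2, 4),
        "Aerosol_quantity": field(6, 2),
        "Adjacent_cloud":   field(8, 1),
        "Atmosphere_BRDF":  field(9, 1),
        "Mixed_clouds":     field(10, 1),
        "Land_water_mask":  field(11, 3),
        "Snow_ice":         field(14, 1),
        "Shadow":           field(15, 1),
    }

q = parse_vi_quality(35038)  # the pixel value from the question
print(q["MODLAND_QA"])       # 2  (binary 10)
print(q["VI_usefulness"])    # 7  (binary 0111)
print(q["Aerosol_quantity"]) # 3  (binary 11)
```

This reproduces the poster's manual parse (MODLAND_QA = 10, usefulness = 0111, aerosol = 11), which suggests the parsing itself is correct and the question is really about whether the table enumerates all usefulness values.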
  10. Hello everyone, on behalf of the iEMSs Workshop A3 organisation team, I would like to bring to your attention the workshop "Long-Term Analysis of Socio-Ecosystems Under Climate Change: New Avenues for Integrated Modelling of Adaptation Processes". This workshop will take place at the 8th International Congress on Environmental Modelling and Software - iEMSs 2016 - in Toulouse, France, on July 10-14, 2016. Organised by Prof. Carlo Giupponi and Dr. Paola Mercogliano, the workshop will focus on modelling and techniques (Remote Sensing - RS and Geographical Information Systems - GIS) for monitoring and assessing the interactions between exogenous drivers (climate change in particular) and the evolution of socio-ecosystems. A specific interest is land cover/use change as a manifestation of the effects of climatic changes and their inter-linkages with ecosystems and human activities. The main aim of the workshop is to explore methodological avenues and technological solutions that go beyond the state of the art in analysing the vulnerability of socio-ecosystems to global change over the long term. The workshop aims to cover topics ranging from climatic modelling to natural disaster mapping, remote sensing for land cover and land use change analysis, and economic modelling of impacts. It will contribute to developing new ideas about the opportunities for innovative approaches and how they could be integrated into studies such as climate change adaptation analysis and strategy development. More information can be found at: http://www.iemss.org/sites/iemss2016/index.php We are looking forward to seeing you in France! -- Arthur Hrast Essenfelder, PhD Candidate in Science and Management of Climate Change, Università Ca' Foscari Venezia, Department of Economics, Fondamenta San Giobbe, Cannaregio 873, 30123 Venezia, e-mail: [email protected]
  11. Hello friends, I think geospatial software (commercial and free), use cases, and problem-solving discussions for people who use a Mac as their operating system would be worth having. I will list only a few packages (the ones I use or am interested in) to show that there may be a potential pool of users interested in what gisarea has to offer: tutorials, software, and solutions based on users' knowledge and experience. Here is the GIS/remote sensing software available for Mac: QGIS, SAGA, SpatiaLite, GRASS GIS, OTB/Monteverdi2, ENVI/IDL, TNTmips, PostgreSQL/PostGIS, Cartographica. Installation and troubleshooting on a Mac are usually quite different from Windows, so a forum topic dedicated to all things Mac would help a lot of users. Also, for people who run ArcGIS virtualized on a Mac (as I do), it would be great to have a place to go. Please feel free to share your thoughts! Cheers, GSQ
  12. Hi, guys. Guess what? Deadpool is here to be a super-remote-sensed hero. Okay, now, seriously. My question may make you want to stab me, but I am being serious. I am a remote sensing student, now in the last year of my studies. I wonder: are there any remote sensing jobs for fresh graduates? Everywhere I look I find almost nothing but GIS jobs. I like non-photographic remote sensing very much; it is my passion. Anyway, is there any chance in your countries of giving a strange foreign fresh graduate a job? I am just wondering, because we are poorly paid in my country. Thank you. I will give you one of Colossus' smiles after you answer my questions.
  13. Hi, my name is Djo, and nowadays I am interested in continuing with a Master's degree, but I am a little bit confused about which college to choose. I am interested in learning about: 1. Building a GIS startup 2. Big Data analytics 3. Mobile and web-based GIS 4. Location analytics and spatial analytics 5. Business intelligence 6. GIS as a decision support system and for modelling 7. GIS applications for urban planning, property, retail, and marketing. I have searched with Google and found similar courses at colleges like these: http://www.fh-kaernten.at/en/degree-programs/engineering-it/overview/engineering-it/master/spatial-information-management/degree-programm-spatial-information-management/ https://www.kth.se/student/kurser/program/TGEGM/HT11/arskurs1?l=en (I do not mean to promote them.) I need a second opinion. Does anyone want to share information related to the kind of Master's degree I am looking for? I would be glad to hear from you.
  14. Would a 30x30 metre spatial resolution be classed as low resolution? When zooming in, it is hard to make out land covers easily, compared to a high-resolution image where smaller features such as roads, small streams, and urban buildings can be seen clearly.
  15. Hi, I am trying to build a simple Python-based image processing application. However, I found that the library that matches my needs best, RSGISLib, is only compatible with Unix-like environments, and I don't know how to use it like a normal library on Windows. Is there a substitute library for RSGISLib, or is there a way to make it work in a Windows environment? RSGISLib: http://www.rsgislib.org/ Sincerely,
  16. [Translated from Indonesian] Excuse me, nice to meet you. I am Dikky Setiawan, and I am working on my final-year project on remote sensing. I have problems producing land cover maps using ArcGIS 10 and ERDAS 8.5. 1. I am making land cover change maps for 1990, 2000, and 2015. My question: after I calculate the areas (in ha), why is the total area different each year for the same sub-district? I clipped correctly before calculating. For example, the sub-district is 100 ha, but after I make the land cover map the total is not 100 ha; sometimes it is 90 ha or 80 ha. Do the pixels of the Landsat image affect the total area? 2. Why does everyone get different results when making a land cover map? 3. What is the correct layout for a land cover map, and which elements should be shown? 4. Is there any high-resolution satellite imagery, other than Google Earth, that can be downloaded for free? 5. Does anyone have a reference key that can be used as a guide for sampling land cover types on Landsat TM imagery? For the classes: 1. fish ponds 2. settlements 3. swamp 4. shrubs 5. bare land. For example, settlements usually appear pink in the image pixels [attach image]. 6. I have one more problem: I did a ground check in the field. Settlement pixels are usually pink, but once, when I looked at an area that I could not reach, I saw pink pixels in the imagery, yet when I checked Google Earth there were no settlements there, only bare land with agricultural crops. So I ask for help from anyone who has a reference key I can use for coastal areas, just to make things easier for me. Thanks, greetings from GIS Indonesia, Dikky Setiawan ([email protected])
  17. I have a problem when calculating NDVI from a rainy-season Landsat scene. I have assumed that pixel values above 15000 are clouds, calculated NDVI, and then assigned the cloud pixels a value of 1 in the NDVI image. The main question is how to calculate the zonal minimum and maximum NDVI, the percentage of cloud, and the percentage of NDVI greater than 0.5 in a particular zone (the zones are the village boundaries of the district). Please help me solve this problem in ArcGIS.
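In ArcGIS this is a job for the Zonal Statistics tools, but the logic can be sketched in a few lines of numpy. The arrays below are made-up stand-ins for the NDVI raster and a rasterized village-boundary layer, and the cloud flag value of 1.0 follows the convention described in the question:

```python
import numpy as np

# Hypothetical NDVI and zone rasters of the same shape; in practice these
# come from your NDVI result and a rasterized village-boundary layer.
ndvi = np.array([
    [0.62, 0.10, 1.00, 0.55],   # 1.00 marks the pixels flagged as cloud
    [0.30, 0.71, 0.20, 1.00],
])
zones = np.array([
    [1, 1, 2, 2],
    [1, 1, 2, 2],
])
CLOUD_VALUE = 1.0  # the value assigned to cloud pixels

results = {}
for zone_id in np.unique(zones):
    in_zone = zones == zone_id
    cloud = in_zone & (ndvi == CLOUD_VALUE)
    clear = in_zone & ~cloud
    results[int(zone_id)] = {
        "min": float(ndvi[clear].min()),            # zonal min over clear pixels
        "max": float(ndvi[clear].max()),            # zonal max over clear pixels
        "pct_cloud": 100.0 * cloud.sum() / in_zone.sum(),
        "pct_gt_0_5": 100.0 * (ndvi[clear] > 0.5).sum() / clear.sum(),
    }
print(results)
```

Note that statistics are computed only over clear pixels, so the cloud flag value does not distort the min/max.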
  18. We have all seen, downloaded, and used Landsat satellite imagery. Landsat, which began its journey on July 23, 1972, is the longest-running enterprise for collecting satellite images of the Earth. Researchers around the globe know Landsat for its vast pool of information, zipped in an archive and free to download from the internet at any time. In this post, let me go through the contents of a Level 1 image archive. When we download an image archive from the USGS or USGS GloVis website from the Level 1 section, we download a small section of the imagery stretching 170 km north-south by 183 km east-west. The Landsat images we download are not the direct product from the satellites; they go through several revisions, rectifications, and reorganizations. In an archive you will meet these files. Naming convention: the product name varies for each Landsat product set. For ETM+ scenes the names look like this: [Landsat-7 mission][ETM+ data format][path][starting row]_[ending row][year][month][day]_B[band number].TIF.gz Example: L71129032_03220060815_B10.TIF.gz, where L7 refers to the Landsat-7 mission, followed by "1", which refers to the ETM+ data format. 129 is the path of the scene, followed by 032_032, which are the starting and ending rows of the scene. The acquisition date is 2006-08-15, represented as 20060815. The band number for this file is B10 (band 1). Other bands in the archive may be referred to as B20 (band 2), B30 (band 3), and so on. B61 means band 6 low gain, and B62 is band 6 high gain. For TM products the name begins with TM; for OLI it is L8. The metadata for ETM+ scenes ends with the '_MTL.TIF' or '_MTL.txt' extension. The metadata file: opening the _MTL.txt file with a text editor will show you all the information you need to start processing the image.
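The ETM+ naming convention above is regular enough to parse mechanically. A sketch using the standard library (the field names are my own labels for the pieces described in the convention):

```python
import re

# Pattern pieces: mission, data format, path, starting row, ending row,
# acquisition date (YYYYMMDD), and band designator.
PATTERN = re.compile(
    r"^(?P<mission>L[457])(?P<fmt>\d)"
    r"(?P<path>\d{3})(?P<start_row>\d{3})"
    r"_(?P<end_row>\d{3})(?P<date>\d{8})"
    r"_B(?P<band>\d{2})\.TIF(\.gz)?$"
)

def parse_etm_name(name):
    """Split an ETM+-style product name into its labelled fields."""
    m = PATTERN.match(name)
    if m is None:
        raise ValueError(f"not an ETM+-style product name: {name}")
    fields = m.groupdict()
    # Re-punctuate the YYYYMMDD date as YYYY-MM-DD.
    fields["date"] = f"{fields['date'][:4]}-{fields['date'][4:6]}-{fields['date'][6:]}"
    return fields

info = parse_etm_name("L71129032_03220060815_B10.TIF.gz")
print(info)
```

Running this on the example name from the post yields mission L7, path 129, rows 032 to 032, date 2006-08-15, and band 10.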
Within the group PRODUCT_METADATA you will find several declarations: the group of product origin, creation, and initialization information, and the group of image metadata. Within this group we find:
- PRODUCT_TYPE = "L1T", the Level 1 terrain-corrected product
- ELEVATION_SOURCE = "GLS2000", the source of the elevation data
- PROCESSING_SOFTWARE = "LPGS_11.1.0", the software used to process the scene
- EPHEMERIS_TYPE = "DEFINITIVE", meaning definitive ephemeris was used for geometrically correcting the Landsat data; it provides improved accuracy over predicted ephemeris
- SPACECRAFT_ID = "Landsat5", of course
- SENSOR_ID = "TM", the Landsat Thematic Mapper sensor
- SENSOR_MODE = "BUMPER". In early 2002, the TM instrument onboard Landsat 5 lost synchronization between the scan mirror and the calibration shutter, resulting in "caterpillar tracks" on imagery. To fix this problem, the USGS switched the TM instrument from scan angle monitor (SAM) mode to the backup "bumper" mirror mode in order to extend the useful life of the instrument.
- ACQUISITION_DATE = 2010-01-30, the date the image was taken
- SCENE_CENTER_SCAN_TIME = 04:15:40.9110630Z, the time associated with the center scan of the scene
- WRS_PATH = 137, the path in the World Reference System
- STARTING_ROW = 44 and ENDING_ROW = 44, the rows where the scene starts and ends
- BAND_COMBINATION = "1234567", the combination of bands by number
Then come the upper-left, upper-right, lower-left, and lower-right latitudes and longitudes for georeferencing; the designation of the other file names in the same archive; the group of minimum and maximum radiance; the group of minimum and maximum pixel values in the range 0-255; the group of product parameters with gain and bias information for each band, sun azimuth, sun elevation, and output format; the group of correction information for different technical and physical errors; and the group of projection parameters. This is the metadata format of a 2010 image, but it was not always like this.
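The _MTL.txt file is a plain key = value listing wrapped in GROUP / END_GROUP blocks, so it is easy to read without any GIS library. A minimal sketch that flattens it into a dict (the sample text is an abridged, hypothetical MTL fragment based on the fields listed above):

```python
# A minimal sketch of reading an _MTL.txt file into a flat dict of
# parameter -> value, ignoring the GROUP / END_GROUP structure.
def parse_mtl(text):
    params = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip structural lines and anything that is not a key = value pair.
        if "=" not in line or line.startswith(("GROUP", "END_GROUP")):
            continue
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip().strip('"')
    return params

# Abridged, hypothetical MTL fragment for illustration.
sample = '''
GROUP = PRODUCT_METADATA
  PRODUCT_TYPE = "L1T"
  SPACECRAFT_ID = "Landsat5"
  WRS_PATH = 137
  ACQUISITION_DATE = 2010-01-30
END_GROUP = PRODUCT_METADATA
'''

mtl = parse_mtl(sample)
print(mtl["PRODUCT_TYPE"], mtl["SPACECRAFT_ID"], mtl["WRS_PATH"])
```

All values come back as strings; numeric fields such as WRS_PATH would need an explicit conversion.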
Here are the changes you will see in the _MTL file from time to time. Landsat processing standards in brief: there are two types of processing systems that shape the standard products. The first is the Level 1 Product Generation System (LPGS). Currently, all Landsat data is processed through LPGS. LPGS metadata is contained in an MTL.txt file, and the products have these parameters:
- GeoTIFF output format
- Cubic convolution (CC) resampling method
- 30-meter (TM, ETM+) and 60-meter (MSS) pixel size (reflective bands)
- Universal Transverse Mercator (UTM) map projection (Polar Stereographic projection for scenes with a center latitude less than or equal to -63.0 degrees)
- World Geodetic System (WGS) 84 datum
- MAP (north-up) image orientation
There are three types of Level 1 processing:
- Standard Terrain Correction (Level 1T) provides systematic radiometric and geometric accuracy by incorporating ground control points while employing a Digital Elevation Model (DEM) for topographic accuracy.
- Systematic Terrain Correction (Level 1Gt) provides systematic radiometric and geometric accuracy while employing a DEM for topographic accuracy.
- Systematic Correction (Level 1G) provides systematic radiometric and geometric accuracy derived from data collected by the sensor and spacecraft.
The other processing standard is the National Land Archive Production System (NLAPS). Some of the available Landsat 4 and Landsat 5 data were processed through NLAPS. NLAPS metadata is contained in a .WO file, which accompanies all data files. While the products generated by the LPGS and NLAPS systems are similar, there are geometric, radiometric, and data format differences. Geometric differences: both systems now align the bands to the center of each pixel, but before December 2008, LPGS aligned bands to the center of each pixel while NLAPS aligned them to the edge of each pixel. Radiometric differences: both systems scale Level 1 products to a range of 1-254.
DN values of 0 are reserved for scan gap and fill flags, and DN values of 255 are reserved for saturation. Data format differences: both systems use the GeoTIFF format, but the 60-meter and 15-meter NLAPS and LPGS products may have different image sizes. A very small number of Landsat TM scenes are processed using NLAPS. Other files: some Landsat TM scenes include a Work Order (.WO) file that contains metadata about scenes processed on the National Land Archive Production System (NLAPS), common for Landsat 5 archives. A file containing the ground control points (GCPs) used during image processing is also included with Landsat MSS and TM data for controlled georeferencing. Landsat 8 scenes include a Quality Assessment (QA) band file. Used effectively, the bits of the QA band improve the integrity of science investigations by indicating which pixels might be affected by instrument artifacts or subject to cloud contamination. Landsat MSS and TM scenes also include a Verify Image file (VER.jpg), which displays a grid of verification points in various colors that represent the accuracy of the geometric correction; cross-correlation techniques based on the GLS 2000 dataset are used as the reference. This graphic representation of the Geometric Verify Report (VER.txt) helps users determine the geometric accuracy of each MSS and TM scene. Landsat ETM+ SLC-off scenes also include gap mask files for each band. These ancillary data let users identify the location of all pixels affected by the original data gaps in the primary SLC-off scene; the gap mask is a set of flat binary scan-gap mask files, one per band. The README file contains a summary and brief description of the file contents and naming convention, further clarification of the files and metadata in the archive, general documentation, and contact information.
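Because the QA band packs its flags into individual bits of a 16-bit integer, using it means testing bits rather than comparing whole pixel values. A sketch of the mechanics with numpy; note that the bit position used here is a placeholder for illustration, not the documented position of any real QA flag, so check the QA band documentation for your product before using it:

```python
import numpy as np

# Hypothetical bit position, for illustration only; look up the real
# flag positions in the QA band documentation for your product.
CLOUD_BIT = 4

def bit_is_set(qa, bit):
    """True where the given bit is set in the 16-bit QA value."""
    return (qa >> bit) & 1 == 1

# Toy 2x2 QA band; binary literals make the set bits visible.
qa_band = np.array([[0b0000000000010000, 0b0000000000000000],
                    [0b0000000000010001, 0b0000000000000010]], dtype=np.uint16)

cloudy = bit_is_set(qa_band, CLOUD_BIT)
print(cloudy)
```

The resulting boolean mask can be used directly to exclude flagged pixels from an analysis.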
Also published here, https://clubgis.net/unboxing-landsat-l1-image-archive/
  19. Hi guys! I'm working on a land cover change detection project for Kyiv province in Ukraine using Landsat data and ancillary information. I want to find the change in land cover from 1990 to 2014 for Kyiv province using a decision tree approach in ENVI. I have already done radiometric correction and atmospheric correction, and applied a cloud mask to the whole Landsat scenes, and now I have a question and a request. Do I need to mosaic before classification, or mosaic the classified images? Does anybody have decision rules for splitting pixels into the appropriate land cover classes using Landsat data, a DEM, and vegetation indices? Thanks
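For the second question, decision tree rules usually reduce to nested threshold tests on the input layers. A minimal sketch of the idea; every threshold and class name below is hypothetical and would need to be calibrated against training data for the actual scenes:

```python
import numpy as np

# Hypothetical per-pixel inputs (flattened for simplicity).
ndvi = np.array([0.05, 0.25, 0.70, 0.45])
elevation = np.array([90.0, 120.0, 300.0, 110.0])  # DEM, meters

def classify(ndvi, elevation):
    """Toy decision-tree rules: thresholds are placeholders, not
    calibrated values."""
    classes = np.full(ndvi.shape, "other", dtype=object)
    classes[ndvi < 0.1] = "water_or_bare"
    classes[(ndvi >= 0.1) & (ndvi < 0.3)] = "grass_or_crop"
    classes[(ndvi >= 0.3) & (elevation > 200)] = "upland_forest"
    classes[(ndvi >= 0.3) & (elevation <= 200)] = "lowland_vegetation"
    return classes

labels = classify(ndvi, elevation)
print(labels)
```

The same structure extends to more layers (band ratios, slope, texture) by adding further boolean conditions per class.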
  20. dianaz

    NASA task

    A short pre-proposal to NASA is needed. The topic is how to create a new land cover map of Southeast Florida for 2014 using imagery available from orbital satellites. NASA is most interested in obtaining accurate land cover classes that depict water surfaces, woody vegetation cover, urban areas, and grass-dominated areas. What classification algorithms would you use and why? How would you incorporate training and validation data, and what standards would you use to decide whether the land cover products are acceptable? What might the limitations be of the satellite-based approach you are proposing?
  21. Hello, I need to develop an ontology for the remote sensing domain, for object recognition in high-resolution satellite imagery. Has anybody worked on developing ontologies?
  22. I would be grateful if anyone can advise whether tasseled cap transformation coefficients for SPOT 6 imagery have been derived or not. If not, are there any alternative methods for studying the "wetness" of a vegetated study area? Many thanks for your help.
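Whatever the answer for SPOT 6, the mechanics of a tasseled cap index are simply a per-band linear combination of reflectances. A sketch; the coefficients below are placeholders, not SPOT 6 values, since published coefficients are sensor-specific (e.g. for Landsat TM) and must be looked up:

```python
import numpy as np

# Placeholder coefficients, one per band; NOT real tasseled cap values.
COEFFS = np.array([0.1, 0.2, 0.3, -0.5])

def wetness(bands, coeffs=COEFFS):
    """Tasseled cap style index: per-pixel dot product of the band stack
    with the coefficient vector. bands has shape (n_bands, rows, cols)."""
    return np.tensordot(coeffs, bands, axes=1)

# Flat test image: every band at 25 % reflectance.
bands = np.ones((4, 2, 2)) * 0.25
print(wetness(bands))  # 0.25 * (0.1 + 0.2 + 0.3 - 0.5) = 0.025 everywhere
```

If no SPOT 6 coefficients exist, one alternative often used for wetness is a normalized difference of a near-infrared and a shortwave-infrared band, though SPOT 6 lacks a SWIR band, which is worth checking before committing to the approach.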
  23. More great news for the remote sensing community: SPOT 7 was launched from India and is now in orbit! http://www.spacenews.com/article/launch-report/41070india%E2%80%99s-pslv-rocket-lofts-airbus-spot-7-satellite This will be a great source of satellite images (but not for free).
  24. My greetings to everyone; I need help from the members of our family. My problem is this: how can I extract high-albedo and low-albedo surfaces from Landsat data? I need these surfaces to map impervious surface area for different periods. I am looking forward to a step-by-step reply.
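One common pattern for this task is to estimate a broadband albedo as a weighted sum of band reflectances and then threshold it into high- and low-albedo masks. A sketch; the weights and thresholds below are placeholders, so a published narrow-to-broadband conversion for Landsat (such as Liang's formula) should be substituted after checking the source:

```python
import numpy as np

WEIGHTS = np.array([0.3, 0.3, 0.2, 0.2])   # hypothetical band weights
HIGH_T, LOW_T = 0.30, 0.12                 # hypothetical albedo cut-offs

def albedo_masks(bands, weights=WEIGHTS):
    """bands: reflectance array of shape (n_bands, rows, cols).
    Returns the broadband albedo plus high/low boolean masks."""
    albedo = np.tensordot(weights, bands, axes=1)
    return albedo, albedo >= HIGH_T, albedo <= LOW_T

# Toy 2x2 scene with flat reflectances per band.
bands = np.stack([np.full((2, 2), r) for r in (0.40, 0.35, 0.30, 0.25)])
albedo, high, low = albedo_masks(bands)
print(float(albedo[0, 0]), bool(high[0, 0]), bool(low[0, 0]))
```

The two masks from different dates can then be differenced to track impervious surface change over time.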