Leaderboard
Popular Content
Showing content with the highest reputation on 05/06/2014 in all areas
-
This is a very interesting piece by Ned Horning, Center for Biodiversity and Conservation at the American Museum of Natural History. Take a look:

There is no doubt that satellite remote sensing has greatly improved our ability to conduct land cover mapping. There are limitations, however, and these are often understated or overlooked in the professional literature. The following statements are based on some common misconceptions related to using remote sensing for land cover mapping. The list is based on the author's first-hand experience, remote sensing literature, and presentations on remote sensing over the last 20 years. This list is meant primarily to alert new and inexperienced users of satellite image products to some of their limitations.

Myth and Misconception #1: Satellites measure the reflectance of features on the ground.

A satellite sensor measures radiance at the sensor itself, not surface reflectance from the target. In other words, the sensor is only measuring the intensity of light when it hits the detector surface. The units for this measurement are typically watts per square meter per steradian (W m-2 sr-1). Surface reflectance is the ratio of the intensity of light reflected from a surface to the intensity of incident light. To measure reflectance we need to know the intensity of the light just before it hits the surface target and just as it is reflected from the target. Unfortunately, the orientation of the surface feature (usually due to slope and aspect) and atmospheric scattering and absorption complicate our ability to accurately measure surface reflectance (Figure 1). More information about these effects is presented in Myth and Misconception #2. It is important to understand this common misconception if one is to grasp one of the fundamental limitations of remotely sensed data.
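The distinction matters in practice: from the image alone you can compute at-sensor radiance and, with the sun geometry, top-of-atmosphere reflectance, which is still not surface reflectance. A minimal sketch of that standard conversion follows; the gain, offset, and ESUN numbers are placeholders for illustration (real values come from the image metadata and the sensor handbook):

```python
import math

def dn_to_toa_reflectance(dn, gain, offset, esun, sun_elev_deg, d_au=1.0):
    """Convert a raw digital number (DN) to top-of-atmosphere reflectance.

    L   = gain * DN + offset                 (at-sensor radiance)
    rho = pi * L * d^2 / (ESUN * cos(solar zenith))

    This yields TOA reflectance only; surface reflectance would
    additionally require atmospheric and terrain correction.
    """
    radiance = gain * dn + offset
    solar_zenith = math.radians(90.0 - sun_elev_deg)
    return math.pi * radiance * d_au**2 / (esun * math.cos(solar_zenith))

# Hypothetical calibration values, for illustration only.
rho = dn_to_toa_reflectance(dn=120, gain=0.7757, offset=-6.2,
                            esun=1997.0, sun_elev_deg=45.0)
```

Note that everything here is geometry and calibration; nothing in this conversion accounts for the atmosphere or for the slope and aspect of the target, which is exactly the limitation the myth describes.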
If satellite instruments could indeed measure surface reflectance, our job would be a lot easier, and our mapped land cover products would be much more accurate.

Myth and Misconception #2: Land cover classification is a simple process that involves grouping an image's pixels based on the reflectance properties of the land cover feature being classified.

Land cover mapping, and in fact remote sensing interpretation in general, is based on the assumption that features on the Earth have unique spectral signatures. The job of an image analyst is to use specialized software to group image pixels into appropriate land cover categories. These categories are based on the pixel values, which we often assume to be directly related to the feature's reflectance. For example, all of the pixels that have values similar to pixels that we know are water would be classified as water, all pixels that have values similar to pixels that we know are forest would be classified as forest, and so on, until the entire image is classified into one land cover type or another. This sounds pretty straightforward, but in practice it can be very difficult because we cannot easily determine the actual reflectance of a surface feature on the ground from a satellite image. One way to understand this problem is to contrast satellite remote sensing with reflectance measurements made in a laboratory. Important factors in determining reflectance are: the intensity of the incoming radiation (i.e., the intensity of light energy as it hits the target), the intensity of the reflected radiation (i.e., the intensity of light energy just after it leaves the target), and the orientation of the light source and detector relative to the target. In a laboratory setting it is relatively easy to determine the reflectance properties of a material because one can easily measure the intensity of the light energy when it hits an object (Figure 2).
The light path is controlled, so not much energy is lost between the light source and the target or between the target and the detector. Also, the illumination and target orientation are known with high accuracy and precision. In the world of satellite remote sensing, the situation is very different. We know the intensity of the light before it enters the Earth's atmosphere, but as it passes through the atmosphere it interacts with particulates (i.e., water vapor, dust, smoke) that significantly alter the signal both before and after it interacts with the target. A good deal of progress has been made in removing these atmospheric effects from an image, but we are still unable to easily and consistently remove these effects. As far as illumination and detector orientation are concerned, we can easily calculate the position of the sun and the satellite when the image was acquired. It is much more difficult, however, to know the orientation of the target (its slope and aspect). We can use digital elevation models to estimate these parameters, but this usually provides only a rough estimate of the target orientation. Consider, for example, an image with the same vegetation found in two locations, one in the shadow of a mountain and the other oriented so that the maximum amount of energy is reflected to the sensor (Figure 3). Under these circumstances, it can be very difficult to process the image in such a way that the two locations would have the same reflectance values. The bottom line is that similar land cover features can appear very different on a satellite image, and there can be a good deal of confusion between land cover classes that do not have drastically different reflectance signatures, such as different types of forests. This concept is discussed further in Myth and Misconception #5 below.
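To make the grouping idea in Myth #2 concrete, here is a minimal nearest-centroid sketch with made-up band values; real classifiers (maximum likelihood, decision trees, and so on) are more sophisticated, but the principle of assigning each pixel to the most similar known signature is the same:

```python
import numpy as np

# Toy "image": each row is one pixel's values in three bands.
pixels = np.array([
    [0.05, 0.04, 0.03],   # dark, water-like
    [0.06, 0.05, 0.04],
    [0.10, 0.25, 0.08],   # vegetation-like (strong band-2 response)
    [0.11, 0.28, 0.09],
])

# Hypothetical mean "signatures" from training pixels of known cover.
signatures = {"water":  np.array([0.05, 0.04, 0.03]),
              "forest": np.array([0.10, 0.26, 0.08])}

def classify(pixels, signatures):
    """Assign each pixel to the class whose mean signature is nearest
    (minimum Euclidean distance): the simplest supervised grouping."""
    names = list(signatures)
    means = np.stack([signatures[n] for n in names])        # (classes, bands)
    d = np.linalg.norm(pixels[:, None, :] - means[None], axis=2)
    return [names[i] for i in d.argmin(axis=1)]

labels = classify(pixels, signatures)
# labels: ['water', 'water', 'forest', 'forest']
```

The sketch also shows the fragility the myth warns about: if shadow or atmosphere shifts the pixel values, the distances shift with them and pixels land in the wrong class.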
Myth and Misconception #3: Spectral resolution is the number of bands (image layers) available from a particular sensor.

Spectral resolution refers to the wavelength range that is measured by a particular image channel (Figure 4). For example, channel 1 of the ETM+ sensor on board the Landsat 7 satellite detects wavelengths from 0.45 µm to 0.52 µm. The pan band on the ETM+ sensor detects wavelengths from 0.50 µm to 0.90 µm. Therefore, one could say that on the ETM+ sensor channel 1 has a finer spectral resolution than the pan band. Although sensors with many image bands (called hyperspectral sensors) usually have a fine spectral resolution, the number of channels on a sensor is not necessarily linked to spectral resolution. To see how band widths for different sensors compare, go to the spectral curve interactive tool.

Myth and Misconception #4: Measuring the differences between two land cover maps created from imagery acquired at different times will provide the most reliable estimate of land cover change.

A common way to calculate land cover change is to compare the differences between two land cover maps that have been created with remotely sensed images from different dates. This is often called post-classification change detection. Although this method seems logical and is commonly used, it is rarely the most appropriate method for determining land cover change over time. The problem is that there are errors associated with each of the two land cover maps, and when these are overlaid, the errors are cumulative. As a result, the error of the land cover change map is significantly worse than that of either of the input maps. This concept is presented in the land cover change methods guide, which includes examples of more accurate ways to determine changes in land cover over time. One way to illustrate these errors is to have an image analyst classify an image and then, a few days later, have that same analyst classify the same image again (Figure 5).
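The compounding of errors can be put in simple numbers. The 90% figures below are hypothetical and the independence assumption is a simplification (map errors are often correlated), but the effect it illustrates is the point of Myth #4:

```python
# If each land cover map is 90% accurate and the errors in the two maps
# are independent, a pixel in the overlaid change map is only reliable
# when it is correctly labeled in BOTH maps:
acc_map1 = 0.90
acc_map2 = 0.90
best_case_change_accuracy = acc_map1 * acc_map2   # about 0.81

# So even in this best case, the change map is noticeably worse than
# either input map, and real correlated errors can make it worse still.
```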
Even a skilled analyst will produce different results. If you overlay these two results, you will see perceived areas of change even though the same image was used for both dates. In other words, overlaying the two images produces a map illustrating the errors associated with the post-classification approach. Using images from the same area that were acquired on different dates or with a different sensor would likely increase the differences in the land cover classification even more.

Myth and Misconception #5: Landsat TM imagery is suitable to consistently classify natural vegetation genus and species information with a high level of accuracy.

This is a frequently debated topic in the remote sensing community, but the fact is that the more detailed you want your thematic classification to be, the less accurate your results will be for individual classes. In other words, if you create a land cover map with just forest and non-forest classes, the accuracy of each of those classes will be higher than if you try to break the forest and non-forest classes into several other classes. This is an important concept to grasp when you are developing a classification system for a land cover map. Ross Nelson, from the NASA Goddard Space Flight Center, has developed a rule of thumb based on careful examination of the accuracy of several studies published in the remote sensing literature. Of course exceptions can be found, but these are very useful guidelines. The underlying concept is that the more precise the class definitions are, the lower the accuracy will be for the individual classes. Classification accuracy: forest/non-forest, water/no water, soil/vegetated: accuracies in the high 90% range; conifer/hardwood: 80-90%; genus: 60-70%; species: 40-60%. Note: if a Digital Elevation Model (DEM) is included in the classification, add 10%.

Myth and Misconception #6: A satellite with a 30 meter resolution will allow you to identify 30 m features.
In remote sensing, spatial resolution is often defined as the size of an image pixel as it is projected on the ground. For example, Landsat ETM+ imagery is often processed to have a 30 meter pixel size. This means that a Landsat ETM+ pixel represents a square area of 30 m x 30 m on the ground. Spatial resolution is only one factor that allows us to identify features on an image. Other factors must be considered in order to understand which features can be identified on imagery of a particular resolution: the contrast between adjacent features, the heterogeneity of the landscape, and the level of information sought. The contrast between a feature we would like to detect and the background cover type greatly influences the size of a feature we can detect. For example, if a 2-meter-wide trail is cut through a dense forest, there is a good chance that we would be able to detect it with a 30-meter pixel because of the stark contrast between the bright soil and the dark forest (Figure 6). On the other hand, if there were a 1-hectare patch covered with small trees and shrubs, it might look too similar to the background forest for us to differentiate it from the more mature forest. The heterogeneity of the landscape can also influence how well we can detect a feature. If the landscape is very heterogeneous due to terrain or land cover variations, it is more difficult to detect a feature than if the background is very homogeneous (Figure 7). Another issue related to the ability to resolve features on a landscape is the level of information that we can interpret in a given situation. There are different levels of information that we can gather about features on an image: detection, identification, and contextual information. At the most basic level, we are able to detect that a feature is present that is different from its surrounding environment. The next level is to be able to identify that object.
The final level is to be able to derive contextual information about a feature. These different levels of information can be illustrated with an example. If we see a few bright pixels in an image that is largely forested, we can say that we have detected something in the forest that doesn't appear to be the same as the surrounding forest. If those bright pixels are connected in a line, we might come to the conclusion that this is a road. If we see that the bright line through the forest runs from one town to another, we could conclude that this is a main road between these two towns. As the level of information we want to extract from an image increases, so does the number of pixels necessary to reach that level of information. In our example, we could have detected that something was different from the forest by seeing just a few pixels (fewer than 10). To conclude that it was a road, we would need more pixels (perhaps 20-40). To understand the context of the road, we would need hundreds of pixels. In general, therefore, the more information one wants to glean from an image, the greater the number of pixels necessary to reach that level. The degree of the increase varies depending on the feature being monitored and the type of imagery used.

Myth and Misconception #7: Using ortho-rectified images is always a better choice than using non-ortho-rectified images.

There are three common geometric corrections that are applied to satellite images in an attempt to correct for geometric errors inherent in remotely sensed imagery. The simplest are systematic corrections, in which distortions caused by the sensor and by the movement of the satellite are removed. An intermediate process is often called geo-referencing.
This involves taking a systematically corrected image and then improving the geometric quality using ground control points (known locations that can be matched between the image and the ground) to increase the accuracy of the image's coordinate system. The most advanced process is called ortho-rectification, which corrects for distortions caused by terrain relief in addition to the systematic corrections. For an image with level terrain, a geo-corrected image is effectively the same as an ortho-corrected image. An ortho-rectified image is effectively an image map and can be used as a map base. In order to create an ortho-rectified image, it is necessary to use a digital elevation model (DEM) so the elevation of each pixel can be determined. For many regions of the world, DEMs with sufficient detail to ortho-rectify moderate- to high-resolution satellite imagery are not available to the general public. As a mapping product, an ortho-rectified image is superior to an image that has not been ortho-rectified because it has the best absolute accuracy. A problem can occur, however, when one wants to compare an ortho-rectified image to a non-ortho-rectified image. When comparing images, as one would do when determining changes in land cover over time, the relative positional accuracy between images (how well the images are aligned) is more important than the absolute accuracy (how well each image matches a map base). Trying to match a geo-corrected or systematically corrected image to an ortho-corrected one can be a very difficult and often impractical task because of the complex distortions in an ortho-corrected image. When trying to achieve high relative accuracy between two or more satellite images, it is often easiest to geo-reference systematically corrected images using one of the images as the reference.
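That geo-referencing step essentially boils down to fitting a coordinate transform from matched control points. A minimal sketch with a first-order (affine) fit follows; the GCP coordinates are invented, and real software typically uses many more points and often higher-order or rational polynomial models:

```python
import numpy as np

# Hypothetical ground control points: (col, row) in the image to be
# adjusted and the matching (col, row) in the reference image.
src = np.array([(10.0, 12.0), (200.0, 15.0), (20.0, 180.0), (190.0, 170.0)])
dst = np.array([(12.5, 10.0), (202.0, 14.0), (21.0, 179.5), (192.0, 168.0)])

# Fit an affine transform dst ~= [x, y, 1] @ coeffs by least squares.
ones = np.ones((len(src), 1))
G = np.hstack([src, ones])                        # (n, 3) design matrix
coeffs, *_ = np.linalg.lstsq(G, dst, rcond=None)  # (3, 2) affine parameters

def to_reference(xy):
    """Map a source-image pixel coordinate into reference-image coordinates."""
    x, y = xy
    return np.array([x, y, 1.0]) @ coeffs

# The RMS of these residuals is the usual GCP fit-quality report.
residuals = dst - G @ coeffs
```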
In other words, if you had three systematically corrected images to superimpose, you would select one of them to be the reference and then, using standard remote sensing software, geo-reference the other two images to it. Depending on the accuracy of the ortho-corrected images, it may even be impractical to try to align two or more ortho-corrected images. If different methods were used to create the ortho-corrected images, the distortions between the two images could be quite different, making the alignment process very difficult. The best scenario is to have all of the images ortho-corrected using the same process so that they have high absolute and relative accuracy, but that is often not practical when working with limited resources. The bottom line is that ortho-corrected images can have higher absolute accuracy, but when relative accuracy is needed it may be better to use only systematically corrected images rather than mix systematically corrected images with ortho-corrected imagery. There is a tremendous archive of ortho-rectified Landsat TM images available for free from the University of Maryland's Global Land Cover Facility (GLCF), but care must be taken when using these data with other imagery.

Myth and Misconception #8: Square pixels in an image accurately represent a square area on the Earth's surface.

When we zoom in on a satellite image on the computer, we can clearly see square pixels, and it is easy to think that this is an accurate representation of what the sensor sees as the image is recorded. This isn't the case, however, because the sensor detecting energy for an individual pixel actually views a circle or ellipse on the ground. In addition, 50% or more of the information recorded for a given pixel can come from the surface area surrounding that pixel (Figure 8). This tends to produce fuzzy borders between objects rather than the crisp border you might expect.
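The fuzzy-border effect is easy to simulate: treat each recorded pixel as a center-weighted average of the surface around it, a crude one-dimensional point spread function. The brightness values below are invented for illustration:

```python
import numpy as np

# A sharp boundary between dark forest (0.05) and bright soil (0.40).
surface = np.array([0.05] * 5 + [0.40] * 5)

# Center-weighted sensor response: most energy from the pixel's own
# footprint, but some from its neighbors (weights sum to 1).
psf = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
recorded = np.convolve(surface, psf, mode="same")

# Interior pixels far from the edge keep their true value, but pixels
# near the boundary record intermediate values: the crisp edge becomes
# a gradual ramp, i.e. the fuzzy border described above.
```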
To visualize this effect, picture using the beam from a flashlight to represent what is seen by an individual sensor element (the part of a sensor that records the energy for an individual pixel). As the beam leaves the flashlight, it spreads out, so that by the time it hits a target (a wall, for instance) the end of the beam is larger than the reflector inside the flashlight. If we tried to illuminate a square picture hanging on a wall, much of the light from the flashlight would actually fall outside of the picture. If the flashlight were recording light energy (rather than emitting it), it would record energy from both the picture and the wall. It is also interesting to note that the beam is brighter toward the center. Again, if we think of the flashlight as a sensor, it would record more energy toward the center of the field of view than toward the edges. This concept is illustrated in the What a sensor sees interactive tool. The blurring effect of recording energy from adjacent pixels is amplified if the atmosphere is hazy over the feature being sensed. This blurring is due to particulates in the air that cause the photons of energy (the light) to bounce around instead of traveling in a straight line. This bouncing of photons causes the sensor to record energy from an object that is not in the line of sight of an individual sensor element. It is the same effect that causes objects to appear a bit fuzzy when you look at them on a hazy or foggy day.

Source with images: http://biodiversityinformatics.amnh.org/content.php?content_id=121
-
Just found this page about all the upcoming releases from Esri in 2014. It seems Esri has finally felt the heat of all the growing competition in the geospatial market. It is really hard to believe that a market-leading software package doesn't even support more than one core of a PC! Check these out:

What is ArcGIS Pro? ArcGIS Pro is a new application that will be released as part of ArcGIS for Desktop at version 10.3 (which is planned for the latter half of 2014). It is designed to be the premier application for visualizing, editing, and performing analysis using local content, or content from your ArcGIS Online or Portal for ArcGIS organization. Using ArcGIS Pro, you can author content in both 2D and 3D and publish it as feature, map, and analysis services, 3D Web Scenes, and Web Maps. It is a 64-bit, multi-threaded application with a modern user experience that runs on the Windows platform.

Will ArcGIS Pro replace ArcGIS for Desktop (ArcMap)? No. ArcGIS Pro can run side by side with your existing ArcMap application. At its initial release, ArcGIS Pro will not have some of the functionality in ArcMap. However, it will have capabilities not available in ArcMap, such as: project-based workflows, combined 3D/2D visualization, 64-bit support, multiple layout support, and more.

Will I be able to use ArcGIS Pro with earlier versions of ArcGIS for Desktop? Yes. Even though ArcGIS Pro will be released with ArcGIS 10.3 for Desktop, customers current on maintenance who are using earlier versions of Desktop will be able to download and use it.

Can I migrate documents from ArcGIS for Desktop into ArcGIS Pro? Yes. Map Documents (.mxd), Scenes (.sxd), and Globes (.3DD) can be imported into ArcGIS Pro. Once in ArcGIS Pro, these can be saved as Projects (.aprx). Projects are not backward compatible; however, the data used by the application can be accessed by either ArcMap or ArcGIS Pro through the geodatabase, so there can be collaboration at the data level.
Services published using ArcGIS Pro can be used and shared with ArcMap. Also, ArcMap and ArcGIS Pro will run side by side on the same machine, so it will be possible to use both applications for accessing and working with local data and online services.

Can I get ArcGIS Pro if I don't have a license for ArcGIS for Desktop? No. ArcGIS Pro is part of ArcGIS for Desktop, and only customers who are entitled (current on maintenance) to ArcGIS for Desktop will be able to receive ArcGIS Pro.

Will ArcGIS Pro have multiple license levels (i.e., Basic, Standard, and Advanced)? Yes. ArcGIS Pro will come in three versions that correspond to the ArcGIS for Desktop license levels: Basic, Standard, and Advanced. More functionality is included as we move from Basic to Standard to Advanced.

Can I use my licensed ArcGIS for Desktop extensions with ArcGIS Pro? Yes. With the initial release of ArcGIS Pro, the following extensions can be used with ArcGIS Pro as long as they are licensed for ArcGIS for Desktop: 3D Analyst, Spatial Analyst, Network Analyst, Workflow Manager, and Data Reviewer.

Can I use ArcGIS Pro on my smartphone or tablet? No. ArcGIS Pro will only run on the desktop (Windows).

What are the system requirements for ArcGIS Pro? ArcGIS Pro requires Windows 7 or Windows 8 on a 64-bit machine with a recommended 8 GB of RAM or more. More specific information on system requirements is still coming. Note: ArcGIS Pro will not be supported on Windows XP or Vista.

Will ArcGIS for Desktop be supported natively on a Mac? Currently, Esri has no plans to release ArcGIS for Desktop on Mac OS. However, a new application called Explorer for ArcGIS will be released in early Q2 2014. In its initial release, it will support iOS 7 and be available for download from Apple's App Store. In a future release we also plan on bringing this native app to the Android and Mac platforms.
The initial release of the app will allow you to discover, use, and share maps in ArcGIS Online and Portal for ArcGIS. A future release will also have authoring capabilities. In addition, developers can build custom apps for this platform using our ArcGIS Runtime SDK for OS X. For additional Mac and iOS support options, visit the Esri EdCommunity website.

Will ArcGIS for Desktop support 64-bit? At ArcGIS for Desktop 10.1 SP1, Esri released an optional 64-bit background geoprocessing module that executed tasks in 64-bit software. The 10.3 release of ArcGIS will feature ArcGIS Pro, a native 64-bit application that is included with ArcGIS for Desktop.
-
Urban Network Analysis: A Toolbox for ArcGIS 10

The City Form Lab has released a state-of-the-art toolbox for urban network analysis. As the first of its kind, this ArcGIS toolbox can be used to compute five types of graph analysis measures on spatial networks: Reach, Gravity, Betweenness, Closeness, and Straightness. Redundancy tools that come with the software additionally calculate the Redundancy Index, Redundant Paths, and the Wayfinding Index.

DOWNLOAD: version 1.01
https://bitbucket.org/cityformlab/urban-network-analysis-toolbox/downloads/Urban%20Network%20Analysis%20Toolbox%201.01.zip
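The toolbox computes these measures inside ArcGIS, but the simplest of the five, Reach, is easy to sketch in plain Python to show what is being measured: how many destinations lie within a given distance along the network. The street graph and edge lengths below are hypothetical, and the toolbox's exact definitions and weighting options may differ:

```python
import heapq

# Toy street network: nodes are intersections, weighted edges are
# street segments with (hypothetical) lengths in meters.
edges = {
    "A": [("B", 100), ("C", 250)],
    "B": [("A", 100), ("C", 120), ("D", 300)],
    "C": [("A", 250), ("B", 120), ("D", 90)],
    "D": [("B", 300), ("C", 90)],
}

def network_distances(graph, source):
    """Dijkstra shortest-path distances along the network from `source`."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

def reach(graph, source, radius):
    """Reach: number of other nodes within `radius` meters of `source`
    measured along the network, not as the crow flies."""
    dist = network_distances(graph, source)
    return sum(1 for n, d in dist.items() if n != source and d <= radius)

reach_a = reach(edges, "A", 250)   # B at 100 m and C at 220 m qualify -> 2
```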
-
Well, for overlaying data with different resolutions there are two likely outcomes: first, the software itself will produce an error; second, the data will have to be resampled to the same pixel resolution. If you still want to perform the overlay, the data must follow the lowest resolution; for example, the survey data, which is by nature more detailed, has to be generalized to a coarser level of detail to match the SRTM data. IMHO
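A common way to do that generalization is block aggregation: average the fine-resolution cells that fall inside each coarse cell. A minimal numpy sketch, assuming a clean 3:1 ratio between the hypothetical cell sizes (e.g. 10 m survey data onto a 30 m grid):

```python
import numpy as np

# Hypothetical fine-resolution grid: 6x6 cells at 10 m.
fine = np.arange(36, dtype=float).reshape(6, 6)

factor = 3   # 10 m -> 30 m
coarse = fine.reshape(6 // factor, factor, 6 // factor, factor).mean(axis=(1, 3))
# coarse is now 2x2; each cell is the mean of a 3x3 block of fine cells,
# so the generalized data can be overlaid cell for cell on the 30 m grid.
```

For categorical data (land cover classes) you would use a majority rule instead of a mean, since averaging class codes is meaningless.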
-
Dear sargis, open a new drawing and run the command. After it reports that the DWG files have been merged together, press Z {Enter} E {Enter} (Zoom Extents), and you should see the merged drawing.
-
Oil palm tree counting is quite a simple application. You can simply use the following methods: 1. Ravine extraction (Honda Kyioshi): based on the fact that the top of the canopy is the brightest point, while the ravine (the valley between two trees) looks darker. Doing this, you will be able to locate the central point of each tree; counting is then simple work to do afterwards. 2. Local threshold texture matching: requires some programming. Take individual trees as samples (with different canopy sizes) and then match them over the whole image. The matching algorithm can be image correlation or histogram matching. I already did it with VB programming and it works perfectly. By doing this, you can also map individual trees. All of this is based on RGB imagery. For better results, mask the plantation areas before running the procedure. Of course there are several other options. If you want to operate the procedure in an automatic way, programming skills are needed.
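A minimal sketch of the ravine/brightest-point idea in method 1, in Python rather than the VB the poster used; the brightness grid and threshold are made up for illustration:

```python
import numpy as np

# Toy brightness grid: two bright canopy tops separated by a darker
# "ravine" between them.
img = np.array([
    [0.2, 0.3, 0.2, 0.2, 0.3, 0.2],
    [0.3, 0.9, 0.3, 0.3, 0.8, 0.3],
    [0.2, 0.3, 0.2, 0.2, 0.3, 0.2],
])

def count_canopy_tops(img, threshold):
    """Count pixels that are the maximum of their 3x3 neighborhood and
    brighter than `threshold`; each counts as one tree crown center."""
    h, w = img.shape
    tops = 0
    for i in range(h):
        for j in range(w):
            window = img[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            if img[i, j] >= threshold and img[i, j] == window.max():
                tops += 1
    return tops

n_trees = count_canopy_tops(img, threshold=0.5)   # 2 crowns detected
```

A real implementation would smooth the image first and adapt the window size to the canopy diameter, but the locate-then-count structure is the same.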
-
The French government has agreed to open its Spot optical Earth observation data archive and distribute, free of charge to noncommercial users, Spot satellite data that is at least five years old. The Jan. 23 announcement by the French space agency, CNES, followed a French government commitment made Jan. 17 during a meeting in Geneva of the 80 governments that comprise the Group on Earth Observations (GEO). CNES said its decision was made in concert with Airbus Defence and Space, formerly named Astrium Services, which since 2008 has been the majority shareholder in the company that commercializes Spot data. CNES said the move to open up access to Spot imagery, which dates from 1986, "is the first major contribution from the private sector to the construction of the Global Earth Observation System of Systems (GEOSS)." CNES has already begun processing, at its own charge, a first tranche of 100,000 images that will be available later this year.

Source: SpaceNews via http://mundogeo.com/en/blog/2014/01/24/france-to-make-older-spot-images-available-to-researchers-for-free/