About mamadouba

  1. What is your imagery source?
  2. The .MTL.txt is a text file and the ARD .xml is an XML file. You cannot just change the extension and expect the XML schema to be parsed like the text file. Harris has not developed a parser for this yet, but they will likely do so in the future, just as they did for reading the Sentinel-2 .xml. In the meantime, you can simply write your own IDL code to parse the XML file and pull out the metadata of interest. Those parameters can then be used in an ENVITask workflow chain. I had to do this when the Collection 1 MTL changed from the original and broke the ENVI parser. The GeoTIFF itself will open like any other image.
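As a sketch of that parsing step: the metadata pull can be prototyped with Python's standard library before porting the logic to IDL. The element names below are hypothetical — inspect your actual ARD .xml for the real tags and structure.

```python
import xml.etree.ElementTree as ET

# Hypothetical ARD-style metadata; check your real .xml for the actual
# element names and paths before reusing these.
sample = """<ard_metadata>
  <global_metadata>
    <satellite>LANDSAT_8</satellite>
    <sun_angles zenith="35.2" azimuth="151.7"/>
  </global_metadata>
</ard_metadata>"""

root = ET.fromstring(sample)
satellite = root.findtext("global_metadata/satellite")
sun = root.find("global_metadata/sun_angles")
sun_zenith = float(sun.get("zenith"))

# These values are what you would hand off to an ENVITask workflow chain.
print(satellite, sun_zenith)
```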
  3. It could be the correct approach, but it's basic or applied research that will take trial and error through the scientific process. There is no canned answer. There are plenty of studies that support NDVI, or other vegetation indices (both narrowband and broadband), serving as a proxy for crop health, vigor, LAI, etc. Red edge has also shown promise in detecting the subtleties between a green, healthy plant and a relatively green, unhealthy plant. The common denominator among these studies is field data; field data that have been collected in a meaningful and statistically rigorous manner so you can model the relationship between a vegetation index and the phenomenon of interest, in this case blight. Have you collected ground-truth data on plants with blight and plants without? Can this be discerned at the spatial resolution you are collecting at (5 meters overhead)? If you figure this out and can recommend optimal collection parameters and methods for using UAS to detect blight, then I highly suggest you publish your results.
  4. UAS or aerial thermal infrared calibration = field data collection. Period!
  5. Hi Georgeina, the data products I was referring to in my post relate only to Landsat ordered through the ESPA interface. ESPA provides the raw digital numbers (DN) along with a variety of higher-level scientific products such as surface reflectance (SR), spectral indices (e.g., NDVI), Fmask (cloud and shadow), etc. The surface reflectance product has undergone radiance conversion and atmospheric correction using a physics-based, first-principles radiative transfer model, namely 6S. LEDAPS is applied to Landsat 4, 5, and 7, and a similar algorithm (SR8) is applied to Landsat 8. The purpose of ESPA is to bypass all of the manual digital image processing that an analyst would otherwise have to undertake (geometric, radiometric, and atmospheric correction) so you can focus on the analysis. In other words, ESPA provides research-ready products. With regard to your work, are you just conducting an analysis, or do you want to learn these techniques yourself? It is good to know the fundamentals behind image pre-processing, and the best way to learn is to conduct the work yourself. Is Erdas your primary image processing software, or do you use ENVI or another package?
  6. Georgeina, what software do you have access to? Do you have access to a USGS EarthExplorer account? You can order your products in bulk using the USGS ESPA interface (https://espa.cr.usgs.gov/login). You can place an order of up to 5,000 scenes for L4/5, L7, and L8. The available products include surface reflectance and several different indices (e.g., NDVI, EVI, SAVI, etc.), as well as a cloud/shadow mask for masking NULL values. https://landsat.usgs.gov/sites/default/files/documents/espa_odi_userguide.pdf
  7. Highly turbid water can cause positive NDVI values and saturate pixels from illumination effects. Run the following formula and see if you can extract the water pixels (assuming NDVI was calculated properly): water = fix((ndvi LT 0.01) AND (toa5 LT 0.11)) OR fix((ndvi LT 0.1) AND (toa5 LT 0.05)), where LT = less than and toa = top-of-atmosphere reflectance (the number designation is the band number). This is the water test used in the Fmask algorithm by Zhe Zhu of Boston University. It performs very well, except where water is shallow or highly turbid. If this formula fails to discriminate the water pixels for the problem area, then your NDVI is probably correct and it's an image-based anomaly. Radiometric correction doesn't solve everything.
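A minimal NumPy sketch of that water test, with illustrative pixel values (the real inputs would be your NDVI and band-5 TOA reflectance arrays):

```python
import numpy as np

# Fmask-style water test (Zhu & Woodcock) expressed in NumPy instead of IDL.
# The arrays below are made-up example pixels, not real observations.
ndvi = np.array([-0.05, 0.05, 0.40])
toa5 = np.array([0.08, 0.04, 0.20])  # TOA reflectance, NIR band

water = ((ndvi < 0.01) & (toa5 < 0.11)) | ((ndvi < 0.1) & (toa5 < 0.05))
print(water)  # first two pixels pass the water test, the third does not
```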
  8. Do you have access to Amazon Web Services? Upload data to an S3 bucket, spin up an EC2 instance, and use AWS Batch with the software you were intending.
  9. zabdi3l, jake stated that the imagery was sourced from the EarthExplorer surface reflectance product; therefore, the data are already calibrated and atmospherically corrected to surface reflectance using LEDAPS (Landsat 4-7). There is no need to calibrate in ENVI or any other software. However, the data do have a scale factor of 0.0001, which must be taken into consideration and applied if you want to convert the data from 16-bit integer back to floating point.
  10. The surface reflectance products have a scale factor of 0.0001. For example, a value of 1000 is actually 0.10. Rescale the data first or rescale your formula coefficients, either way works.
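As a quick sketch of that rescaling (illustrative values only):

```python
import numpy as np

# ESPA surface reflectance is delivered as 16-bit integers with a 0.0001
# scale factor; multiplying recovers floating-point reflectance.
sr_int = np.array([1000, 2500, 7431], dtype=np.int16)
sr = sr_int.astype(np.float32) * 0.0001
print(sr)  # 1000 -> 0.1, 2500 -> 0.25, 7431 -> 0.7431
```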
  11. I couldn't agree more with rahmansunbeam, especially if you are applying image processing to civil-source remote sensing data. Google Earth Engine provides every function found in a standard remote sensing software package, plus a host of "shallow" machine learning algorithms for map classification and petabytes of data in its catalog. The only resources a user needs are a web browser and some knowledge of JavaScript, and the API documentation and examples are easy to follow. If you need to go further, and coding expertise and funding are available, you can fire up an EC2 instance on AWS. I run all of my deep learning and machine vision analyses on a multi-GPU EC2 instance, and the pricing is nominal.
  12. If you are referring to the simple "dehaze" options in Erdas, then you shouldn't expect perfection. Depending on the source and the particulate, some haze cannot be removed. If you are referring to full atmospheric correction to Level-2A, then you should be using Sen2Cor. http://step.esa.int/main/third-party-plugins-2/sen2cor/
  13. The distinction here is Digital Surface Model (DSM) vs. Digital Elevation Model (DEM). Think of the DEM as the bare-earth elevation and the DSM as the elevation of the surface including features such as trees and buildings. Taking the difference between these two models gives a normalized DSM (nDSM), i.e., feature heights above ground. You can create electro-optical (EO) point clouds from imagery, but you will not be able to do this with Sentinel or Landsat because only nadir data are provided. EO point clouds require multiple collects at multiple look angles to apply photogrammetric techniques for deriving heights. Only agile commercial sensors like WorldView, and UAS, provide this capability. Here's some information: http://www.harrisgeospatial.com/Home/NewsUpdates/TabId/170/ArtMID/735/ArticleID/14517/3D-Point-Cloud-Generation-from-Stereo-Imagery.aspx
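The DSM/DEM differencing can be sketched in a few lines (toy 2x2 grids here; real inputs would be co-registered rasters):

```python
import numpy as np

# nDSM = DSM - DEM: subtracting bare-earth elevation from the surface model
# leaves feature heights (trees, buildings) above ground. Toy values in meters.
dsm = np.array([[110.0, 125.0],
                [102.0, 100.0]])  # surface including features
dem = np.array([[100.0, 101.0],
                [100.0, 100.0]])  # bare-earth terrain

ndsm = dsm - dem
print(ndsm)  # a 10 m and a 24 m feature; ~bare ground elsewhere
```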
  14. Are you interested in yield monitoring or biomass monitoring? These are two separate metrics: the former refers to crop production and the latter to the entire vegetative structure. For the sake of this discussion, let's stay with biomass. 1. Absolute biomass: estimating this metric from remotely sensed data requires field data for modeling, period. 2. Relative biomass: simply an increase or decrease compared to a baseline. This can actually be done using NDVI because of the high positive correlation between NDVI and biomass, yield, LAI, etc. Let's assume you do not have field data to establish how strongly NDVI correlates with your particular area of interest, but there is enough literature to cite that will support using NDVI as a proxy or surrogate variable for biomass. I don't know what your monitoring period is, but you need to establish a robust baseline, which cannot be captured within the Sentinel-2 record alone. You will need to develop a time series dataset composed of Landsat and Sentinel-2. The simple workflow is to create a multitemporal NDVI time series over several years, develop an average NDVI for the peak growing period (i.e., the biomass metric), and compare current observations against it. Your resulting statistic will be the difference from the baseline, or relative biomass.
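A toy NumPy sketch of that baseline workflow (the NDVI values are illustrative, not real Landsat/Sentinel-2 observations):

```python
import numpy as np

# Relative-biomass sketch: average several years of peak-season NDVI into a
# baseline, then difference the current season against it.
peak_ndvi_by_year = np.array([
    [[0.70, 0.65], [0.60, 0.55]],   # year 1 peak-season NDVI (2x2 pixels)
    [[0.72, 0.63], [0.58, 0.57]],   # year 2
    [[0.68, 0.67], [0.62, 0.53]],   # year 3
])

baseline = peak_ndvi_by_year.mean(axis=0)         # long-term peak NDVI
current = np.array([[0.60, 0.66], [0.61, 0.40]])  # this season's peak NDVI
anomaly = current - baseline                      # relative biomass change

print(anomaly)  # negative values = biomass below the baseline
```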
  15. The FLAASH algorithm isn't the issue; it's the smoke. Smoke is an aerosol that has major attenuation effects at all optical wavelengths. There is no radiative transfer model out there, be it FLAASH, MODTRAN, 6S, etc., that will model at-surface reflectance if the sensor cannot see the reflecting surface at the specified wavelength. Furthermore, this is exacerbated by the narrow channels (lower reflected energy captured) on hyperspectral sensors.