Posts by anthonyw
  1. Thanks for the response. I have calculated LST three ways: (1) uncorrected at-satellite brightness temperature; (2) corrected temperature via the Atmospheric Correction Parameter Calculator and the radiative transfer equation; and (3) the Jiménez-Muñoz and Sobrino (2003) single-channel method. I estimated emissivity using the NDVI Thresholds method per Sobrino et al. (2004). For the Atmospheric Correction Parameter Calculator, I tried a number of different options to see how upwelling and downwelling radiance and atmospheric transmittance would be affected: with and without surface conditions, with and without interpolation, with summer and winter atmospheres, and with an altitude of 0 to 0.1 km (the weather station in my study area is on top of a tall building, at an elevation of about 300 ft). Overall, these changes had little impact on upwelling and downwelling radiance or atmospheric transmittance. I'm just confused as to why uncorrected at-satellite temperatures are so much hotter than weather station temperatures, especially in this area. I would expect the ground surface to be a bit hotter than the air, but in a muggy area with lots of water vapor, I would expect the thermal radiance sensed by the satellite to be much lower than that measured on the ground, since water vapor between the ground and the satellite should absorb radiance. I thought the "cooling" effect of water vapor would exceed the "warming" effect of LST vs. air temperature. I know Band 6 sits in an atmospheric window, but I still thought water vapor would absorb enough radiance to make uncorrected at-satellite temperatures appear cooler than weather station temperatures. I guess I was just wrong about that? One last question: when you have used the Jiménez-Muñoz and Sobrino method, how did you estimate total atmospheric water vapor content? Thanks, Anthony
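The NDVI Thresholds emissivity step mentioned in this post can be sketched roughly as follows. This is a minimal illustration, not the poster's actual code: the 0.2/0.5 thresholds and the simplified mixed-pixel relation follow the commonly cited form of Sobrino et al. (2004), while the bare-soil value of 0.97 is an assumed constant.

```python
def ndvi_thresholds_emissivity(ndvi):
    """Band 6 emissivity from NDVI via the thresholds approach (sketch).

    Assumptions: bare soil (NDVI < 0.2) -> 0.97 (assumed constant);
    full vegetation (NDVI > 0.5) -> 0.99; mixed pixels use fractional
    vegetation cover Pv = ((NDVI - 0.2) / (0.5 - 0.2))**2 and the
    simplified relation eps = 0.986 + 0.004 * Pv.
    """
    if ndvi < 0.2:   # bare soil
        return 0.97
    if ndvi > 0.5:   # fully vegetated
        return 0.99
    pv = ((ndvi - 0.2) / (0.5 - 0.2)) ** 2   # fractional vegetation cover
    return 0.986 + 0.004 * pv
```

In practice this would be applied per pixel to an NDVI raster; the scalar form above just makes the thresholds explicit.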
  2. Update: I was just reviewing some literature and noticed similar results (Yang and Wang, 2002; www.ltrs.uri.edu/research/LST_page/paper4.doc). Their uncorrected at-satellite temperatures were also significantly higher than station-based readings, and their research was conducted very close to my study area. Am I fundamentally misunderstanding something about comparing satellite-estimated LST with station measurements? My recollection from the literature is that multiple studies have done this and the errors are usually rather small, and I don't recall reading that anything special needs to be done when comparing satellite-estimated LST with station-measured temperature. I know satellites measure the actual ground surface temperature via its brightness temperature, and I know temperature stations measure air temperature rather than ground surface temperature, but based on the literature I've reviewed, I would have expected uncorrected temperatures to be a bit on the cool side because of atmospheric absorption between the land surface and the satellite. Moreover, I would have expected uncorrected temperatures to be closer than 7 degrees F! I don't know the exact lapse rate in this area, but I can't imagine it's high enough to account for this discrepancy!
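The lapse-rate point can be checked with quick back-of-envelope arithmetic: at the standard environmental lapse rate of about 6.5 °C per km (an assumed textbook value, not a measured local one), a station roughly 300 ft up would read only about 0.6 °C (~1 °F) cooler than the surface below it, nowhere near 7 °F.

```python
ELEV_KM = 300 * 0.0003048        # station elevation: 300 ft converted to km
LAPSE_C_PER_KM = 6.5             # standard environmental lapse rate (assumed)

delta_c = LAPSE_C_PER_KM * ELEV_KM   # expected cooling over that height, deg C
delta_f = delta_c * 9 / 5            # converted to Fahrenheit degrees

print(round(delta_f, 2))             # about 1 F, far short of the 7 F gap
```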
  3. Update: I found another station in my study area. The at-satellite temperature there is also about 7 degrees F higher than the station-measured temperature. I also just tried a Landsat image from a different date, and I'm seeing more or less the same problem: the at-satellite temperature is about 7 degrees F too high at both stations. I'm perplexed, and I'm quickly running out of ideas. Anyone have any ideas? Also, this is where I read that image acquisition times are in GMT: http://landsat.usgs.gov/acquisition_time_for_an_image.php
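Since Landsat acquisition times are listed in GMT, the station comparison has to convert the scene time to local time before picking the matching hourly reading. A minimal sketch of that conversion for a US East-coast site (the scene date and time here are made up for illustration, not from an actual metadata file):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical scene-center time as read from Landsat metadata (always GMT/UTC)
scene_utc = datetime(2010, 7, 15, 15, 32, tzinfo=timezone.utc)

# Convert to East-coast local time before matching the hourly station record;
# zoneinfo handles the EST/EDT daylight-saving offset automatically
scene_local = scene_utc.astimezone(ZoneInfo("America/New_York"))
print(scene_local.isoformat())   # mid-July is EDT, i.e. UTC-4
```

A one-hour mismatch from comparing against the wrong hourly reading could easily contribute a few degrees on a summer morning, so this is worth ruling out first.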
  4. Hello All, I am trying to estimate land surface temperature (LST) from Landsat 5 data. The study area is on the East coast of the US, and luckily it includes a temperature station with hourly data on the day the satellite passed. I've converted DNs to radiance and used an inversion of Planck's law to calculate brightness temperature; this is all pretty standard stuff. Here's my problem: the uncorrected at-satellite brightness temperature at the station location is about 7 degrees F higher than the temperature the station measured. My general understanding is that the East coast is rather muggy, so I would expect atmospheric correction to raise LST relative to the at-satellite brightness temperature, since there's probably a fair bit of moisture in the atmosphere absorbing radiance between the land surface and the satellite. The fact that the uncorrected at-satellite temperature is already higher than the measured temperature seems problematic. Indeed, using the Atmospheric Correction Parameter Calculator (http://atmcorr.gsfc.nasa.gov/) and a standard radiative transfer equation, my land surface radiance increases relative to the uncorrected at-satellite radiance, and the corrected temperature rises to nearly 12 degrees F above the station measurement. I've checked and rechecked my radiance and Planck inversion calculations multiple times, and I'm confident those equations are fine. I've tried calculating radiance from both the Grescale/Brescale and LMIN/LMAX equations, and I've even gone back to previous calibrations (e.g., Chander, 2003), but each of these approaches changes the temperature only slightly. Any ideas? I'm pretty much at a loss here. I would really appreciate any advice/suggestions. Thanks, Anthony ETA: Oh, I was wondering if this could be a time issue. My understanding is that Landsat image acquisition times are listed in GMT. Is that correct?
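The DN-to-radiance, Planck-inversion, and radiative-transfer steps described in this thread can be sketched as below. This is a minimal illustration, not the poster's code: the LMIN/LMAX values are the TM Band 6 calibration published in Chander (2003), K1/K2 are the published Landsat 5 TM thermal constants, and the τ, Lu, Ld, ε, and DN values in the usage example are made-up stand-ins for real ACPC output and image data.

```python
import math

# Landsat 5 TM Band 6 calibration (Chander, 2003), W / (m^2 sr um)
LMIN, LMAX = 1.238, 15.303
QCALMIN, QCALMAX = 1.0, 255.0

# Landsat 5 TM thermal conversion constants
K1 = 607.76   # W / (m^2 sr um)
K2 = 1260.56  # Kelvin

def dn_to_radiance(dn):
    """Scale a Band 6 DN to at-sensor spectral radiance (LMIN/LMAX form)."""
    return LMIN + (LMAX - LMIN) * (dn - QCALMIN) / (QCALMAX - QCALMIN)

def brightness_temperature(radiance):
    """Invert Planck's law (two-constant form) to brightness temp in Kelvin."""
    return K2 / math.log(K1 / radiance + 1.0)

def surface_radiance(l_sensor, tau, l_up, l_down, eps):
    """Standard single-channel radiative transfer correction,
    L_sensor = tau * (eps * L_T + (1 - eps) * L_down) + L_up,
    solved for the surface-leaving blackbody radiance L_T."""
    return (l_sensor - l_up - tau * (1.0 - eps) * l_down) / (tau * eps)

# Usage with illustrative numbers (DN and ACPC-style parameters are made up)
l_sat = dn_to_radiance(130)
bt = brightness_temperature(l_sat)   # uncorrected at-satellite BT
lst = brightness_temperature(
    surface_radiance(l_sat, tau=0.66, l_up=2.5, l_down=4.0, eps=0.97))
print(round(bt, 1), round(lst, 1))   # corrected LST comes out warmer than BT
```

Note that with typical humid-atmosphere parameters the correction raises the temperature, consistent with what the poster observed: transmittance losses outweigh the added upwelling path radiance, so inverting the RTE recovers a warmer surface than the at-satellite brightness temperature.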
