We have a few satellites that measure global temperature independently of ground stations.
My basic understanding is that they measure the intensity (radiance) of incoming (from Earth) radiation.
Then, with some maths, they derive a temperature from the received radiation spectrum.
Some satellites measure incoming infrared radiation, but the most widely used ones measure incoming microwave radiation.
My main question concerns the thermometers on board those satellites and their purpose.
Let's take for example the well-known UAH satellite temperature dataset. One of its developers, Roy Spencer, says the following:
> Contrary to some reports, the satellite measurements are not calibrated in any way with the global surface-based thermometer records of temperature. They instead use their own on-board precision redundant platinum resistance thermometers (PRTs) calibrated to a laboratory reference standard before launch.
Platinum resistance thermometers are simply a more accurate and stable kind of thermometer than mercury-based ones.
He says the satellite has redundant (two) on-board thermometers. Why, and at which step of the temperature-retrieval pipeline, are those thermometers used?
The satellites are in near-Earth space (very cold), and the face exposed to the Sun can get very hot. Either way (space cold or solar heat), this seems highly unrelated to measuring Earth's surface temperature.
If so, then what is the purpose of those on-board thermometers? How do they relate to the spectroscopic measurement of the incoming radiation?
Here is my tentative understanding:
I was able to find this UAH paper, which says:
> the spacecraft experience a slight E-W drift during the course of their operational life.
> the (microwave) instrument is exposed to more direct sunlight and warms as a result. This warming is evidenced in the on-board platinum resistance thermometer which is embedded in the hot-target plate. Christy et al. 2000 discovered the dependency of variations in the hot-target plate temperature (HTPT) and errors in the atmospheric temperature reported by the instrument. The instrument error was eliminated by determining a linear coefficient which when multiplied by the HTPT anomaly would account for much of the error.
from https://ams.confex.com/ams/pdfpapers/68728.pdf
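If I understand the quoted description correctly, the correction might look something like the following minimal sketch. The function and variable names are mine, not from the paper, and the real UAH processing is certainly more involved:

```python
# Hypothetical sketch of the HTPT-based bias correction described in the
# quote: subtract a fitted linear coefficient times the hot-target plate
# temperature (HTPT) anomaly from the reported brightness temperature.
def correct_brightness_temp(tb_raw, htpt, htpt_reference, alpha):
    """Remove instrument-body warming bias from a brightness temperature.

    tb_raw         : brightness temperature from the radiometer (K)
    htpt           : current hot-target plate temperature (K)
    htpt_reference : long-term mean HTPT, so (htpt - htpt_reference)
                     is the HTPT anomaly (K)
    alpha          : empirically fitted linear coefficient (dimensionless)
    """
    htpt_anomaly = htpt - htpt_reference
    return tb_raw - alpha * htpt_anomaly

# Example: with alpha = 0.05, a 2 K warm excursion of the target plate
# removes 0.1 K of spurious warming from the reported temperature.
tb = correct_brightness_temp(tb_raw=251.1, htpt=285.0, htpt_reference=283.0,
                             alpha=0.05)
# tb == 251.0
```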
The paper they cite is most likely:
https://journals.ametsoc.org/view/journals/atot/17/9/1520-0426_2000_017_1153_mttdca_2_0_co_2.xml?tab_body=abstract-display
> As mentioned in Christy et al. (1998), we discovered a spurious influence on the calculation of Tb due to the temperature of the instrument itself
According to this, the two thermometers, which are embedded in the sun-exposed hot-target plate near the radiometer, are used to algorithmically correct the bias/drift effect that the satellite's own temperature variation has on the accuracy of the radiometer that measures the microwaves.
Besides this correction, which was added in the 2000s, my question is: did the two thermometers have, and do they still have, another role in the Earth-temperature calculation? Does the radiometer need them independently of the aforementioned drift correction?
> The determination of earth-viewed Tb from the observed digital counts is based on an interpolation scheme between two temperature anchor points: cold space and the onboard warm target plate T_W. The MSU reports the intensity of microwave radiation as digital counts for the 11 earth views and for cold space and the warm target. The temperature for cold space is known (2.7 K) and that of the warm target is monitored by the two PRTs. Thus a relationship is then computed between counts and Tb given the digital counts and temperatures of the anchor points (Spencer et al. 1990).
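Taking the quote at face value, the two-point calibration can be sketched like this (I am assuming the simplest linear form; the operational MSU calibration reportedly also includes a nonlinear term, and the variable names here are mine):

```python
# Sketch of the MSU two-point calibration: earth-view digital counts are
# interpolated between two anchor points whose temperatures are known --
# cold space (2.7 K) and the PRT-monitored warm target.
T_COLD = 2.7  # cosmic background seen in the cold-space view (K)

def counts_to_tb(earth_counts, cold_counts, warm_counts, warm_target_temp):
    """Convert earth-view digital counts to a brightness temperature Tb
    by linear interpolation between the cold-space and warm-target anchors."""
    slope = (warm_target_temp - T_COLD) / (warm_counts - cold_counts)
    return T_COLD + slope * (earth_counts - cold_counts)

# Example: earth counts exactly halfway between the anchors map to the
# midpoint between 2.7 K and the warm-target temperature.
tb = counts_to_tb(earth_counts=1500, cold_counts=1000, warm_counts=2000,
                  warm_target_temp=282.7)
# tb == 142.7
```

This is why the radiometer cannot work alone: without the PRT reading of the warm target, one of the two anchor points of the interpolation is missing.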
If "earth-viewed Tb" means the brightness temperature calculated for Earth, then it seems the radiometer alone cannot work and needs to interpolate between the cold-space view and the on-board warm target.
This seems to be different from the aforementioned bias-drift correction, but the difference is unclear to me. The argument that this role of the thermometers is distinct is that it was described in 1990, before the drift was known (it seems, but I am not sure).
Note that they mention the error corrections can be nonlinear.
> In general, as the instrument heats (T_W increases) the calculated earth-viewed temperature based on laboratory nonlinear calibration coefficients becomes too warm.
Meaning that the warmer the instrument is, the warmer (nonlinearly so) the calculated earth-viewed temperature becomes.
Does anyone know what the "cold target counts" terminology actually means with regard to the radiometer?
Note that the idea of using co-orbiting satellites' thermometer data for increased redundancy seems promising.
Anyway, here is my observation; the individual pieces are in fact known, but it seems nobody has connected the dots before.
Christy discovered that the temperature variation of the radiometer affects the Earth-temperature calculation (nonlinearly).
They also found that similar past satellites had anomalous drifts in their temperature records because (among other things, like orbital decay, instrument gain, and diurnal bias) they were not correcting for this newly discovered bias.
HOWEVER, it seems no one has considered that there may be a meta-drift, a drift on top of the drift: the two thermometers used to correct the radiometer bias themselves have an intrinsic temperature drift over time, which would appear as an artificial warming effect.
It is known that liquid thermometers (mercury, alcohol) can artificially gain ~0.7 degrees per century.
For platinum thermometers, the reported temperature drift is between 2 and 10 millikelvin per year, hence up to ~1 degree of apparent warming per century...
But in an environment like space (extreme cold, extreme heat when sun-aligned, and microgravity, which could affect thermal expansion and hence accuracy), the effect could be bigger or faster, especially under solar radiation.
source: https://ui.adsabs.harvard.edu/abs/2015SPIE.9258E..1AD/abstract
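Back-of-the-envelope arithmetic for what that drift rate would accumulate to over the satellite record, assuming (optimistically) a constant rate; note the cited paper measures drift under laboratory conditions, not in orbit:

```python
# Accumulate the quoted 2-10 mK/yr PRT drift over the length of the
# satellite record (1979 to ~2025), assuming a constant drift rate.
def accumulated_drift_K(rate_mK_per_year, years):
    """Total accumulated drift in kelvin for a given rate in mK/yr."""
    return rate_mK_per_year * years / 1000.0

years = 2025 - 1979  # ~46 years of record
low = accumulated_drift_K(2, years)    # 0.092 K at the low end
high = accumulated_drift_K(10, years)  # 0.46 K at the high end
```

So even under lab conditions, the quoted rates would amount to roughly 0.1 to 0.5 K over the record so far, if nothing recalibrates them.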
Of course, even if the thermometers' readings were to drift upward by 1 degree (or more via accelerated space aging), the effect would not directly translate into a 1-degree artificial increase of the calculated Earth temperature.
According to Christy's paper, the average monthly satellite temperature (presumably the hot-target plate) was about 280 K at the time, so a 1-degree drift would shift it to an artificial 281 K.
There appears to have been a drift of 2 degrees between 1979 and 1997 for a similar satellite (see their figure 5), although the data are not clean at the end; also, it might be natural space temperature variance (or orbital decay/change), no idea, instead of accelerated thermometer aging.
For NOAA-6 (assuming it is representative...), the monthly mean hot-plate variance was about 4 degrees, so a thermal drift of 1-2 degrees could affect the monthly variance by 25 to 50%, which is huge; but it is unclear to me how sensitive the algorithm is to the variance versus the absolute temperature. Also, the daily variance is probably much larger.
The actual effect on the calculated Earth temperature could be computed from figure 6, but sadly I am too much of a noob to do the maths. If anyone could estimate how much an artificial 1-degree gain in the thermometer reading would change the calculated Earth temperature, that would be great.
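Here is my own first-order attempt, derived only from the linear two-point calibration quoted earlier (so it ignores the nonlinear calibration term and the HTPT anomaly correction, and is at best a rough bound): if the PRT reading of the warm target is too high by delta while the physical scene is unchanged, the derived Tb shifts by delta times the fraction (Tb - 2.7)/(T_W - 2.7), i.e. by how far the earth-view counts sit between the two anchors.

```python
# First-order sensitivity of the derived brightness temperature Tb to an
# error (drift) in the warm-target PRT reading, under the linear two-point
# calibration Tb = T_C + (T_W - T_C) * (C_E - C_C)/(C_W - C_C).
# A PRT error delta shifts Tb by delta * (C_E - C_C)/(C_W - C_C),
# and that counts fraction equals (Tb - T_C)/(T_W - T_C).
T_COLD = 2.7  # cold-space anchor (K)

def tb_shift_from_prt_drift(tb, warm_target_temp, prt_drift):
    """Approximate shift in derived Tb caused by a PRT reading error."""
    fraction = (tb - T_COLD) / (warm_target_temp - T_COLD)
    return fraction * prt_drift

# For an earth-view Tb of ~250 K against a ~283 K warm target, a 1 K PRT
# drift would shift the derived temperature by roughly 0.88 K.
shift = tb_shift_from_prt_drift(tb=250.0, warm_target_temp=283.0, prt_drift=1.0)
```

If this sketch is right, a warm-target thermometer drift would pass through to the derived temperature almost one-for-one, before any of the later corrections; I would welcome a correction from someone who knows the actual processing.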
For reference, when the bias was not corrected, they observed ~0.8 degrees of bias in the calculated Earth-temperature variance.
Note that this could have two effects since, according to my partial understanding, the thermometers are used twice: 1) for the radiometer to work at all (related to the "counts"), the difference between cold space (fixed temperature) and the hot plate is needed; and 2) for correcting the nonlinear bias from the radiometer's own temperature. 1 and 2 might be the same thing, but IMO they are not.
Note that, independently of thermometer drift and temperature effects on the radiometer, the radiometer could itself have an age-related drift independent of temperature; cf. the instrument-gain drift in the aforementioned paper.
While the magnitude of the thermometer drift's effect on the Earth temperature is unknown (to me) but calculable, it could be significant and bias the satellite temperature record to some extent, and it should worsen in the coming decades, especially if the aging is nonlinear. Also, there will be considerably more cosmic-ray radiation during the incoming Maunder-like minimum (maybe around 2030), which is predicted to reduce (temporarily) the Sun's magnetic shielding by something like 40%, IIRC.
Most importantly, this bias can be corrected by analyzing more recent satellites co-orbiting with the UAH (or RSS) ones; e.g., if a SpaceX satellite carried a thermometer and flew nearby, its data could be cross-correlated and used to empirically assess how much the UAH thermometers have drifted.