jimboston wrote: The fact that many credible scientists dispute global warming... the fact that global warming proponents have lied and falsified data... these two things are enough to say that "global warming" cannot be proved.
Just a quick comment on this. I presume you're referring specifically to the University of East Anglia emails that a hacker got hold of and released to the public. Although the story made quite a news splash, given that there were a handful of quotes that seemed pretty damning (at least as presented by the news media), scientific review committees later cleared the researchers of falsifying data or doing anything improper with the data sets.
There are standard practices in the scientific community that are completely valid but might sound bad to those outside the community when details are given out of context. As an example, consider the two charts at the end of this post. Both show the US Federal outlays as a percentage of GDP. The only difference is the scaling of the vertical axis. Neither chart is wrong, though the first does make it more difficult to identify a trend.
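Just to make the scaling point concrete, here's a rough sketch in Python with matplotlib. The numbers are invented placeholders, not the actual outlay figures; the only thing that differs between the two panels is the vertical axis range.

```python
import matplotlib.pyplot as plt

# Invented spending-as-percent-of-GDP values, purely for illustration.
years = list(range(2000, 2011))
outlays = [18.2, 18.4, 19.1, 19.7, 19.6, 19.9, 20.1, 19.7, 20.8, 24.4, 23.8]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Panel 1: axis runs 0-100%, so the series looks nearly flat.
ax1.plot(years, outlays)
ax1.set_ylim(0, 100)
ax1.set_ylabel("Outlays (% of GDP)")
ax1.set_title("Axis 0-100%: trend hard to see")

# Panel 2: axis hugs the data, so the very same numbers show a clear rise.
ax2.plot(years, outlays)
ax2.set_ylim(15, 26)
ax2.set_ylabel("Outlays (% of GDP)")
ax2.set_title("Axis 15-26%: same data, obvious trend")

plt.tight_layout()
plt.show()
```

Neither panel lies about the numbers; they just lead the eye to different conclusions.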
Another aspect, one much more relevant to "Climategate," relates to the interpretation and scaling of a data set. Satellites do not measure actual temperatures. I don't know enough about the specifics, though I suspect they record either atmospheric reflectivity or the intensity of certain infrared wavelengths. Even then, those measurements are recorded as relative intensity levels on a photodetector, and the sensors have to be properly calibrated to yield any useful data.

A simpler example is the thermistor, which is widely used as a temperature sensor. A thermistor doesn't provide a direct measurement of temperature; instead, the resistance across the device changes as the temperature changes. Over a small range of temperatures the relationship is nearly linear, so for applications that don't need high precision, you can pretty much just measure the resistance at one high and one low reference temperature, and you'll have a good idea of the actual temperature for any resistance value between (or even somewhat outside) those two points. But suppose you need very precise measurements over a large temperature range. Real thermistors have an intrinsic nonlinearity, and that nonlinearity is not necessarily the same for different types of thermistors. So the best way to calibrate one is to carefully measure its resistance at a large number of precisely known temperatures and build a lookup table for future use.

Taking this back to the satellite situation: we don't have another way to accurately record the temperatures of the various atmospheric layers in order to calibrate the satellites. That's what we want the satellites for, and if we already knew the temperatures, we wouldn't need the satellites in the first place! So we have to model the data, draw on a careful understanding of the physics of our detectors, and build up a case for a proper instrument calibration. Once we start recording data, we compare it to expectations, to our general understanding of the particular situation, and to its repeatability (how well it agrees with what other people are measuring). If our values are radically different in any of those respects, we may have to go back and double-check our models of the instrument behavior, and in some cases we may have to change the calibration settings if we come across an error. That is especially true if our instruments, combined with our existing calibration model, are reporting something that is demonstrably wrong.
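To put the thermistor example in code, here's a minimal sketch. The calibration numbers are invented for illustration, and the lookup-table version is just linear interpolation between measured calibration points:

```python
import numpy as np

# --- Two-point "good enough" calibration --------------------------------
# Measure resistance at one low and one high reference temperature, then
# assume a straight line in between.  (Values are invented.)
T_LO, R_LO = 0.0, 32650.0    # ohms at 0 degC
T_HI, R_HI = 50.0, 3602.0    # ohms at 50 degC

def temp_two_point(resistance_ohms):
    """Linear interpolation between just two calibration points."""
    frac = (resistance_ohms - R_LO) / (R_HI - R_LO)
    return T_LO + frac * (T_HI - T_LO)

# --- Lookup-table calibration --------------------------------------------
# Carefully measure resistance at many precisely known temperatures and
# interpolate within that table.  (Again, invented calibration data.)
cal_temps = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
cal_ohms  = np.array([32650.0, 19900.0, 12490.0, 8057.0, 5327.0, 3602.0])

def temp_lookup(resistance_ohms):
    """Interpolate in the table (np.interp wants ascending x values)."""
    return np.interp(resistance_ohms, cal_ohms[::-1], cal_temps[::-1])

# The two methods agree at the endpoints but can differ by several degrees
# in the middle, because a real thermistor is nonlinear.
for r in (20000.0, 12000.0, 6000.0):
    print(r, round(temp_two_point(r), 1), round(temp_lookup(r), 1))
```

And in the same spirit for the satellite case: whatever quantity the instrument actually records, turning a calibrated intensity into a temperature goes through a physical model. As a generic radiometry illustration (not a claim about how any particular climate satellite works), a measured spectral radiance can be converted to a "brightness temperature" by inverting Planck's law:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def brightness_temperature(radiance, freq_hz):
    """Temperature of a blackbody that would emit the given spectral
    radiance (W m^-2 sr^-1 Hz^-1) at the given frequency."""
    return (H * freq_hz / K_B) / math.log(1.0 + 2.0 * H * freq_hz**3 / (C**2 * radiance))

# Sanity check: feed in the Planck radiance of a 250 K blackbody at an
# arbitrary 57 GHz and we should get ~250 K back.
nu = 57.0e9
b_nu = 2.0 * H * nu**3 / (C**2 * (math.exp(H * nu / (K_B * 250.0)) - 1.0))
print(brightness_temperature(b_nu, nu))   # ~250.0
```

The point is the same as with the thermistor: the instrument reports an intensity, and a temperature only exists after you commit to a model of the measurement, which is exactly where the calibration choices live.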
I don't recall the exact situation with the East Anglia researchers (I don't think it was satellite data, but I don't remember for sure). But again, they have been exonerated by review boards. And I wouldn't be surprised if, were you to dig through my emails from the past ten years, you found a few sentences that, taken out of context, might sound like I was intentionally misrepresenting some data set, when nothing could be further from the truth.
