by Dr. Roy Spencer, August 6, 2019 in GlobalWarming
“Reading, we have a problem.”
As a follow-up to my post about whether July 2019 was the warmest July on record (globally-averaged), I’ve been comparing reanalysis datasets since 1979. It appears that the ERA5 reanalysis upon which WMO record temperature pronouncements are made might have a problem, with spurious warmth in recent years.
Here’s a comparison of the global-average surface air temperature variations from three reanalysis datasets: ERA5 (ECMWF), CFSv2 (NOAA/NCEP), and MERRA (NASA/GSFC). Note that only CFSv2 covers the full period, January 1979 to July 2019:
ERA5 has a substantially warmer trend than the other two. By differencing ERA5 with the other datasets we can see that there are some systematic changes that occur in ERA5, especially around 2009-2010, as well as after 1998:
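The differencing described above is straightforward to sketch. The snippet below uses synthetic monthly anomaly series (the values and trends are illustrative, not the actual ERA5 or CFSv2 data): a systematic change between two datasets shows up as a step or drift in their difference series, which can be checked by comparing the mean difference before and after a candidate breakpoint.

```python
import numpy as np

# Hypothetical monthly anomaly series for two reanalyses (deg C).
# Synthetic data: real values would come from the ERA5 and CFSv2 archives.
rng = np.random.default_rng(0)
months = 120
era5 = rng.normal(0.0, 0.1, months) + np.linspace(0.0, 0.5, months)
cfsv2 = rng.normal(0.0, 0.1, months) + np.linspace(0.0, 0.2, months)

# Systematic shifts between the datasets appear as steps/drifts here.
diff = era5 - cfsv2

# Compare the mean difference before and after a candidate breakpoint month.
bp = 60
print(round(diff[:bp].mean(), 3), round(diff[bp:].mean(), 3))
```

A real analysis would of course test many candidate breakpoints and use a proper changepoint statistic rather than a single eyeballed split.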
by Anthony Watts, May 9, 2019 in ClimateChangeDispatch
That’s an indication of the personal bias of co-author Schmidt, who in the past has repeatedly maligned the UAH dataset and its authors because their findings didn’t agree with his own GISTEMP dataset.
In fact, Schmidt’s bias was so strong that when invited to appear on national television to discuss warming trends, in a fit of spite, he refused to appear at the same time as the co-author of the UAH dataset, Dr. Roy Spencer.
A breakdown of several climate datasets, appearing below in degrees centigrade per decade, indicates there are significant discrepancies in estimated climate trends:
- AIRS: +0.24 (from the 2019 Susskind et al. study)
- GISTEMP: +0.22
- ECMWF: +0.20
- RSS LT: +0.20
- Cowtan & Way: +0.19
- UAH LT: +0.18
- HadCRUT4: +0.17
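The numbers above are linear trends fitted to monthly anomaly series. As a minimal sketch of how such a figure is computed (synthetic noise-free data; real series would come from the datasets listed), an ordinary least-squares slope against decimal years, scaled to degrees per decade:

```python
import numpy as np

def trend_per_decade(anoms, start_year=1979):
    """Least-squares linear trend of a monthly anomaly series, in deg C/decade."""
    t = start_year + np.arange(len(anoms)) / 12.0  # decimal years
    slope_per_year = np.polyfit(t, anoms, 1)[0]
    return 10.0 * slope_per_year

# Synthetic series warming at exactly +0.18 deg C/decade (UAH-LT-like), no noise.
years = 40
series = 0.018 * np.arange(years * 12) / 12.0
print(round(trend_per_decade(series), 3))  # 0.18
```

On real, noisy data the fitted slope also carries an uncertainty that depends on autocorrelation in the residuals, which is part of why the datasets above disagree.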
Which climate dataset is the right one? Interestingly, the HadCRUT4 dataset, which is managed by a team in the United Kingdom, uses most of the same data GISTEMP uses from the National Oceanic and Atmospheric Administration’s Global Historical Climate Network.
by Dr Roy Spencer, April 23, 2019 in GlobalWarming
This post has two related parts. The first has to do with the recently published study of AIRS satellite-based surface skin temperature trends. The second is our response to a rather nasty Twitter comment maligning our UAH global temperature dataset that was a response to that study.
Furthermore, that period (January 2003 through December 2017) shows significant warming even in our UAH lower tropospheric temperature (LT) data, with a trend 0.01 warmer than the “gold standard” HadCRUT4 surface temperature dataset (all deg. C/decade):
Cowtan & Way: +0.19
UAH LT: +0.18
I’m pretty sure the Susskind et al. paper was meant to prop up Gavin Schmidt’s GISTEMP dataset, which generally shows greater warming trends than the HadCRUT4 dataset that the IPCC tends to favor. It remains to be seen whether the AIRS skin temperature dataset, with its “clear sky bias”, will be accepted as a way to monitor global temperature trends into the future.
What Satellite Dataset Should We Believe?
by Ross McKitrick, March 1, 2019 in WUWT
Ben Santer et al. have a new paper out in Nature Climate Change arguing that with 40 years of satellite data available they can detect the anthropogenic influence in the mid-troposphere at a 5-sigma level of confidence. This, they point out, is the “gold standard” of proof in particle physics, even invoking for comparison the Higgs boson discovery in their Supplementary information.
The fact that in my example the t-statistic on anthro falls to a low level does not “prove” that anthropogenic forcing has no effect on tropospheric temperatures. It does show that in the framework of my model the effects are not statistically significant. If you think the model is correctly specified and the data set is appropriate you will have reason to accept the result, at least provisionally. If you have reason to doubt the correctness of the specification then you are not obliged to accept the result.
This is the nature of evidence from statistical modeling: it is contingent on the specification and assumptions. In my view the second regression is a more valid specification than the first one, so faced with a choice between the two, the second set of results is more valid. But there may be other, more valid specifications that yield different results.
In the same way, since I have reason to doubt the validity of the Santer et al. model I don’t accept their conclusions. They haven’t shown what they say they showed. In particular they have not identified a unique anthropogenic fingerprint, or provided a credible control for natural variability over the sample period. Nor have they justified the use of Gaussian p-values. Their claim to have attained a “gold standard” of proof is unwarranted, in part because statistical modeling can never do that, and in part because of the specific problems in their model.
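The specification-sensitivity point can be illustrated generically. The sketch below is not Santer et al.’s or McKitrick’s actual model; it is a toy example with synthetic data showing how the t-statistic on a trend regressor ("anthro") can change when a persistent natural-variability regressor is added to the specification:

```python
import numpy as np

def ols_tstats(X, y):
    """OLS coefficients and t-statistics (classical standard errors)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    sigma2 = resid @ resid / (n - k)
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, beta / se

# Synthetic illustration: temperature driven by a persistent natural mode
# (a random walk) plus noise; "anthro" is a simple linear ramp.
rng = np.random.default_rng(1)
n = 480
anthro = np.linspace(0.0, 1.0, n)
natural = np.cumsum(rng.normal(0.0, 0.05, n))  # persistent natural variability
y = 0.3 * natural + rng.normal(0.0, 0.1, n)    # no true anthro effect here

X1 = np.column_stack([np.ones(n), anthro])            # spec 1: trend only
X2 = np.column_stack([np.ones(n), anthro, natural])   # spec 2: control for natural mode
_, t1 = ols_tstats(X1, y)
_, t2 = ols_tstats(X2, y)
print(round(t1[1], 1), round(t2[1], 1))  # anthro t-stat under each specification
```

Because a random walk is typically correlated with a linear trend over a finite sample, the trend-only specification tends to attribute the natural drift to "anthro"; controlling for the natural mode changes the verdict. Which specification is "right" is exactly the contingency the text describes.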
by JC Maurin, February 22, 2019 in ScienceClimatEnergie
To construct the temperature indicators, MSU, AMSU, or ATMS radiometers carried aboard satellites are used, and the indicator is then built from the measurements together with various corrections. The result is an indicator covering nearly the entire globe, unlike the surface-based indicators, which (before 1980) relied essentially on a few thousand American and European stations. On the subject of satellite measurements, a physicist, without being a specialist in the field, can nevertheless offer a few points of appraisal that a reader interested in climatology may not know. The aim of the second part of this article will be achieved if that reader has learned something new; he can then explore the question further on his own.
by J.C. Maurin, February 8, 2019 in ScienceClimatEnergie
Starting from the intuitive notions of heat and temperature, physicists (Carnot, Thomson, Clausius, Maxwell, Boltzmann) gradually arrived at the scientific notion of thermodynamic temperature. In 1927 the General Conference on Weights and Measures adopted the thermodynamic scale proposed in 1911, and in 1954 the unit kelvin.
The notion of thermodynamic temperature requires that thermal equilibrium be reached, which is not the case in the Earth’s atmosphere. There is no such thing as a “thermodynamic temperature of the atmosphere.” Instead, one uses an “average of temperatures” measured at various points in the atmosphere. But since thermodynamic temperature is an intensive quantity, an average, however it is constructed, can serve only as an indicator. Convention nevertheless expresses such indicators in kelvin, and their variations are preferably given in relative form. The indicator depends on the (spatial and temporal) sampling of the measurements and, above all, on how it is constructed.
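That dependence on sampling and construction can be made concrete. Below is a minimal sketch of one common way such an indicator is built, on a toy latitude-longitude grid with cosine-of-latitude area weighting (the grid values and band layout are illustrative assumptions):

```python
import numpy as np

def global_indicator(grid, lats_deg):
    """Area-weighted (cosine-of-latitude) mean of a lat x lon anomaly grid.

    The result is an indicator, not a thermodynamic temperature: it depends
    entirely on the sampling and on the weighting scheme chosen.
    """
    w = np.cos(np.radians(lats_deg))  # grid-cell area scales with cos(latitude)
    zonal = grid.mean(axis=1)         # average over longitudes first
    return float(np.sum(w * zonal) / np.sum(w))

# Toy 3-band grid: tropics show a larger anomaly than the 60-degree bands.
lats = np.array([-60.0, 0.0, 60.0])
grid = np.array([[0.2] * 4, [0.8] * 4, [0.2] * 4])
print(round(global_indicator(grid, lats), 3))  # 0.5
```

Change the weighting, the grid, or the set of sampled points and the number changes, which is precisely why the text calls it an indicator rather than a temperature.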
by Bob Tisdale, December 8, 2018 in WUWT
In this post, we’re going to present monthly TMIN and TMAX Near-Land Surface Air Temperature data for the Northern and Southern Hemispheres (not in anomaly form) in an effort to add a little perspective to global warming. And at the end of this post, I’m asking for your assistance in preparing a post especially for you, the visitors to this wonderful blog WattsUpWithThat.
INTRODUCTION FOR THE “GLOBAL WARMING IN PERSPECTIVE” SERIES
A small group of international unelected bureaucrats who serve the United Nations now wants to limit the rise of global land+ocean surface temperatures to no more than 1.5 deg C from pre-industrial times…even though we’ve already seen about 1.0 deg C of global warming since then. So we’re going to put that 1.0 deg C change in global surface temperatures in perspective by examining the ranges of surface temperatures “we’ve been used to” on our lovely shared home Earth.
The source of the quote in the title of this post is Gavin Schmidt, who is the Director of the NASA GISS (Goddard Institute for Space Studies). It is from a 2014 post at the blog RealClimate, and, specifically, that quote comes from the post Absolute temperatures and relative anomalies (Archived here.). The topic of discussion for that post at RealClimate was the wide span of absolute global mean temperatures [GMT, in the following quote] found in climate models. Gavin wrote (my boldface):
by M.D., December 3, 2018 in MythesManciesMathématiques
What did we know in December 2015, and what do we know in December 2018?
Global temperatures since 1979 from three sources (1979 being the first year of the satellite record).
by Mark Fife, November 30, 2018 in WUWT
We have looked at high-quality, long-term records from three different regions. Two of these are on opposite sides of the North Atlantic, one is in the South Pacific. The two regions bordered by the North Atlantic are similar, but not identical. The record from Australia is only similar in that temperature has varied over time and has warmed in the recent past.
In all three regions there is no evidence of any strong correlation to CO2. There is ample evidence to support a conjecture of little to no influence.
There is ample evidence, widely shown in other studies, of localized influence due to development and population growth. The CET record has a correlation of temperature to CO2 of 0.54, which is the highest correlation of any individual record in this study. This area is also the most highly developed. While this does not constitute proof, it does tend to support the supposition the weak CO2 signal is enhanced by a coincidence between rising CO2 and rising development and population.
The efficacy of combining US records with those from Greenland, Iceland, and the UK may be subject to opinion. However, there is little doubt that combining records from Australia would create an extremely misleading record, like averaging a sine curve and a cosine curve.
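The sine/cosine analogy is easy to make concrete. A minimal sketch: averaging two out-of-phase cycles yields a curve whose amplitude and phase match neither constituent region.

```python
import numpy as np

# Two "regions" with out-of-phase cycles: their average is a sinusoid
# that resembles neither one.
t = np.linspace(0.0, 2.0 * np.pi, 361)
region_a = np.sin(t)
region_b = np.cos(t)
combined = (region_a + region_b) / 2.0

# (sin t + cos t)/2 = (sqrt(2)/2) * sin(t + pi/4): the average peaks at a
# shifted phase, with amplitude ~0.707 instead of either region's 1.0.
print(round(combined.max(), 3))  # 0.707
```

The averaged curve understates the swings of both regions and peaks at a time when neither region does, which is the sense in which such a combined record misleads.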
It appears the GISS data set does a poor job of estimating the history of temperature in all three regions. It shows a near perfect correlation to CO2 levels which is simply not reflected in any of the individual or regional records. There are probably numerous reasons for this. I would conjecture the reasons would include the influence of short-term temperature record bias, development and population growth bias, and data estimation bias. However, a major source of error could be attributed to the simple mistake of averaging regions where the records simply are too dissimilar for an average to yield useful information.
by J.R. Christy et al., March 8, 2018 in InternJournRemoteSensing
The Intergovernmental Panel on Climate Change Assessment Report 5 (IPCC AR5, 2013) discussed bulk atmospheric temperatures as indicators of climate variability and change. We examine four satellite datasets producing bulk tropospheric temperatures, based on microwave sounding units (MSUs), all updated since IPCC AR5. All datasets produce high correlations of anomalies versus independent observations from radiosondes (balloons), but differ somewhat in the metric of most interest, the linear trend beginning in 1979. The trend is an indicator of the response of the climate system to rising greenhouse gas concentrations and other forcings, and so is critical to understanding the climate. The satellite results indicate a range of near-global (+0.07 to +0.13°C decade−1) and tropical (+0.08 to +0.17°C decade−1) trends (1979–2016), and suggestions are presented to account for these differences. We show evidence that MSUs on National Oceanic and Atmospheric Administration’s satellites (NOAA-12 and −14, 1990–2001+) contain spurious warming, especially noticeable in three of the four satellite datasets.
Comparisons with radiosonde datasets independently adjusted for inhomogeneities, and with reanalyses, suggest the actual tropical (20°S-20°N) trend is +0.10 ± 0.03°C decade−1. This tropical result is over a factor of two less than the trend projected from the average of the IPCC climate model simulations for this same period (+0.27°C decade−1).
by Renee Hannon, October 29, 2018 in WUWT
This post is a coarse screening assessment of HadCRUT4 global temperature anomalies to determine the impact, if any, of data quality and data coverage. There has been much discussion on WUWT about the quality of the Hadley temperature anomaly dataset since McLean’s Audit of the HadCRUT4 Global Temperature publication, which is paywalled. I purchased a copy to see what all the hubbub was about, and it is well worth the $8 in my view. Anthony Watts’ review of McLean’s findings and executive summary can be found here.
A key chart for critical study is McLean’s Figure 4.11 in his report. McLean suggests that HadCRUT4 data prior to 1950 is unreliable due to inadequate global coverage and high month-to-month temperature variability. For this post, I subdivided McLean’s findings into three groups, shown with added shading:
- Good data: post-1950. During this period global data coverage is excellent at greater than 75%, and month-to-month temperature variation is low.
- Questionable data: 1880 to 1950. During this period global data coverage ranged from 40% to 70%, with higher monthly temperature variations.
- Poor data: pre-1880, when global coverage ranged from 14% to 25%, with extreme monthly temperature variations.
by K. Richard, October 8, 2018 in NoTricksZone
A new paper documents “remarkably different” land temperatures from one instrumental data set to another. In some regions there is as much as a 0.8°C conflict in recorded temperature anomalies for CRU, NASA, BEST, and NOAA. The relative temperature trend differences can reach 90% when comparing instrumental records. Consequently, the uncertainty in instrumental temperature trends — “0.097–0.305°C per decade for recent decades (i.e., 1981–2017)” — is as large or larger than the alleged overall warming trend itself for this period.
by David Middleton, October 4, 2018 in WUWT
95% of the model runs predicted more warming than the RSS data since 1988… And this is the Mears-ized RSS data, the version in which the measurements were adjusted to erase the pause and more closely match the surface data.
Their “small discrepancy” would be abject failure in the oil & gas industry.
The observed warming has been less than that expected in a strong mitigation scenario (RCP4.5).
Output of 38 RCP4.5 models vs observations. The graph is originally from Carbon Brief. I updated it with HadCRUT4, shifted to 1970-2000 baseline, to demonstrate the post-El Niño divergence.
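Shifting a series to a common baseline, as described above, amounts to subtracting the series’ own mean over the chosen reference period. A minimal sketch with a toy annual series (the function name and data are illustrative, not HadCRUT4 itself):

```python
import numpy as np

def rebaseline(anoms, years, base_start=1970, base_end=2000):
    """Shift an annual anomaly series so its mean over [base_start, base_end) is zero."""
    mask = (years >= base_start) & (years < base_end)
    return anoms - anoms[mask].mean()

# Toy series: a steady +0.1 deg C/decade warming relative to some other baseline.
years = np.arange(1950, 2020)
anoms = 0.01 * (years - 1950)
shifted = rebaseline(anoms, years)
print(round(shifted[years == 1985][0], 3))  # 0.005: mid-baseline years sit near zero
```

Re-baselining changes no trend, only the vertical offset, which is why it is the standard way to overlay model output and observations that were published on different reference periods.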
by Anthony Watts, September 28, 2018 in WUWT
These results come from the SABER instrument onboard NASA’s TIMED satellite. SABER monitors infrared emissions from carbon dioxide (CO2) and nitric oxide (NO), two substances that play a key role in the energy balance of air 100 to 300 kilometers above our planet’s surface. By measuring the infrared glow of these molecules, SABER can assess the thermal state of gas at the very top of the atmosphere–a layer researchers call “the thermosphere.”
When the thermosphere cools, it shrinks, literally decreasing the radius of Earth’s atmosphere. This shrinkage decreases aerodynamic drag on satellites in low-Earth orbit, extending their lifetimes. That’s the good news. The bad news is, it also delays the natural decay of space junk, resulting in a more cluttered environment around Earth.