by University of Utah, March 8, 2020 in WUWT
CO2 measurements from OCO-2 in parts per million over Las Vegas on Feb. 8, 2018. Credit: Dien Wu/University of Utah
A new NASA/university study of carbon dioxide emissions for 20 major cities around the world provides the first direct, satellite-based evidence that as a city’s population density increases, the carbon dioxide it emits per person declines, with some notable exceptions. The study also demonstrates how satellite measurements of this powerful greenhouse gas can give fast-growing cities new tools to track carbon dioxide emissions and assess the impact of policy changes and infrastructure improvements on their energy efficiency.
Cities account for more than 70% of global carbon dioxide emissions associated with energy production, and rapid, ongoing urbanization is increasing their number and size. But some densely populated cities emit more carbon dioxide per capita than others.
To better understand why, atmospheric scientists Dien Wu and John Lin of the University of Utah in Salt Lake City teamed with colleagues at NASA’s Goddard Space Flight Center in Greenbelt, Maryland and the University of Michigan in Ann Arbor. They calculated per capita carbon dioxide emissions for 20 urban areas on several continents using recently available carbon dioxide estimates from NASA’s Orbiting Carbon Observatory-2 (OCO-2) satellite, managed by the agency’s Jet Propulsion Laboratory in Pasadena, California. Cities spanning a range of population densities were selected based on the quality and quantity of OCO-2 data available for them. Cities with minimal vegetation were preferred because plants can absorb and emit carbon dioxide, complicating the interpretation of the measurements. Two U.S. cities were included: Las Vegas and Phoenix.
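As a rough sketch of the kind of density-versus-emissions scaling described here, the toy calculation below fits a power law between population density and per-capita emissions. All numbers are invented; the actual study derives city emissions from OCO-2 column-CO2 retrievals, not from tabulated values like these.

```python
import numpy as np

# Hypothetical (population density [people/km^2], per-capita CO2 [t/yr]) pairs
density = np.array([1200.0, 2500.0, 4800.0, 9000.0, 15000.0])
per_capita = np.array([14.0, 11.5, 9.0, 7.2, 5.8])

# Fit per_capita ~ a * density^b in log-log space; a negative exponent b
# means emissions per person fall as density rises
b, log_a = np.polyfit(np.log(density), np.log(per_capita), 1)
print(f"scaling exponent b = {b:.2f}")  # negative for these made-up values
```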
by Jean N., January 3, 2020 in ScienceClimatEnergie
• Over the more than 40 years of satellite temperature measurements, the lower and mid-troposphere have indeed been warming, but with no visible acceleration, at rates on the order of +0.13°C/decade and +0.09°C/decade respectively (a trend-estimation sketch follows this list). The warming rate therefore decreases with altitude.
• The zone corresponding to the tropopause (around 10 km altitude) is not warming, contrary to what the computer models predict (for more details see the articles by J. Christy). Can these models then still be used to predict the future temperature of certain atmospheric layers? Note that the satellite observations are confirmed by in situ observations made with radiosonde balloons.
• The lower stratosphere is currently cooling at a rate of about –0.29°C per decade, and the correlation analysis carried out by Varotsos and Efstathiou suggests that the behavior of the stratosphere is not simply tied to that of the troposphere; things are more complex.
• Current climate models, based on the hypothesis of a radiative greenhouse effect caused essentially by atmospheric CO2, therefore need to be revisited. CO2 (whether natural or anthropogenic) may thus play only a minor, imperceptible role in the temperature of the troposphere.
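For readers wondering how such rates are obtained, here is a minimal sketch of a least-squares trend estimate on a synthetic monthly anomaly series; the actual UAH/RSS products involve far more processing than this.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(480)                    # 40 years of monthly anomalies
true_rate = 0.13 / 120.0                   # +0.13 °C/decade expressed per month
anomalies = true_rate * months + rng.normal(0.0, 0.15, months.size)

# Ordinary least squares; slope per month times 120 gives °C/decade
slope_per_month = np.polyfit(months, anomalies, 1)[0]
print(f"estimated trend: {slope_per_month * 120:+.2f} °C/decade")
```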
As a general conclusion, we must always keep in mind that the climate system is very complex: it is composed of five subsystems (atmosphere, cryosphere, hydrosphere, biosphere and lithosphere), and these five subsystems interact with one another in space and time through mainly non-linear processes (Lovejoy and Varotsos, 2016) and behave chaotically (see here). Consequently, changing a single parameter in one of the subsystems (for example, the temperature of the lower troposphere) does not make it possible to predict long-term climate change, because all the other parameters of the atmosphere, as well as those of the other subsystems (known and measurable, or not), are not necessarily known and stable. On top of all this, several imperfectly known external factors can influence each of the subsystems, such as cosmic rays or variations in the solar magnetic field.
by Dr. Roy Spencer, August 6, 2019 in GlobalWarming
“Reading, we have a problem.”
As a followup to my post about whether July 2019 was the warmest July on record (globally-averaged), I’ve been comparing reanalysis datasets since 1979. It appears that the ERA5 reanalysis upon which WMO record temperature pronouncements are made might have a problem, with spurious warmth in recent years.
Here’s a comparison of the global-average surface air temperature variations from three reanalysis datasets: ERA5 (ECMWF), CFSv2 (NOAA/NCEP), and MERRA (NASA/GSFC). Note that only CFSv2 covers the full period, January 1979 to July 2019:
ERA5 has a substantially warmer trend than the other two. By differencing ERA5 with the other datasets we can see that there are some systematic changes that occur in ERA5, especially around 2009-2010, as well as after 1998:
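A minimal sketch of that differencing diagnostic, on synthetic series: subtracting one dataset from another cancels the shared climate signal and exposes step-like systematic changes. The step date and magnitudes below are invented, not the actual ERA5/CFSv2 values.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(480)                                       # months since Jan 1979
climate = 0.0015 * t + 0.2 * np.sin(2 * np.pi * t / 60)  # common climate signal
era_like = climate + rng.normal(0, 0.05, t.size)
era_like[372:] += 0.1                                    # spurious step circa 2010
cfs_like = climate + rng.normal(0, 0.05, t.size)

# The difference series removes the shared signal and isolates the step
diff = era_like - cfs_like
print(f"mean difference before step: {diff[:372].mean():+.3f} °C")
print(f"mean difference after  step: {diff[372:].mean():+.3f} °C")
```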
by Anthony Watts, May 9, 2019 in ClimateChangeDispatch
That’s an indication of the personal bias of co-author Schmidt, who in the past has repeatedly maligned the UAH dataset and its authors because their findings didn’t agree with his own GISTEMP dataset.
In fact, Schmidt’s bias was so strong that when invited to appear on national television to discuss warming trends, in a fit of spite, he refused to appear at the same time as the co-author of the UAH dataset, Dr. Roy Spencer.
A breakdown of several climate datasets, appearing below in degrees centigrade per decade, indicates there are significant discrepancies in estimated climate trends:
- AIRS: +0.24 (from the 2019 Susskind et al. study)
- GISTEMP: +0.22
- ECMWF: +0.20
- RSS LT: +0.20
- Cowtan & Way: +0.19
- UAH LT: +0.18
- HadCRUT4: +0.17
Which climate dataset is the right one? Interestingly, the HadCRUT4 dataset, which is managed by a team in the United Kingdom, uses most of the same data GISTEMP uses from the National Oceanic and Atmospheric Administration’s Global Historical Climatology Network.
by Dr. Roy Spencer, April 23, 2019 in GlobalWarming
This post has two related parts. The first has to do with the recently published study of AIRS satellite-based surface skin temperature trends. The second is our response to a rather nasty Twitter comment, posted in response to that study, maligning our UAH global temperature dataset.
Furthermore, that period (January 2003 through December 2017) shows significant warming even in our UAH lower tropospheric temperature (LT) data, with a trend 0.01 warmer than the “gold standard” HadCRUT4 surface temperature dataset (all deg. C/decade):
Cowtan & Way: +0.19
UAH LT: +0.18
I’m pretty sure the Susskind et al. paper was meant to prop up Gavin Schmidt’s GISTEMP dataset, which generally shows greater warming trends than the HadCRUT4 dataset that the IPCC tends to favor more. It remains to be seen whether the AIRS skin temperature dataset, with its “clear sky bias”, will be accepted as a way to monitor global temperature trends into the future.
What Satellite Dataset Should We Believe?
by Ross McKitrick, March 1, 2019 in WUWT
Ben Santer et al. have a new paper out in Nature Climate Change arguing that with 40 years of satellite data available they can detect the anthropogenic influence in the mid-troposphere at a 5-sigma level of confidence. This, they point out, is the “gold standard” of proof in particle physics, even invoking for comparison the Higgs boson discovery in their Supplementary information.
The fact that in my example the t-statistic on anthro falls to a low level does not “prove” that anthropogenic forcing has no effect on tropospheric temperatures. It does show that in the framework of my model the effects are not statistically significant. If you think the model is correctly specified and the data set is appropriate you will have reason to accept the result, at least provisionally. If you have reason to doubt the correctness of the specification then you are not obliged to accept the result.
This is the nature of evidence from statistical modeling: it is contingent on the specification and assumptions. In my view the second regression is a more valid specification than the first one, so faced with a choice between the two, the second set of results is more valid. But there may be other, more valid specifications that yield different results.
In the same way, since I have reason to doubt the validity of the Santer et al. model I don’t accept their conclusions. They haven’t shown what they say they showed. In particular they have not identified a unique anthropogenic fingerprint, or provided a credible control for natural variability over the sample period. Nor have they justified the use of Gaussian p-values. Their claim to have attained a “gold standard” of proof is unwarranted, in part because statistical modeling can never do that, and in part because of the specific problems in their model.
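To make the specification point concrete, here is a toy regression, not McKitrick’s actual model: the temperature-like series is driven purely by a slow natural mode, yet a trend-like “anthro” regressor looks highly significant until that mode is included in the specification. The variable names and data are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 480
anthro = np.arange(n) / n                          # trend-like forcing proxy
natural = np.sin(2 * np.pi * np.arange(n) / 1920)  # slow mode, rising over sample
y = 0.5 * natural + rng.normal(0, 0.3, n)          # no anthro effect by design

# Narrow specification: anthro soaks up the natural rise and looks significant
spec1 = sm.OLS(y, sm.add_constant(anthro)).fit()
# Wider specification: adding the natural covariate deflates the anthro t-stat
spec2 = sm.OLS(y, sm.add_constant(np.column_stack([anthro, natural]))).fit()
print(f"t on anthro, narrow spec:          {spec1.tvalues[1]:+.2f}")
print(f"t on anthro, with natural covariate: {spec2.tvalues[1]:+.2f}")
```

The point of the sketch is only that significance is contingent on specification, exactly as the excerpt argues; neither regression settles which specification is right.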
by JC Maurin, February 22, 2019 in ScienceClimatEnergie
To construct the temperature indicators, MSU, AMSU or ATMS radiometers carried on satellites are used, and the indicator is then built from the measurements and various corrections. This yields an indicator covering nearly the entire globe, unlike the land-based indicators, which (before 1980) relied essentially on a few thousand American and European stations. On the subject of satellite measurements, a physicist, without being a specialist in the field, can nonetheless offer a few points of appreciation that a reader interested in climatology may not know. The aim of the second part of the article will be achieved if that reader has learned something new; he can then pursue the question further on his own.
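By way of illustration, a bulk-layer indicator of this kind is roughly a weighted vertical average of the temperature profile, with the weighting determined by the radiometer channel. The sketch below uses an invented weighting profile and a simple lapse-rate atmosphere, not the characteristics of any actual MSU/AMSU/ATMS channel.

```python
import numpy as np

z = np.linspace(0.0, 20.0, 200)             # altitude, km
temp = 288.0 - 6.5 * np.clip(z, 0.0, 11.0)  # lapse-rate profile, isothermal above 11 km

weight = z * np.exp(-z / 4.0)               # toy channel weighting function
weight /= weight.sum()                      # normalize the discrete weights

bulk_indicator = np.dot(weight, temp)       # weighted mean of the profile
print(f"bulk-layer indicator: {bulk_indicator:.1f} K")
```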
by J.C. Maurin, February 8, 2019, in ScienceClimatEnergie
Starting from the intuitive notions of heat and temperature, physicists (Carnot, Thomson, Clausius, Maxwell, Boltzmann) progressively arrived at the scientific notion of thermodynamic temperature. In 1927 the General Conference on Weights and Measures adopted the thermodynamic scale that had been proposed in 1911, followed by the kelvin unit in 1954.
The notion of thermodynamic temperature requires that thermal equilibrium be reached, which is not the case in the Earth’s atmosphere. There is no such thing as “the thermodynamic temperature of the atmosphere”. In its place, an “average of temperatures” measured at various points in the atmosphere is used. But since thermodynamic temperature is an intensive quantity, an average, however it is constructed, can only serve as an indicator. The convention is nonetheless to use the kelvin for these indicators, and variations of an indicator are preferably expressed in relative form. The indicator depends on the (spatial and temporal) sampling of the measurements and, above all, on how it is constructed.
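A small sketch of that dependence on construction: the same hypothetical station readings yield different indicators under an unweighted mean versus an area (cos-latitude) weighted mean. The stations and values are invented.

```python
import numpy as np

lat = np.array([5.0, 25.0, 45.0, 65.0, 80.0])         # station latitudes, deg
temp = np.array([300.0, 295.0, 283.0, 268.0, 255.0])  # readings, K

plain_mean = temp.mean()                   # one possible "elaboration"
w = np.cos(np.radians(lat))                # area weighting on a sphere
weighted_mean = np.average(temp, weights=w)  # another elaboration, different number
print(f"unweighted indicator:    {plain_mean:.1f} K")
print(f"area-weighted indicator: {weighted_mean:.1f} K")
```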
by Bob Tisdale, December 8, 2018 in WUWT
In this post, we’re going to present monthly TMIN and TMAX Near-Land Surface Air Temperature data for the Northern and Southern Hemispheres (not in anomaly form) in an effort to add a little perspective to global warming. And at the end of this post, I’m asking for your assistance in preparing a post especially for you, the visitors to this wonderful blog WattsUpWithThat.
INTRODUCTION FOR THE “GLOBAL WARMING IN PERSPECTIVE” SERIES
A small group of international unelected bureaucrats who serve the United Nations now wants to limit the rise of global land+ocean surface temperatures to no more than 1.5 deg C from pre-industrial times…even though we’ve already seen about 1.0 deg C of global warming since then. So we’re going to put that 1.0 deg C change in global surface temperatures in perspective by examining the ranges of surface temperatures “we’ve been used to” on our lovely shared home Earth.
The source of the quote in the title of this post is Gavin Schmidt, who is the Director of the NASA GISS (Goddard Institute for Space Studies). It is from a 2014 post at the blog RealClimate, and, specifically, that quote comes from the post Absolute temperatures and relative anomalies (Archived here.). The topic of discussion for that post at RealClimate was the wide span of absolute global mean temperatures [GMT, in the following quote] found in climate models. Gavin wrote (my boldface):
by M.D., December 3, 2018 in MythesManciesMathématiques
What did we know in December 2015, and what do we know in December 2018?
Global temperatures since 1979 according to three sources (1979 is the starting year of the satellite records).
by Mark Fife, November 30, 2018 in WUWT
We have looked at quality, long term records from three different regions. Two of these are on opposite sides of the North Atlantic, one is in the South Pacific. The two regions bordered by the North Atlantic are similar, but not identical. The record from Australia is only similar in that temperature has varied over time and has warmed in the recent past.
In all three regions there is no evidence of any strong correlation to CO2. There is ample evidence to support a conjecture of little to no influence.
There is ample evidence, widely shown in other studies, of localized influence due to development and population growth. The CET record has a correlation of temperature to CO2 of 0.54, which is the highest correlation of any individual record in this study. This area is also the most highly developed. While this does not constitute proof, it does tend to support the supposition that the weak CO2 signal is enhanced by a coincidence between rising CO2 and rising development and population.
The efficacy of combining US records with those records from Greenland, Iceland, and the UK may be subject to opinion. However, there is little doubt combining records from Australia would create an extremely misleading record. Like averaging a sine curve and a cosine curve.
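A toy version of that sine/cosine remark: averaging two out-of-phase cycles yields a composite whose amplitude and timing match neither input.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 1000)
north = np.sin(t)                    # one region's cycle
south = np.cos(t)                    # an out-of-phase region
combined = 0.5 * (north + south)     # equals (sqrt(2)/2) * sin(t + pi/4)

print("amplitude of each record: 1.00")
print(f"amplitude of the average: {combined.max():.2f}")      # ~0.71
print(f"peak shifted to t = {t[combined.argmax()]:.2f} rad")  # ~pi/4, neither input's peak
```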
It appears the GISS data set does a poor job of estimating the history of temperature in all three regions. It shows a near perfect correlation to CO2 levels which is simply not reflected in any of the individual or regional records. There are probably numerous reasons for this. I would conjecture the reasons would include the influence of short-term temperature record bias, development and population growth bias, and data estimation bias. However, a major source of error could be attributed to the simple mistake of averaging regions where the records simply are too dissimilar for an average to yield useful information.
by J.R. Christy et al., March 8, 2018 in InternJournRemoteSensing
The Intergovernmental Panel on Climate Change Assessment Report 5 (IPCC AR5, 2013) discussed bulk atmospheric temperatures as indicators of climate variability and change. We examine four satellite datasets producing bulk tropospheric temperatures, based on microwave sounding units (MSUs), all updated since IPCC AR5. All datasets produce high correlations of anomalies versus independent observations from radiosondes (balloons), but differ somewhat in the metric of most interest, the linear trend beginning in 1979. The trend is an indicator of the response of the climate system to rising greenhouse gas concentrations and other forcings, and so is critical to understanding the climate. The satellite results indicate a range of near-global (+0.07 to +0.13°C decade⁻¹) and tropical (+0.08 to +0.17°C decade⁻¹) trends (1979–2016), and suggestions are presented to account for these differences. We show evidence that MSUs on National Oceanic and Atmospheric Administration’s satellites (NOAA-12 and -14, 1990–2001+) contain spurious warming, especially noticeable in three of the four satellite datasets.
Comparisons with radiosonde datasets independently adjusted for inhomogeneities and reanalyses suggest the actual tropical (20°S–20°N) trend is +0.10 ± 0.03°C decade⁻¹. This tropical result is over a factor of two less than the trend projected from the average of the IPCC climate model simulations for this same period (+0.27°C decade⁻¹).
by Renee Hannon, October 29, 2018 in WUWT
This post is a coarse screening assessment of HadCRUT4 global temperature anomalies to determine the impact, if any, of data quality and data coverage. There has been much discussion on WUWT about the quality of the Hadley temperature anomaly dataset since McLean’s Audit of the HadCRUT4 Global Temperature publication, which is paywalled. I purchased a copy to see what all the hubbub was about, and it is well worth the $8 in my view. Anthony Watts’ review of McLean’s findings and executive summary can be found here.
A key chart for critical study is McLean’s Figure 4.11 in his report. McLean suggests that HadCRUT4 data prior to 1950 are unreliable due to inadequate global coverage and high month-to-month temperature variability. For this post, I subdivided McLean’s findings into three groups, shown with added shading:
- Good data: post-1950, when global data coverage is excellent at greater than 75% and month-to-month temperature variation is low.
- Questionable data: 1880 to 1950, when global data coverage ranged from 40% to 70% with higher monthly temperature variations.
- Poor data: pre-1880, when global coverage ranged from 14% to 25% with extreme monthly temperature variations.
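A sketch of that screening logic, with an invented coverage curve standing in for McLean’s actual figures:

```python
import numpy as np

years = np.arange(1850, 2018)
# Invented coverage fractions rising from ~14% to ~85%; not McLean's data
coverage = np.interp(years, [1850, 1880, 1950, 2017], [0.14, 0.25, 0.75, 0.85])

# Classify each year by the coverage thresholds described above
labels = np.where(coverage > 0.75, "good",
                  np.where(coverage >= 0.40, "questionable", "poor"))
for group in ("good", "questionable", "poor"):
    span = years[labels == group]
    print(f"{group:>12}: {span.min()}-{span.max()} ({span.size} years)")
```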
by K. Richard, October 8, 2018 in NoTricksZone
A new paper documents “remarkably different” land temperatures from one instrumental data set to another. In some regions there is as much as a 0.8°C conflict in recorded temperature anomalies for CRU, NASA, BEST, and NOAA. The relative temperature trend differences can reach 90% when comparing instrumental records. Consequently, the uncertainty in instrumental temperature trends — “0.097–0.305°C per decade for recent decades (i.e., 1981–2017)” — is as large or larger than the alleged overall warming trend itself for this period.
by David Middleton, October 4, 2018 in WUWT
95% of the model runs predicted more warming than the RSS data since 1988… And this is the Mears-ized RSS data, the version in which the adjustments erased the pause and brought the satellite record closer to the surface data.
Their “small discrepancy” would be abject failure in the oil & gas industry.
The observed warming has been less than that expected in a strong mitigation scenario (RCP4.5).
Output of 38 RCP4.5 models vs observations. The graph is originally from Carbon Brief. I updated it with HadCRUT4, shifted to 1970-2000 baseline, to demonstrate the post-El Niño divergence.
by Anthony Watts, September 28, 2018 in WUWT
These results come from the SABER instrument onboard NASA’s TIMED satellite. SABER monitors infrared emissions from carbon dioxide (CO2) and nitric oxide (NO), two substances that play a key role in the energy balance of air 100 to 300 kilometers above our planet’s surface. By measuring the infrared glow of these molecules, SABER can assess the thermal state of gas at the very top of the atmosphere–a layer researchers call “the thermosphere.”
When the thermosphere cools, it shrinks, literally decreasing the radius of Earth’s atmosphere. This shrinkage decreases aerodynamic drag on satellites in low-Earth orbit, extending their lifetimes. That’s the good news. The bad news is, it also delays the natural decay of space junk, resulting in a more cluttered environment around Earth.
by Dr. S. Lüning and Prof. F. Vahrenholt, August 19, 2018 in NoTricksZone
Temperatures can be measured from the ground and from satellites. The satellite data come in two versions, UAH and RSS. The UAH (University of Alabama in Huntsville) version makes a solid impression. The RSS version shows larger deviations and suggests stronger warming.
Doping the data
Both datasets surely get their data from similar satellites. The explanation lies in a “post-processing” of the measured values by the RSS group. In the chart below you can see the old version in red.
Global temperature based on RSS satellite measurements. From Climate4You Newsletter June 2018.
by Judith Curry, June 23, 2018 in ClimateEtc.
Assuming that the uncertainty in GIA adjustments is ‘in the noise’ of global sea level rise may not be entirely justified. The adjustments to the satellite data that emerged in the discussion between Morner and Nerem do not inspire confidence in the estimate of sea level rise from satellite data, and the low level of stated uncertainty strains credulity.
See also here
by F. Bosse and F. Vahrenholt (via P. Gosselin), May 25, 2018 in NoTricksZone
The sun was inactive in April, as we currently find ourselves in the minimum between solar cycle (SC) 24 and the coming solar cycle 25.
The recorded mean sunspot number (SSN) for April was 8.9, which is only 28% of what is usual 113 months into a solar cycle. In April, 16 days were spotless. The following chart shows sunspot activity (…)
by Lovell, A.M. et al., 2017 in CO2Science, May 24, 2018
In describing their findings, Lovell et al. state that “between 1972 and 2013, 36% of glacier termini in the entire study area advanced and 25% of glacier termini retreated, with the remainder showing no discernible change outside of the measurement error (± 66 m or ± 1.6 m yr⁻¹) and classified as ‘no change'” (see figure below). Although there were some regional differences in glacier termini changes, those differences over the last four decades were more closely linked to non-climatic drivers, such as terminus type and geometry, than to any obvious climatic or oceanic forcing.
See also: Terrifying Times For Climate Alarmists
by Tony Heller, May 22, 2018 in TheDeplorableClimateScienceBlog
Settled science at NASA means constantly rewriting the past. Here are a few of the NASA Reykjavik, Iceland temperature graphs I have captured over the past six years.
by Christy J.R. et al., April 6, 2018, in CO2Science
Monitoring temperature and creating regional and global temperature data sets is a tricky business. There are many factors that can induce spurious trends in the data; and there are multiple protocols to follow to ensure their proper construction. Consequently, many people (including scientists) have found themselves wondering which of all the temperature data sets is the most accurate for use in determining the impact of rising greenhouse gases on atmospheric temperature. Thanks to the recently published work of Christy et al. (2018), we now have a pretty good idea as to the answer.
by Anthony Watts, April 6, 2018 in WUWT
Weather Satellite Wanders Through Time, Space, Causing Stray Warming to Contaminate Data
In the late 1990s, the NOAA-14 weather satellite went wandering through time and space, apparently changing the record of Earth’s climate as it went.
Designed for an orbit synchronized with the sun, NOAA-14’s orbit from pole to pole was supposed to cross the equator at 1:30 p.m. on the sunlit side of the globe and at 1:30 a.m. on the dark side, 14 times each day. One of the instruments it carried was a microwave sounding unit (MSU), which looked down at the world and collected data on temperatures in Earth’s atmosphere and how those temperatures changed through time.
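To see why drift matters, the toy sketch below uses an invented sinusoidal diurnal cycle: as the crossing time slips later, the satellite samples a different phase of the cycle, and the phantom signal looks like a climate trend until it is removed. Nothing here reproduces NOAA’s actual adjustment procedure.

```python
import numpy as np

years = np.linspace(0, 10, 120)            # a decade of monthly samples
crossing_hour = 13.5 + 0.3 * years         # drifts later from 1:30 pm

# Hypothetical diurnal cycle peaking in the evening; amplitude and peak invented
def diurnal(hour, amplitude=1.0, peak=18.0):
    return amplitude * np.cos(2 * np.pi * (hour - peak) / 24.0)

measured = 250.0 + diurnal(crossing_hour)  # orbital drift leaks into the record
# Remove the diurnal-phase difference relative to the nominal 1:30 pm crossing
corrected = measured - (diurnal(crossing_hour) - diurnal(13.5))

drift_trend = np.polyfit(years, measured, 1)[0]
fixed_trend = np.polyfit(years, corrected, 1)[0]
print(f"spurious trend from drift: {drift_trend:+.3f} K/yr")
print(f"after drift correction:    {fixed_trend:+.3f} K/yr")
```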
by JoNova, March 18, 2018
A funny thing happens when you line up satellite and surface temperatures over Australia. A lot of the time they are very close, but some years the surface records from the Australian Bureau of Meteorology (BOM) are cooler by a full half a degree than the UAH satellite readings. Before anyone yells “adjustments”, this appears to be a real difference of instruments, but solving this mystery turns up a rather major flaw in climate models (…)