Keyword archive: Model(s)

CMIP6 GCM ensemble members versus global surface temperatures

by N. Scafetta, Sep 18, 2022 in Springer


Abstract

The Coupled Model Intercomparison Project (phase 6) (CMIP6) global circulation models (GCMs) predict equilibrium climate sensitivity (ECS) values ranging between 1.8 and 5.7 °C. To narrow this range, we group 38 GCMs into low, medium and high ECS subgroups and test their accuracy and precision in hindcasting the mean global surface warming observed from 1980–1990 to 2011–2021 in the ERA5-T2m, HadCRUT5, GISTEMP v4, and NOAAGlobTemp v5 global surface temperature records. We also compare the GCM hindcasts to the satellite-based UAH-MSU v6 lower troposphere global temperature record. We use 143 GCM ensemble averaged simulations under four slightly different forcing conditions, 688 GCM member simulations, and Monte Carlo modeling of the internal variability of the GCMs under three different model accuracy requirements. We found that the medium and high-ECS GCMs run too hot in over 95% and 97% of cases, respectively. The low-ECS GCM group agrees best with the warming values obtained from the surface temperature records, which range between 0.52 and 0.58 °C. However, when comparing the observed and GCM-hindcasted warming on land and ocean regions, the surface-based temperature records appear to exhibit a significant warming bias. Furthermore, if the satellite-based UAH-MSU-lt record is accurate, actual surface warming from 1980 to 2021 may have been around 0.40 °C (or less), that is, up to about 30% less than what is reported by the surface-based temperature records. The latter situation implies that even the low-ECS models would have produced excessive warming from 1980 to 2021. These results suggest that the actual ECS may be relatively low, i.e. lower than 3 °C, or even less than 2 °C if the 1980–2021 global surface temperature records contain spurious warming, as some alternative studies have already suggested. Therefore, the projected global climate warming over the next few decades could be moderate and probably not particularly alarming.
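The abstract's "up to about 30% less" follows directly from the two warming values it quotes (0.58 °C from the surface records versus 0.40 °C from the satellite record); a quick arithmetic check:

```python
# Warming values quoted in the abstract (degrees C, 1980-2021)
surface_record_warming = 0.58   # upper end of the surface-record range
uah_msu_warming = 0.40          # satellite-based UAH-MSU-lt estimate

# Relative shortfall of the satellite estimate vs the surface records
shortfall = (surface_record_warming - uah_msu_warming) / surface_record_warming
print(f"{shortfall:.0%}")  # ~31%, i.e. "up to about 30% less"
```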

Global models underestimate large decadal declining and rising water storage trends relative to GRACE satellite data

by B.R. Scanlon et al., Jan 22, 2018 in PNAS


Significance

We increasingly rely on global models to project impacts of humans and climate on water resources. How reliable are these models? While past model intercomparison projects focused on water fluxes, we provide here the first comprehensive comparison of land total water storage trends from seven global models to trends from Gravity Recovery and Climate Experiment (GRACE) satellites, which have been likened to giant weighing scales in the sky. The models underestimate the large decadal (2002–2014) trends in water storage relative to GRACE satellites, both decreasing trends related to human intervention and climate and increasing trends related primarily to climate variations. The poor agreement between models and GRACE underscores the challenges remaining for global models to capture human or climate impacts on global water storage trends.

Abstract

Assessing reliability of global models is critical because of increasing reliance on these models to address past and projected future climate and human stresses on global water resources. Here, we evaluate model reliability based on a comprehensive comparison of decadal trends (2002–2014) in land water storage from seven global models (WGHM, PCR-GLOBWB, GLDAS NOAH, MOSAIC, VIC, CLM, and CLSM) to trends from three Gravity Recovery and Climate Experiment (GRACE) satellite solutions in 186 river basins (∼60% of global land area). Medians of modeled basin water storage trends greatly underestimate GRACE-derived large decreasing (≤−0.5 km3/y) and increasing (≥0.5 km3/y) trends. Decreasing trends from GRACE are mostly related to human use (irrigation) and climate variations, whereas increasing trends reflect climate variations. For example, in the Amazon, GRACE estimates a large increasing trend of ∼43 km3/y, whereas most models estimate decreasing trends (−71 to 11 km3/y). Land water storage trends, summed over all basins, are positive for GRACE (∼71–82 km3/y) but negative for models (−450 to −12 km3/y), contributing opposing trends to global mean sea level change. Impacts of climate forcing on decadal land water storage trends exceed those of modeled human intervention by about a factor of 2. The model-GRACE comparison highlights potential areas of future model development, particularly simulated water storage. The inability of models to capture large decadal water storage trends based on GRACE indicates that model projections of climate and human-induced water storage changes may be underestimated.

Models, Climate Scientists Wrong Again…New Study Finds Jet Stream Strengthening, Not Weakening

by P. Gosselin, Aug 9, 2022 in NoTricksZone


Alarmist climate research centers like the Potsdam Institute and the unquestioning media have been claiming for years that the Jet Stream is weakening, hence this would lead to greater weather extremes across the northern hemisphere due to blocking. Responsible for this of course is man-made global warming.

Hat-tip: The Klimaschau

But a recent paper by Samantha Hallam et al., published in the journal Climate Dynamics, looks at the seasonal to decadal variations in Northern Hemisphere jet stream latitude and speed over land for the period 1871–2011. The authors were unable to find any weakening of the sort climate alarmists have been warning about.

Quite to the contrary, the authors in fact found that the winter jet stream over the North Atlantic and Eurasia has increased in average speed by 8%, to 132 mph. The 141-year trends in jet latitude and speed show differences on a regional basis, with jet speed showing significant increases in winter (up to 4.7 m s⁻¹), spring and autumn over the North Atlantic, Eurasia and North America. Over the North Pacific, no increase was observed.

Moreover, the Jet Stream was found to have shifted northward by some 330 kilometers. Overall, the paper’s findings contradict the claims of a weakening Jet Stream regularly made by the climate alarmists and their media minions.

Applying climate alarmist science, we’d have to conclude now, due to the strengthening Jet Stream, less weather extremes should be expected. This would be good news of course. But don’t expect the fear-porn media to look at this.

Updated Atmospheric CO2 Concentration Forecast Through 2050 and Beyond

by R. Spencer, July 18, 2022 in WUWT


Summary

The simple CO2 budget model I introduced in 2019 is updated with the latest Mauna Loa measurements of atmospheric CO2 and with new Energy Information Administration estimates of global CO2 emissions through 2050. The model suggests that atmospheric CO2 will barely double pre-industrial levels by 2100, with a total radiative forcing of the climate system well below the most extreme scenario (RCP8.5) used in alarmist literature (and the U.S. national climate assessment), and with the closest match to RCP4.5. The model also clearly shows the CO2-reducing effect of the Mt. Pinatubo eruption of 1991.
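Spencer's actual fitted model is described in his post; as a rough illustration of how a simple CO2 budget model of this kind works (the structure and every parameter value below are my assumptions for the sketch, not Spencer's numbers), one can treat the atmosphere as a single box whose excess CO2 over an equilibrium level is removed at a fixed fractional rate each year:

```python
def co2_budget(emissions_ppm_per_yr, c0=280.0, equilibrium=280.0, removal_frac=0.02):
    """One-box CO2 budget: each year add that year's emissions, then let
    nature remove a fixed fraction of the excess over the equilibrium level."""
    c = c0
    history = []
    for e in emissions_ppm_per_yr:
        c += e                                  # add this year's emissions
        c -= removal_frac * (c - equilibrium)   # remove part of the excess
        history.append(c)
    return history

# With constant emissions the concentration approaches a finite plateau
# (equilibrium + emissions*(1-removal_frac)/removal_frac) rather than
# growing without bound -- which is why such models undershoot RCP8.5.
levels = co2_budget([2.5] * 500)
```

The design point is that removal is proportional to the excess concentration, so rising CO2 strengthens the natural sinks, capping the long-run level.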

‘A Significant And Robust Cooling Trend’ In The Southern Ocean From 1982–2020 Defies Climate Models

by K. Richard, June 27, 2022 in NoTricksZone


A new study reports there has been a cooling of about 0.3°C in the Southern Ocean since 1982 across multiple observational data sets. The authors detail the “failure of CMIP5 models in simulating the observed SST cooling in the Southern Ocean.”

The Southern Ocean is today about 1-2°C colder than it has been for nearly all of the last 10,000 years (Shuttleworth et al., 2021, Civel-Mazens et al., 2021, Ghadi et al., 2020).

Image Source: Shuttleworth et al., 2021

Why IPCC Climate Forecasts Are So Dodgy

by R. Barmy, May 5, 2022 in ClimateChangeDispatch


This is the fourth in a series of articles on the IPCC’s AR6 WG1 report. –CCD ed.

Margaret Thatcher helped create the United Nations Intergovernmental Panel on Climate Change (IPCC) in 1988. As an Oxford-trained chemist, she understood scientific principles and was concerned that we “… do not live at the expense of future generations.”

By 2002, the Iron Lady turned against global warming extremism by stating in her book Statecraft: Strategies for a Changing World, “What is far more apparent is that the usual suspects on the left have been exaggerating the dangers and simplifying solutions in order to press their agenda…” [bold, links added]

Thatcher’s comments of exaggeration and simplification were a prescient critique of the IPCC report Climate Change 2021: The Physical Sciences Basis.

The IPCC uses computer simulations to predict climate dangers and test solutions. An important step in the computer simulation of a real-world physical process is making sure the simulator can replicate the known history of that physical process.

If a computer model can accurately replicate a significant history of a known process (called hindcasting), that lends credibility to the idea that the correct equations are being used and that the model will be able to predict future events.

AR6 Model Failure Affirmed: ‘No Model Group Succeeds Reproducing Observed Surface Warming Patterns’

by K. Richard, Apr 25, 2022 in NoTricksZone


A new study published in Geophysical Research Letters highlights the abysmal model performance manifested in the latest Intergovernmental Panel on Climate Change report (AR6). The 38 CMIP6 general circulation models (GCMs) fail to adequately simulate even the most recent (1980-2021) warming patterns over 60 to 81% of the Earth’s surface.

Dr. Scafetta places particular emphasis on the poor performance of the highly uncertain estimates (somewhere between 1.83 and 5.67°C) of equilibrium climate sensitivity (ECS) and their data-model agreement relative to 1980-2021 global warming patterns.

The worst-performing ECS estimates are the ones projecting 3-4.5°C and 4.5-6°C warming in response to doubled CO2 concentrations (to 560 ppm) plus feedbacks, as the 1980-2021 temperature trends are nowhere close to aligning with these trajectories.

Instead, the projected global warming by 2050 (~2°C relative to 1750), associated with the lowest ECS estimates and implied by the warming observed over the last 40+ years, is characterized as “unalarming” even under the most extreme greenhouse gas emissions growth rate (no mitigation efforts undertaken).

In addition to the conclusion that “no model group succeeds reproducing observed surface warming patterns,” poor modeling of heat transfer physics, ocean and atmospheric circulation patterns, polar sea ice processes…is also evident in the latest IPCC report.

“Accurately reproducing regional temperature differences over the past 40+ years is beyond the capability of climate model simulations, and even fails for major ocean basins and continents.”

The fundamental modeling failures in simulating responses to sharply rising greenhouse gas emissions over the last 40+ years “calls into question model-based attribution of climate responses to anthropogenic forcing.”

Can Computer Models Predict Climate?

by Dr C. Essex, Apr 13,  2022 in BigPicturesNews


Guest post by Christopher Essex, Emeritus Professor of Mathematics and Physics, University of Western Ontario.


It is well known that daytime winter temperatures on Earth can fall well below -4°F (-20°C) in some places, even in midlatitudes, despite warming worries. Sometimes the surface can even drop below -40°F (-40°C), which is comparable to the surface of Mars. What is not so well known is that such cold winter days are colder than they would be with no atmosphere at all!

How can that be if the atmosphere is like a blanket, according to the standard greenhouse analogy? If the greenhouse analogy fails, what is climate?

Climate computer models in the 1960s could not account for this non-greenhouse-like picture. Modern computer models are better than those old ones, but the climate implications of an atmosphere that cools as well as warms have not been embraced. Will computer models be able to predict climate once they are? The meteorological program for climate has been underway for more than 40 years. How did it do?

Feynman, Experiment and Climate Models

“Model” is used in a peculiar manner in the climate field. In other fields, models are usually formulated so that they can be found false in the face of evidence. From fundamental physics (the Standard Model) to star formation, a model is meant to be put to the test, no matter how meritorious.

New Study: The CO2-Drives-Global-Warming ‘Concept’ Is ‘Obsolete And Incorrect’

by K. Richard, Mar 14, 2022 in NoTricksZone


“The IPCC concept that increasing carbon dioxide in the atmosphere causes global warming is three decades out-of-date.”  − Lightfoot and Ratzer (2022), Journal of Basic & Applied Sciences

In analyzing UAH global temperature and Mauna Loa CO2 records from 1979 to 2021, climate researchers Lightfoot and Ratzer (2022) report there has been “little, if any” correlation between these two variables during this period.

They assert that between 91 and 98% of Earth’s greenhouse gas effect is from water vapor, as CO2 and other trace gases contribute less than 5% to greenhouse gas forcing.

A solar minimum has just begun in the current solar cycle 25. The declining solar output is projected to eventually lead to a ~1 to 1.2°C cooling over the next 30 to 40 years. Solar minimum periods are also accompanied by crop failures due to frost and by weather extremes delivering excessive heat.

The authors conclude by suggesting the popularized conceptualization of CO2 as a driver of global warming has proven to be “obsolete and incorrect”.

Image Source: Lightfoot and Ratzer, 2022

German Paper: “A Mild Additional Temperature Rise Of Around 1°K”… Drop Not Excluded By 2100!

by P. Gosselin, Mar 6, 2022 in NoTricksZone


In its most recent video, the German site Die kalte Sonne looks at a paper on CO2 climate forcing by Stefani 2021: Solar and Anthropogenic Influences on Climate: Regression Analysis and Tentative Predictions. The results point to only a moderately warming planet up to the year 2150.

To hype up climate warming alarm, IPCC scientists like to exaggerate CO2’s power to trap heat and warm up the atmosphere. But with every assessment report that the IPCC issues, the estimated value by which CO2 warms the planet steadily gets reduced as the observed warming keeps lagging behind what earlier models predicted.

In his paper, Frank Stefani and his team at the Helmholtz Center, Institute of Fluid Dynamics in Dresden, Germany looked at the impacts by CO2 and solar activity.

On average 1.1°C warming

Using double regression, the scientists evaluated linear combinations of the logarithm of the carbon dioxide concentration and the geomagnetic aa index as a proxy for solar activity. They reproduced the sea surface temperature record (HadSST) since the middle of the 19th century and ended up with a climate sensitivity (of the TCR type) in the range of 0.6 K to 1.6 K per doubling of CO2. The midpoint of this range is 1.1°C, a value many critical climate scientists have already estimated earlier, and thus far below the IPCC's scary estimates.
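The "double regression" described above amounts to an ordinary two-predictor least-squares fit; here is a minimal sketch on synthetic data (all numbers are invented for the demonstration, and the coefficient on log2(CO2) plays the role of the TCR-type sensitivity):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 160                                   # ~annual values since the mid-19th century
co2 = np.linspace(285.0, 410.0, n)        # ppm, illustrative
aa = 20.0 + 5.0 * rng.standard_normal(n)  # geomagnetic aa index, illustrative

# Synthetic "temperature": 1.1 K per CO2 doubling + a solar-proxy term + noise
t = 1.1 * np.log2(co2 / co2[0]) + 0.01 * aa + 0.05 * rng.standard_normal(n)

# Double regression: T ~ a*log2(CO2/CO2_0) + b*aa + const
X = np.column_stack([np.log2(co2 / co2[0]), aa, np.ones(n)])
a, b, const = np.linalg.lstsq(X, t, rcond=None)[0]
print(f"recovered sensitivity: {a:.2f} K per doubling")  # close to the 1.1 K used above
```

Using log2 of the concentration ratio makes the fitted coefficient directly readable as kelvin per doubling of CO2.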

The paper’s abstract elaborates further:

Pielke Jr. on IPCC AR6 WG2 Release

by Pielke Jr., Feb 28, 2022 in WUWT


An initial thread on the IPCC AR6 WG2 report released today

Whereas WG1 received a mixed review in my areas of expertise (specifically: poor on scenarios, solid on extremes), my initial reaction to the WG2 report is that it is an exceedingly poor assessment

The first observation is that the report is more heavily weighted to implausible scenarios than any previous IPCC assessment report

In particular, RCP8.5 represents ~57% of scenario mentions

This alone accounts for the apocalyptic tone and conclusions throughout the report.

Why We Must “Quit Worrying About Uncertainty in Sea Level Projections”

by K. Hansen, Dec 2, 2021 in CO2Coalition


It is an interesting read but not because it presents good advice to the scientific community.  Rather, it presents the case that climate and ice models, which are used to make projections, are not up to the task.  While those who program climate models have been trained in what we know about the basic physics involved in the biggest sea level rise issue – ice sheet dynamics – the actual projections by those models depend on parameters that are loose guesses about things we don’t know.  As a result, Bassis says “…recent studies using climate and ice sheet models are, more and more often, coming to very different conclusions about future rates of sea level rise and even about the sensitivity of ice sheets to future warming…”  and because of that, he tells us:

“Large discrepancies among model projections of long-term sea level rise have spawned calls among the scientific community for scientists to work on reducing uncertainty. However, focusing on uncertainty is a trap we must avoid. Instead, we should focus on the adaptation decisions we can already make on the basis of current models and communicating and building confidence in models for longer-term decisions.”

Kip Hansen is an expert on sea level and sea-level rise and a prolific author of numerous articles on the subjects; WUWT lists 445 of his commentaries and articles.

He has spent much of his adult life at sea, first as an officer on a merchant ship, and later as a USCG-licensed captain in the Caribbean, where he sailed with his wife while doing humanitarian work (mostly Dominican Republic).

He is a proud member of the CO2 Coalition.

 This commentary was first published by the CO2 Coalition, December 3, 2021

Physicists: Climate Model Error Overestimates CO2 Impact On Global Temps By Factor Of 5

by K. Richard, Nov 22, 2021 in NoTricksZone


A new study suggests CO2 molecules have little consequential impact affecting outgoing radiation, and that climate models attribute global temperature effects to CO2 that are fundamentally erroneous.

Russian physicists (Smirnov and Zhilyaev, 2021) have published a peer-reviewed paper in the Advances in Fundamental Physics Special Issue for the journal Foundations.

They assess the role of CO2 molecules in the standard atmosphere and assert that “we have a contradiction with the results of climatological models in the analysis of the Earth’s greenhouse effect.”

Key points from the paper include the following:

1. Climate model calculations of CO2’s impact on global temperatures are in error by a factor of 5 as a result of “ignoring, in climatological models, the Kirchhoff law” which says radiators are “simultaneously the absorbers.”

2. Change in the concentration of an optically active atmospheric component (like CO2) “would not lead to change in the outgoing radiative flux.”

3. CO2 molecules “are not the main radiator of the atmosphere.” Water vapor molecules are, and thus they “may be responsible for the observed heating of the Earth.”

How good have climate models been at truly predicting the future?

by Gavin, Dec 4, 2019 in RealClimate


A new paper from Hausfather and colleagues (incl. me) has just been published with the most comprehensive assessment of climate model projections since the 1970s. Bottom line? Once you correct for small errors in the projected forcings, they did remarkably well.

Climate models are a core part of our understanding of our future climate. They also have been frequently attacked by those dismissive of climate change, who argue that since climate models are inevitably approximations they have no predictive power, or indeed, that they aren’t even scientific.

In an upcoming paper in Geophysical Research Letters, Zeke Hausfather, Henri Drake, Tristan Abbott and I took a look at how well climate models have actually been able to accurately project warming in the years after they were published. This is an extension of the comparisons we have been making on RealClimate for many years, but with a broader scope and a deeper analysis. We gathered all the climate models published between 1970 and the mid-2000s that gave projections of both future warming and future concentrations of CO2 and other climate forcings – from Manabe (1970) and Mitchell (1970) through to CMIP3 in IPCC 2007.

We found that climate models – even those published back in the 1970s – did remarkably well, with 14 out of the 17 projections statistically indistinguishable from what actually occurred.

We evaluated these models both on how well modeled warming compared with observed warming after models were published, and how well the relationship between warming and CO2 (and other climate forcings) in models compares to observations (the implied transient climate response) (see Figure). The second approach is important because even if an old model had gotten all the physics right, the future projected warming would be off if they assumed we would have 450 ppm CO2 in 2020 (which some did!). Future emissions depend on human societal behavior, not physical systems, and we can usefully distinguish evaluation of climate models physics from paths of future concentrations.

Figure 2 from Hausfather et al (2019) showing the comparisons between model predictions and observations for a) the temperature trends (above) and b) the implied Transient Climate Response (TCR) which is the trend divided by the forcing and scaled to an equivalent 2xCO2 forcing.
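The "implied TCR" in the figure caption is just the warming trend divided by the realized forcing trend, scaled to the forcing of a CO2 doubling; a minimal sketch (the trend numbers are illustrative, and F_2x ≈ 3.7 W m⁻² is the commonly used value for doubled-CO2 forcing):

```python
F_2X = 3.7  # W/m^2, canonical radiative forcing for doubled CO2

def implied_tcr(warming_trend_K_per_dec, forcing_trend_Wm2_per_dec):
    """Warming trend divided by forcing trend, scaled to a 2xCO2 forcing."""
    return warming_trend_K_per_dec / forcing_trend_Wm2_per_dec * F_2X

# Illustrative numbers: 0.18 K/decade of warming against 0.35 W/m^2/decade of forcing
print(round(implied_tcr(0.18, 0.35), 2))  # ~1.9 K per doubling
```

This is why the second evaluation approach is insensitive to wrong concentration assumptions: an error in projected forcing cancels out of the ratio.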

Critical Solar Factors Ignored…IPCC AR6 Covers Up Scientific Flaws In Climate Models

by P. Gosselin, Aug 22, 2021 in NoTricksZone


According to the latest IPCC Assessment Report 6 (AR6), the observed temperature increase and the temperature increase calculated by climate models have been almost the same, 1.3°C, from 1750 to 2020. The report shows a strong positive trend in solar shortwave radiation from 9/2000 to 6/2017, but its impact has been omitted from post-2000 warming calculations even though it explains the high temperatures since the El Niño of 2015-2016.

For example, the temperature effect in 2019 is about 0.7°C according to the AR6 science, whereas the IPCC models give a 2019 temperature increase of 2.0°C (1.3°C + 0.7°C). This 54 percent error is due to the positive water feedback applied in climate models, which doubles the impact of other climate forcings and which, according to this natural experiment, does not exist.

The amount of carbon dioxide in the atmosphere has increased by 32% since 1750. According to the AR6, this is due solely to man-made (anthropogenic) emissions, of which an average of 44% per year remains (accumulates) in the atmosphere, the rest being absorbed by oceans and vegetation.

Approximately 25% of the atmospheric carbon dioxide is exchanged annually with the oceans and vegetation. As a result, less than 6% of an initial amount of carbon dioxide remains in the atmosphere after 10 years, and therefore the increased amount of carbon dioxide in the atmosphere, with its permille value of −28‰, cannot be entirely of anthropogenic origin. The IPCC remains silent on permille values: the word “permille” does not appear in AR6, even though this measure of the ratio of carbon isotopes has been used to analyze the origin of carbon dioxide and is suitable for validating carbon cycle models.
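The "less than 6% after 10 years" figure follows from the ~25% annual exchange quoted just above: if a quarter of the atmospheric CO2 is swapped with the oceans and vegetation each year, the surviving fraction of any initial parcel after n years is simply (1 − 0.25)^n. A one-line check of the article's own arithmetic:

```python
annual_exchange = 0.25          # ~25% of atmospheric CO2 exchanged per year
remaining = (1 - annual_exchange) ** 10
print(f"{remaining:.1%}")       # ~5.6% of the initial parcel left after 10 years
```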

Cover-up

The cover-up of this issue continues with the anthropogenic carbon dioxide lifetime in the atmosphere, which is now vaguely given as hundreds to thousands of years. The removal rate of radioactive carbon from the atmosphere (a perfect tracer test for anthropogenic carbon dioxide) after 1964 corresponds to only 64 years. The recovery time of the total atmospheric amount of carbon dioxide to the level of 1750 can be estimated to be similar to its accumulation period, i.e. just under 300 years.

The AR6 report no longer presents the IPCC's own definition of the greenhouse effect, except in the glossary. The definition no longer describes how greenhouse gas absorption of 158 Wm-2, which causes the greenhouse effect, could create downward infrared radiation of 342 Wm-2 at the surface. That would be against fundamental physical laws, because energy would be created from nothing. The radiation to the surface consists of four energy fluxes, which according to the IPCC's energy balance are: greenhouse gas absorption of 158 Wm-2, latent water heat of 82 Wm-2, sensible heat (warm air) of 21 Wm-2, and solar radiation absorbed in the atmosphere of 80 Wm-2. The first three of these fluxes, totaling 261 Wm-2, maintain the greenhouse effect.
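The flux bookkeeping in that paragraph can be checked directly (flux values as quoted from the IPCC energy balance):

```python
# Downward energy fluxes at the surface (W/m^2), as quoted in the text
ghg_absorption = 158        # greenhouse gas absorption
latent_heat = 82            # latent water heat
sensible_heat = 21          # sensible heat (warm air)
solar_absorbed_atmos = 80   # solar radiation absorbed in the atmosphere

greenhouse_maintaining = ghg_absorption + latent_heat + sensible_heat
total_downward = greenhouse_maintaining + solar_absorbed_atmos
print(greenhouse_maintaining, total_downward)  # 261 and 341 (~ the 342 W/m^2 quoted)
```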

Fudging the forcings

The IPCC’s attribution methodology is fundamentally flawed

by R. McKitrick, Aug 18, 2021 in ClimateEtc.


One day after the IPCC released the AR6 I published a paper in Climate Dynamics showing that their “Optimal Fingerprinting” methodology on which they have long relied for attributing climate change to greenhouse gases is seriously flawed and its results are unreliable and largely meaningless. Some of the errors would be obvious to anyone trained in regression analysis, and the fact that they went unnoticed for 20 years despite the method being so heavily used does not reflect well on climatology as an empirical discipline.

My paper is a critique of “Checking for model consistency in optimal fingerprinting” by Myles Allen and Simon Tett, which was published in Climate Dynamics in 1999 and to which I refer as AT99. Their attribution methodology was instantly embraced and promoted by the IPCC in the 2001 Third Assessment Report (coincident with their embrace and promotion of the Mann hockey stick). The IPCC promotion continues today: see AR6 Section 3.2.1. It has been used in dozens and possibly hundreds of studies over the years. Wherever you begin in the Optimal Fingerprinting literature (example), all paths lead back to AT99, often via Allen and Stott (2003). So its errors and deficiencies matter acutely.

The abstract of my paper reads as follows:

New Confirmation that Climate Models Overstate Atmospheric Warming

by R. McKitrick, Aug 17, 2021 in ClimateEtc


Two new peer-reviewed papers from independent teams confirm that climate models overstate atmospheric warming and the problem has gotten worse over time, not better. The papers are Mitchell et al. (2020) “The vertical profile of recent tropical temperature trends: Persistent model biases in the context of internal variability”  Environmental Research Letters, and McKitrick and Christy (2020) “Pervasive warming bias in CMIP6 tropospheric layers” Earth and Space Science. John and I didn’t know about the Mitchell team’s work until after their paper came out, and they likewise didn’t know about ours.

Mitchell et al. look at the surface, troposphere and stratosphere over the tropics (20N to 20S). John and I look at the tropical and global lower- and mid-troposphere.  Both papers test large samples of the latest generation (“Coupled Model Intercomparison Project version 6” or CMIP6) climate models, i.e. the ones being used for the next IPCC report, and compare model outputs to post-1979 observations. John and I were able to examine 38 models while Mitchell et al. looked at 48 models. The sheer number makes one wonder why so many are needed, if the science is settled. Both papers looked at “hindcasts,” which are reconstructions of recent historical temperatures in response to observed greenhouse gas emissions and other changes (e.g. aerosols and solar forcing). Across the two papers it emerges that the models overshoot historical warming from the near-surface through the upper troposphere, in the tropics and globally.

How Climate Scenarios Lost Touch With Reality

by R. Pielke Jr & J. Ritchie, Aug, 4, 2021 in CO2Coalition


A failure of self-correction in science has compromised climate science’s ability to provide plausible views of our collective future.

The integrity of science depends on its capacity to provide an ever more reliable picture of how the world works. Over the past decade or so, serious threats to this integrity have come to light. The expectation that science is inherently self-correcting, and that it moves cumulatively and progressively away from false beliefs and toward truth, has been challenged in numerous fields—including cancer research, neuroscience, hydrology, cosmology, and economics—as observers discover that many published findings are of poor quality, subject to systemic biases, or irreproducible.

In a particularly troubling example from the biomedical sciences, a 2015 literature review found that almost 900 peer-reviewed publications reporting studies of a supposed breast cancer cell line were in fact based on a misidentified skin cancer line. Worse still, nearly 250 of these studies were published even after the mistaken cell line was conclusively identified in 2007. Our cursory search of Google Scholar indicates that researchers are still using the skin cancer cell line in breast cancer studies published in 2021. All of these erroneous studies remain in the literature and will continue to be a source of misinformation for scientists working on breast cancer.

In 2021, climate research finds itself in a situation similar to breast cancer research in 2007. Our research (and that of several colleagues) indicates that the scenarios of greenhouse gas (GHG) emissions through the end of the twenty-first century are grounded in outdated portrayals of the recent past. Because climate models depend on these scenarios to project the future behavior of the climate, the outdated scenarios provide a misleading basis both for developing a scientific evidence base and for informing climate policy discussions. The continuing misuse of scenarios in climate research has become pervasive and consequential—so much so that we view it as one of the most significant failures of scientific integrity in the twenty-first century thus far. We need a course correction.

In calling for this change, we emphasize explicitly and unequivocally that human-caused climate change is real, that it poses significant risks to society and the environment, and that various policy responses in the form of mitigation and adaptation are necessary and make good sense. However, the reality and importance of climate change does not provide a rationale or excuse for avoiding questions of research integrity any more than does the reality and importance of breast cancer. To the contrary, urgency makes attention to integrity that much more important.

Scenarios and baselines

Are Climate Feedbacks Strongly Non-Linear?

by Bob Irvine, Aug 3, 2021 in WUWT


Is it possible that the Earth’s system is strongly buffered with strong positive ice and dust feedbacks prevailing at colder temperatures, and strong negative convection/evaporation feedbacks prevailing in warmer times?

Feedback Factor (FF) is defined as the total temperature change at equilibrium for a given forcing divided by the calculated “no feedback” temperature from that forcing.

The term CO2 will be used here to represent all the non-condensing GHGs (CO2, CH4, N2O, CFCs, HFCs, etc.).
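The Feedback Factor defined above is a simple ratio; as a worked sketch (the ~1.1 K no-feedback Planck response for doubled CO2 is a commonly cited value, used here as an assumption, and the ECS values are illustrative):

```python
def feedback_factor(equilibrium_dT, no_feedback_dT):
    """Total equilibrium temperature change divided by the no-feedback change."""
    return equilibrium_dT / no_feedback_dT

NO_FEEDBACK_2XCO2 = 1.1  # K, commonly cited no-feedback response to doubled CO2

# An ECS of 3.0 K implies net-positive feedbacks (FF > 1);
# an ECS of 0.8 K would imply net-negative feedbacks (FF < 1).
print(round(feedback_factor(3.0, NO_FEEDBACK_2XCO2), 2))  # 2.73
print(round(feedback_factor(0.8, NO_FEEDBACK_2XCO2), 2))  # 0.73
```

The buffering question posed above then becomes whether FF itself is larger than 1 in cold climate states and smaller than 1 in warm ones.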

Claim: Machine Learning can Detect Anthropogenic Climate Change

by E. Worrall, July 8, 2021 in WUWT


According to the big computer we are doomed to suffer ever more damaging weather extremes. But researchers can’t tell us exactly why, because their black box neural net won’t explain its prediction.

As an IT expert who has built commercial AI systems, I find it incredible that the researchers seem so naive as to think their AI machine output has value, without corroborating evidence. They admit they are going to try to understand how their AI works – but in my opinion they have jumped the gun, making big claims on the basis of a black box result.

Consider the following;

….

Gavin’s Falsifiable Science

by W. Eschenbach, Apr 2020 in WUWT


Gavin Schmidt is a computer programmer with the Goddard Institute for Space Studies (GISS) and a noted climate alarmist. He has a Ph.D. in applied mathematics. He’s put together a twitter thread containing what he sees as some important points of the “testable, falsifiable science that supports a human cause of recent trends in global mean temperature”. He says that the slight ongoing rise in temperature is due to the increase in carbon dioxide (CO2) and other so-called “greenhouse gases”. For simplicity, I’ll call this the “CO2 Roolz Temperature” theory of climate. We’ve discussed Dr. Schmidt’s ideas before here on WUWT.

Now, Gavin and I have a bit of history. We first started corresponding by way of a climate mailing list moderated by Timo Hameraanta back around the turn of the century, before Facebook and Twitter.

The interesting part of our interaction was what convinced me that he was a lousy programmer. I asked him about his program, the GISS Global Climate Model. I was interested in how his model made sure that energy was conserved. I asked what happened at the end of each model timestep to verify that energy was neither created nor destroyed.

He said what I already knew from my own experience writing iterative models: there is always some slight imbalance in energy from the beginning to the end of a timestep. If nothing else, the discrete digital nature of each calculation ensures there will be slight roundoff errors. If these are left uncorrected, they can easily accumulate and bring the model down.

He said the way that the GISS model handled that imbalance was to take the excess or the shortage of energy and sprinkle it evenly over the entire planet.

Now, that seemed reasonable for trivial amounts of imbalance coming from digitization. But what if it were larger, and it arose from some problem with their calculations? What then?

So I asked him how large that energy imbalance typically was … and to my astonishment, he said he didn’t know.

Amazed, I asked if he had some computer version of a “Murphy Gauge” on the excess energy. A “Murphy Gauge” is a gauge that allows for Murphy’s Law by letting you set an alarm if the variable goes outside of the expected range … which of course it will, Murphy says so. On the computer, the equivalent would be something in his model that would warn him if the excess or shortage of energy exceeded some set amount.
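The idea is easy to sketch in code. The following is a hypothetical toy, not GISS code: at each timestep the global energy residual is computed, a small residual is sprinkled evenly over the grid (as the article describes GISS doing), and a residual beyond an assumed tolerance trips the gauge instead of being silently hidden.

```python
# Toy "Murphy Gauge" for energy conservation in an iterative model.
# TOLERANCE and the flux values are illustrative assumptions.

TOLERANCE = 1e-6  # assumed acceptable per-step imbalance (arbitrary energy units)

def step_with_gauge(cells, fluxes):
    """Apply per-cell energy fluxes, then enforce global conservation.

    For purely internal energy exchange, `fluxes` should sum to ~0;
    roundoff leaves a tiny residual, which is spread evenly over the
    grid. A large residual signals a physics/coding error, so the
    gauge raises an alarm rather than smearing the problem away.
    """
    residual = sum(fluxes)  # should be ~0 for a conservative scheme
    if abs(residual) > TOLERANCE:
        raise RuntimeError(f"Murphy gauge tripped: imbalance {residual:.3e}")
    correction = residual / len(cells)  # sprinkle the residual evenly
    return [c + f - correction for c, f in zip(cells, fluxes)]

cells = [100.0] * 10
ok_fluxes = [0.5, -0.5, 0.2, -0.2, 0.0, 0.1, -0.1, 0.3, -0.3, 1e-9]
cells = step_with_gauge(cells, ok_fluxes)   # passes; total energy preserved

bad_fluxes = [1.0] * 10                     # spurious energy source
# step_with_gauge(cells, bad_fluxes)        # would raise RuntimeError
```

The point of the anecdote is the missing alarm: redistributing the residual is fine for roundoff, but without the gauge there is no way to know the residual stayed small.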

Yet Another Model-Based Claim Of Anthropogenic Climate Forcing Collapses

by K. Richard, Feb 25 2021 in NoTricksZone


High-resolution climate models have projected a “decline of the Atlantic Meridional Overturning Circulation (AMOC) under the influence of anthropogenic warming” for decades (Lobelle et al., 2020). New research that assesses changes in the deeper layers of the ocean (instead of “ignoring” these layers like past models have) shows instead that the AMOC hasn’t declined for over 30 years.

The North Atlantic has been rapidly cooling in recent decades (Bryden et al., 2020, Fröb et al., 2019). A cooling of “more than 2°C” in just 8 years (2008-2016) and a cooling rate of -0.78°C per decade between 2004 and 2017 have been reported for nearly the entire ocean region just south of Iceland. The cooling persists year-round and extends from the “surface down to 800 m depth.”

CMIP6 and AR6, a preview

by Andy May, Feb 11, 2021 in WUWT


The new IPCC report, abbreviated “AR6,” is due to come out between April 2021 (the Physical Science Basis) and June of 2022 (the Synthesis Report). I’ve purchased some very strong hip waders to prepare for the events. For those who don’t already know, sturdy hip waders are required when wading into sewage. I’ve also taken a quick look at the CMIP6 model output that has been posted to the KNMI Climate Explorer to date. I thought I’d share some of what I found.

Meet The Team Shaking Up Climate Models

by C. Rotter, Jan 26, 2021 in WUWT


A new team tries a new approach to climate modeling using AI and machine learning. Time will tell whether it is a positive effort or an extremely complicated exercise in curve fitting. Their goal is regional-scale predictive models useful for planning. Few admit publicly that these do not exist today, despite thousands of “studies” using downscaled GCMs.

“There are some things where there are very robust results and other things where those results are not so robust,” says Gavin Schmidt, who heads NASA’s respected climate modeling program at the Goddard Institute for Space Studies. But the variances push skeptics to dismiss the whole field.

“There’s enough stuff out there that people can sort of cherry-pick to support their preconceptions,” says Dr. Hausfather. “Climate skeptics … were arguing that climate models always predict too much warming.” After studying models done in the past 50 years, Dr. Hausfather says, “it turns out they did remarkably well.”

But climate modelers acknowledge accuracy must improve in order to plot a way through the climate crisis. Now, a team of climatologists, oceanographers, and computer scientists on the East and West U.S. coasts have launched a bold race to do just that.

They have gathered some of the brightest experts from around the world to start to build a new, modern climate model. They hope to corral the vast flow of data from sensors in space, on land, and in the ocean, and enlist “machine learning,” a kind of artificial intelligence, to bring their model alive and provide new insight into what many believe is the most pressing threat facing the planet.

Their goal is accurate climate predictions that can tell local policymakers, builders, and planners what changes to expect by when, with the kind of numerical likelihood that weather forecasters now use to describe, say, a 70% chance of rain.

Failing Computer Models

by P. Homewood ,Jan 21, 2021 in NotaLotofPeopleKnowThat


If anybody tries to tell you that the computer models are accurately predicting global warming, show them this:

http://www.remss.com/research/climate/#:~:text=The%20RSS%20merged%20lower%20stratospheric%20temperature%20data%20product,in%20well-mixed%20greenhouse%20gases%20causes%20by%20human%20activity.

It comes from RSS, who monitor atmospheric temperatures via satellite observation. They are ardent warmists, and here is what they have to say:

….