Keyword archive: Mathematics

Tendency, Convenient Mistakes, and the Importance of Physical Reasoning.

by Pat Frank, March 1, 2020 in WUWT


Last February 7, statistician Richard Booth, Ph.D. (hereinafter, Rich) posted a very long critique titled “What do you mean by ‘mean’: an essay on black boxes, emulators, and uncertainty,” which is very critical of the GCM air temperature projection emulator in my paper. He was also very critical of the notion of predictive uncertainty itself.

This post critically assesses his criticism.

An aside before the main topic. In his critique, Rich made many of the same mistakes in physical error analysis as do climate modelers. I have described the incompetence of that guild at WUWT here and here.

Rich and climate modelers both treat the probability distribution of the output of a model of unknown physical competence and accuracy as though it were identical to physical error and predictive reliability.

Their view is wrong.

Unknown physical competence and accuracy describes the current state of climate models (at least until recently; see also Anagnostopoulos et al. (2010), Lindzen & Choi (2011), Zanchettin et al. (2017), and Loehle (2018)).

GCM climate hindcasts are not tests of accuracy, because GCMs are tuned to reproduce hindcast targets. For example, here, here, and here. Tests of GCMs against a past climate that they were tuned to reproduce are no indication of physical competence.

When a model is of unknown competence in physical accuracy, the statistical dispersion of its projective output cannot be a measure of physical error or of predictive reliability.

Ignorance of this problem entails the very basic scientific mistake that climate modelers evidently strongly embrace and that appears repeatedly in Rich’s essay. It reduces both contemporary climate modeling and Rich’s essay to scientific vacancy.
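To illustrate the distinction in a hedged way: the sketch below builds a synthetic ensemble whose members all share the same systematic bias. The ensemble spread (a measure of precision) is small, while the error against the assumed true value (a measure of accuracy) is large. All numbers are invented for illustration; this is not a model of any GCM.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 1.0                      # hypothetical "physically true" quantity
bias = 0.5                            # systematic error shared by every model run
ensemble = true_value + bias + rng.normal(0.0, 0.05, size=100)  # 100 model runs

spread = ensemble.std(ddof=1)         # statistical dispersion of the ensemble output
error = ensemble.mean() - true_value  # actual physical error

print(f"ensemble spread (precision): {spread:.3f}")
print(f"physical error (accuracy):   {error:.3f}")
# The small spread says nothing about the much larger error:
# agreement among runs is not a measure of predictive reliability.
```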

The correspondence of Rich’s work with that of climate modelers reiterates something I realized after much immersion in published climatology literature — that climate modeling is an exercise in statistical speculation. Papers on climate modeling are almost entirely statistical conjectures. Climate modeling plays with physical parameters but is not a branch of physics.

I believe this circumstance refutes the American Statistical Association’s statement that more statisticians should enter climatology. Climatology doesn’t need more statisticians because it already has far too many: the climate modelers who pretend at science. Consensus climatologists play at scienceness and can’t discern the difference between that and the real thing.

Climatology needs more scientists. Evidence suggests many of the good ones previously resident have been caused to flee.

MATH ERROR: Scientists Admit ‘Mistakes’ Led To Alarming Results In Major Global Warming Study

by M. Bastasch, November 14, 2018 in WUWT/DailyCaller


  • Scientists behind a headline-grabbing climate study admitted they “really muffed” their paper.

  • Their study claimed to find 60 percent more warming in the oceans, but that was based on math errors.

  • The errors were initially spotted by scientist Nic Lewis, who called them “serious (but surely inadvertent) errors.”

The scientists behind a headline-grabbing global warming study did something that seems all too rare these days — they admitted to making mistakes and thanked the researcher, a global warming skeptic, who pointed them out.

“When we were confronted with his insight it became immediately clear there was an issue there,” study co-author Ralph Keeling told The San Diego Union-Tribune on Tuesday.

Their study, published in October, used a new method of measuring ocean heat uptake and found the oceans had absorbed 60 percent more heat than previously thought. Many news outlets relayed the findings, but independent scientist Nic Lewis quickly found problems with the study.

Keeling, a scientist at the Scripps Institution of Oceanography, owned up to the mistake and thanked Lewis for finding it. Keeling and his co-authors submitted a correction to the journal Nature. (RELATED: Headline-Grabbing Global Warming Study Suffers From A Major Math Error)

Daily Averages? Not So Fast…

by Kip Hansen, October 2, 2018 in WUWT


In the comment section of my most recent essay concerning GAST (Global Average Surface Temperature) anomalies (and why it is a method for Climate Science to trick itself), it was brought up [again] that what Climate Science uses for the Daily Average temperature from any weather station is not, as we would have thought, the average of the temperatures recorded for the day (all recorded temperatures added together and divided by the number of measurements) but is, instead, the Daily Maximum Temperature (Tmax) plus the Daily Low Temperature (Tmin) added and divided by two.  It can be written out as (Tmax + Tmin)/2.

Anyone versed in the various forms of averages will recognize that the latter is actually the median of Tmax and Tmin, the midpoint between the two …
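As a quick illustration of the difference, the sketch below compares the mean of a full set of hypothetical hourly readings with the (Tmax + Tmin)/2 midpoint for the same day. The hourly values are invented; real station days can show larger or smaller gaps between the two figures.

```python
# Hypothetical hourly temperatures (°C) for one station-day.
hourly = [12.1, 11.8, 11.5, 11.3, 11.2, 11.4, 12.0, 13.1,
          14.5, 16.0, 17.2, 18.1, 18.7, 19.0, 18.8, 18.2,
          17.1, 15.9, 14.8, 14.0, 13.4, 12.9, 12.6, 12.3]

true_daily_mean = sum(hourly) / len(hourly)   # average of all recorded readings
tmax, tmin = max(hourly), min(hourly)
midpoint = (tmax + tmin) / 2                  # the (Tmax + Tmin)/2 value climate datasets record

print(f"mean of all readings: {true_daily_mean:.2f} °C")
print(f"(Tmax + Tmin)/2:      {midpoint:.2f} °C")
```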

Reconstructing a Temperature History Using Complete and Partial Data

by Mark Fife, April 19, 2018 in WUWT


In today’s post I am going to go over how I went about creating a reconstruction of the history of temperature from the GHCN data sets using a variable number of stations reporting each year for the years of 1900 to 2011. Before I go into the details of that reconstruction, let me cover how I went about discarding some alternative methods.

I decided to create a test case for reconstruction methods by picking five random, complete station records. I then deleted a portion of one of those records. I mimicked actual record conditions within the GHCN data so my testing would be realistic. In different trials I deleted all but the last 20 years, all but the first 20 years, or some number of years in the middle. I tried normalizing each station to its own average and averaging the anomalies. I tried averaging the four complete stations, then normalizing the fifth station by its average distance from the main average. In all cases, when I plotted the reconstruction against the true average, the errors were quite large.
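The sketch below mimics that kind of test on synthetic data: five "station" records share a common signal, part of one record is deleted, each station is normalized to its own average, and the averaged anomalies are compared with the anomaly of the true five-station average. The data, offsets, and noise levels are invented; the point is only to show how a partial record biases the own-average normalization.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2012)

# Five synthetic "station" records: a shared signal plus per-station offsets and noise.
signal = 0.005 * (years - 1900) + 0.3 * np.sin(2 * np.pi * (years - 1900) / 60)
stations = np.array([signal + offset + rng.normal(0, 0.2, years.size)
                     for offset in (0.0, 1.5, -2.0, 0.7, 3.0)])

true_anomaly = stations.mean(axis=0) - stations.mean()

# Mimic a partial record: keep only the last 20 years of the fifth station.
partial = stations.copy()
partial[4, :years.size - 20] = np.nan

# Method under test: normalize each station to its own average, then average the anomalies.
anomalies = partial - np.nanmean(partial, axis=1, keepdims=True)
reconstruction = np.nanmean(anomalies, axis=0)

print("largest error vs. true anomaly: %.2f" % np.max(np.abs(reconstruction - true_anomaly)))
```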

Uh oh – analysis of GHCN climate stations shows there is no statistically significant warming – or cooling

by Mark Fife, April 1, 2018 in WUWT


This is my eighth post in this series where I am examining long term temperature records for the period 1900 to 2011 contained in the Global Historical Climatology Network daily temperature records. I would encourage anyone to start at the first post and go forward. However, this post will serve as a standalone document. In this post I have taken my experience in exploring the history of Australia and applied it forward to cover North America and Europe.

The way to view this study is literally as a statistic-based survey of the data, meaning I have created a statistic to quantify, rank, and categorize the data. My statistic is very straightforward; it is simply the net change in temperature between the first and last 10 years of 1900 through 2011 for each station.
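Read that way, the statistic is easy to compute. The sketch below is one plausible implementation, assuming annual station means indexed by year; the function name and the exact handling of the 10-year windows are mine, not the author's.

```python
import numpy as np

def net_change(years, temps, start=1900, end=2011, window=10):
    """Net change: mean of the last `window` years minus mean of the first `window` years."""
    years = np.asarray(years)
    temps = np.asarray(temps, dtype=float)
    first = temps[(years >= start) & (years < start + window)]
    last = temps[(years > end - window) & (years <= end)]
    return last.mean() - first.mean()

# Hypothetical usage with one station's annual mean temperatures:
# delta = net_change(station_years, station_means)
```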

Tools to Spot the Spots

by Willis Eschenbach, March 30, 2018 in WUWT


People have asked about the tools that I use to look for any signature of sunspot-related solar variations in climate datasets. They’ve wondered whether these tools are up to the task. What I use are periodograms and Complete Ensemble Empirical Mode Decomposition (CEEMD). Periodograms show how much strength there is at various cycle lengths (periods) in a given signal. CEEMD decomposes a signal into underlying simpler signals.

Now, a lot of folks seem to think that they can determine whether a climate dataset is related to the sunspot cycle simply by looking at a graph. So, here’s a test of that ability. Below is recent sunspot data, along with four datasets A, B, C, and D. The question is, which of the four datasets (if any) is affected by sunspots?
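For readers who want to try the periodogram side of this at home, here is a minimal sketch using scipy.signal.periodogram on a synthetic monthly series containing an ~11-year cycle buried in noise. It is only an illustration of the tool, not a reproduction of the datasets in the quiz; CEEMD requires a separate package and is not shown.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(2)

# Synthetic monthly series: an ~11-year cycle buried in noise (stand-in for a climate dataset).
months = np.arange(12 * 100)                   # 100 years of monthly data
series = 0.3 * np.sin(2 * np.pi * months / (11 * 12)) + rng.normal(0, 1.0, months.size)

freqs, power = periodogram(series, fs=12.0)    # fs = 12 samples/year, so freqs are in cycles/year
periods = 1.0 / freqs[1:]                      # skip the zero frequency
print("strongest period (years): %.1f" % periods[np.argmax(power[1:])])
```

With this much data the strongest peak usually lands near 11 years; with shorter or noisier records it often does not, which is the point of the exercise.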

DO-IT-YOURSELF TEMPERATURE RECONSTRUCTION

by M. Chase, February 2, 2018 in WUWT


This article describes a simple but effective procedure for regional average temperature reconstruction, a procedure that you, yes you dear reader, can fully understand and, if you have some elementary programming skills, can implement.

To aid readability, and to avoid the risk of getting it wrong, no attempt is made in the article to give proper attribution to previous work of others, but a link is provided at the end to where a list of references can be found.

(…)
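The procedure itself is not reproduced in this excerpt. Purely as a generic sketch of one common approach to regional reconstruction (iteratively estimating per-station offsets and averaging the offset-corrected records), and not necessarily the method the article describes, something like the following works on a stations-by-years array with NaN gaps:

```python
import numpy as np

def reconstruct(temps, n_iter=20):
    """Regional reconstruction sketch: iteratively estimate a constant offset for each
    station, then average the offset-corrected records. `temps` is a (stations, years)
    array with NaN for missing years; every year is assumed to have at least one report."""
    regional = np.nanmean(temps, axis=0)
    for _ in range(n_iter):
        offsets = np.nanmean(temps - regional, axis=1, keepdims=True)
        regional = np.nanmean(temps - offsets, axis=0)
    return regional
```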

Durable Original Measurement Uncertainty

by Kip Hansen, October 14, 2017 in WUWT


Temperature and Water Level (MSL) are two hot-topic measurements being widely bandied about, and vast sums of money are being invested in research to determine whether, on a global scale, these physical quantities (Global Average Temperature and Global Mean Sea Level) are changing, and if changing, at what magnitude and at what rate. The Global Averages of these ever-changing, continuous variables are said to be calculated to extremely precise levels (hundredths of a degree for temperature and millimeters for Global Sea Level), and minute changes on those scales are claimed to be significant and important.

Statistical link between external climate forcings and modes of ocean variability

by Abdul Malik et al., July 31, 2017, Climate Dynamics, Springer


In this study we investigate the statistical link between external climate forcings and modes of ocean variability on inter-annual (3-year) to centennial (100-year) timescales using a de-trended semi-partial cross-correlation analysis technique. To investigate this link we employ observations (AD 1854–1999), climate proxies (AD 1600–1999), and coupled Atmosphere-Ocean-Chemistry Climate Model simulations with SOCOL-MPIOM (AD 1600–1999). We find robust statistical evidence that the Atlantic multi-decadal oscillation (AMO) has an intrinsic positive correlation with solar activity in all datasets employed. The strength of the relationship between the AMO and solar activity is modulated by volcanic eruptions and by complex interactions among modes of ocean variability.
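As a rough sketch of the general idea behind a de-trended semi-partial cross-correlation (remove a linear trend from each series, then correlate one series with the part of another left unexplained by a third), here is a minimal version. It ignores lags, autocorrelation, and significance testing, all of which the paper handles; the variable names in the usage comment are placeholders, not the paper's data.

```python
import numpy as np

def detrend(x):
    """Remove a least-squares linear trend against time."""
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

def semi_partial_corr(x, y, z):
    """Correlation between x and the part of y not linearly explained by z."""
    slope, intercept = np.polyfit(z, y, 1)
    y_resid = y - (slope * z + intercept)
    return np.corrcoef(x, y_resid)[0, 1]

# Placeholder usage: semi_partial_corr(detrend(solar), detrend(amo), detrend(volcanic))
```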

On climate change, the uncertainties multiply— literally.

by Michael Bernstam, July 3, 2017 in GWPF


The following four stipulations must each be highly probable: 

1. Global warming will accumulate at 0.12 degrees Celsius or higher per decade.

2. It is anthropogenic, due largely to carbon dioxide emissions.

3. The net effect is harmful to human well-being in the long run.

4. Preventive measures are efficient, that is, feasible at costs not exceeding the benefits.

But even if the probability of each of these stipulations is as high as 85 percent, their compound probability is as low as 50 percent. This makes a decision to act or not to act on climate change equivalent to flipping a coin.
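The arithmetic is straightforward if the four stipulations are treated as independent, which is what the compounding argument assumes:

```python
p_each = 0.85
p_all = p_each ** 4      # all four stipulations must hold together
print(round(p_all, 3))   # 0.522, roughly a coin flip
```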

The Laws of Averages: Part 2, A Beam of Darkness

by Kip Hansen, June 19, 2017 in WUWT


Because both the word and the concept “average” are subject to a great deal of confusion and misunderstanding in the general public, and both have seen an overwhelming amount of “loose usage” even in scientific circles, not excluding peer-reviewed journal articles and scientific press releases, I gave a refresher on Averages in Part 1 of this series. If your maths or science background is near the great American average, I suggest you take a quick look at the primer in Part 1 before reading here.

The Meaning and Utility of Averages as it Applies to Climate

by Clyde Spencer, April 23, 2017


By convention, climate is usually defined as the average of meteorological parameters over a period of 30 years. How can we use the available temperature data, intended for weather monitoring and forecasting, to characterize climate? The approach currently used is to calculate the arithmetic mean for an arbitrary base period and subtract it from modern temperatures (either individual temperatures or averages) to determine what is called an anomaly. However, just what does it mean to collect all the temperature data and calculate the mean?
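In code, the anomaly calculation described above amounts to something like the sketch below. The base period shown (1961-1990) is only an example of an "arbitrary base period", and the function name is mine, not a standard climatology routine.

```python
import numpy as np

def anomaly(temps, years, base_start=1961, base_end=1990):
    """Anomaly: each temperature minus the arithmetic mean of an arbitrary base period."""
    temps = np.asarray(temps, dtype=float)
    years = np.asarray(years)
    baseline = temps[(years >= base_start) & (years <= base_end)].mean()
    return temps - baseline
```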

Are Claimed Global Record-Temperatures Valid?

by Clyde Spencer, April 12, 2017


In summary, there are numerous data handling practices, which climatologists generally ignore, that seriously compromise the veracity of the claims of record average-temperatures, and are reflective of poor science. The statistical significance of temperature differences with 3 or even 2 significant figures to the right of the decimal point is highly questionable. One is not justified in using the approach of calculating the Standard Error of the Mean to improve precision, by removing random errors, because there is no fixed, single value that random errors cluster about. The global average is a hypothetical construct that doesn’t exist in Nature. Instead, temperatures are changing, creating variable, systematic-like errors. Real scientists are concerned about the magnitude and origin of the inevitable errors in their measurements.
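A small, hedged illustration of that last point: with purely random errors the mean converges on the true value roughly as the Standard Error of the Mean predicts, but adding a slowly drifting, systematic-like error leaves a residual offset that no amount of averaging removes. The numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

true_temp = 15.0
n = 10_000

# Purely random errors: averaging improves the estimate roughly as sigma / sqrt(N).
random_only = true_temp + rng.normal(0.0, 0.5, n)

# A slowly drifting, systematic-like error: averaging no longer converges on the truth.
drift = np.linspace(0.0, 0.6, n)
with_drift = random_only + drift

print("random only: mean error = %+.3f" % (random_only.mean() - true_temp))
print("with drift:  mean error = %+.3f" % (with_drift.mean() - true_temp))
print("nominal SEM: %.4f" % (0.5 / np.sqrt(n)))
```

The nominal SEM (about 0.005 here) badly understates the actual error once a systematic component is present, which is the objection the paragraph above raises.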

Also: Perspective Needed; Time to Identify Variations in Natural Climate Data that Exceed the Claimed Human CO2 Warming Effect

The Logarithmic Effect of Carbon Dioxide

by David Archibald, March 8, 2010


The greenhouse gasses keep the Earth 30 °C warmer than it would otherwise be without them in the atmosphere, so instead of the average surface temperature being -15 °C, it is 15 °C. Carbon dioxide contributes 10% of the effect, so that is 3 °C. The pre-industrial level of carbon dioxide in the atmosphere was 280 ppm. So roughly, if the heating effect were a linear relationship, each 100 ppm would contribute 1 °C. With the atmospheric concentration rising by 2 ppm annually, it would go up by 100 ppm every 50 years and we would all fry as per the IPCC predictions.

But the relationship isn’t linear; it is logarithmic. In 2006, Willis Eschenbach posted this graph on Climate Audit showing the logarithmic heating effect of carbon dioxide relative to atmospheric concentration.
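The graph itself is not reproduced here. As a generic illustration of the logarithmic form, the sketch below uses the widely cited approximation for CO2 radiative forcing, ΔF = 5.35 ln(C/C0) W/m² (Myhre et al., 1998); it is not the calculation behind the Climate Audit graph, and converting forcing to temperature would require a further sensitivity assumption.

```python
import numpy as np

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Widely cited logarithmic approximation for CO2 radiative forcing (W/m^2)."""
    return 5.35 * np.log(c_ppm / c0_ppm)

for c in (280, 380, 480, 560):
    print(f"{c:4d} ppm: {co2_forcing(c):5.2f} W/m^2")
# Each successive 100 ppm adds less forcing than the last, because the response is
# logarithmic rather than linear.
```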