The theory goes that as CO2 increases over time, temperature increases; put another way, temperature is a function of CO2, or T = f(CO2). This model, however, is deeply flawed and demonstrates a disturbing ignorance of science, modeling, and the physics behind the greenhouse effect.
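As a minimal sketch of what that single-input framing amounts to, the snippet below fits a temperature series to a function of CO2 alone. The CO2 values, anomalies, and the logarithmic functional form are illustrative assumptions, not figures taken from the text.

```python
# Illustrative sketch of the "T = f(CO2)" framing: a model that maps a CO2
# concentration directly to a temperature response. All numbers below are
# hypothetical placeholders, not measurements.
import numpy as np

co2_ppm = np.array([315.0, 340.0, 370.0, 400.0, 415.0])   # assumed annual means
temp_anom_c = np.array([0.00, 0.15, 0.35, 0.60, 0.75])    # assumed anomalies (deg C)

# Fit T = a * ln(CO2 / CO2_0) + b, a common single-forcing functional form
# (an assumption here, chosen only to make the one-variable model concrete).
x = np.log(co2_ppm / co2_ppm[0])
a, b = np.polyfit(x, temp_anom_c, 1)
print(f"fitted T = {a:.2f} * ln(CO2/CO2_0) + {b:.2f}")
```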
By convention, climate is usually defined as the average of meteorological parameters over a period of 30 years. How can we use the available temperature data, intended for weather monitoring and forecasting, to characterize climate? The approach currently used is to calculate the arithmetic mean over an arbitrary base period and subtract it from modern temperatures (either individual readings or averages) to obtain what is called an anomaly. However, just what does it mean to collect all the temperature data and calculate the mean?
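A minimal sketch of that anomaly calculation, assuming a hypothetical annual temperature series and an arbitrary 1961-1990 base period (the numbers are synthetic placeholders):

```python
# Sketch of the anomaly calculation described above: average over a base
# period, then subtract that baseline from each observation.
import numpy as np

years = np.arange(1950, 2021)
temps_c = 14.0 + 0.01 * (years - 1950) \
    + np.random.default_rng(0).normal(0, 0.1, years.size)  # synthetic series

base_mask = (years >= 1961) & (years <= 1990)   # arbitrary base period
baseline = temps_c[base_mask].mean()            # arithmetic mean over the base period
anomalies = temps_c - baseline                  # anomaly = observed minus baseline mean
print(f"baseline {baseline:.2f} C, latest anomaly {anomalies[-1]:+.2f} C")
```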
There is as yet no observational evidence that climate sensitivity increases with time in the real climate system – although this cannot be ruled out – nor is it fully understood why it increases in most AOGCMs. In any event, even if real-world climate sensitivity does increase with time, in the longer run other factors that are not reflected in ECS, such as melting ice sheets, are probably more important. Therefore, while time-varying climate sensitivity is of considerable interest from a theoretical point of view, for practical purposes its influence is likely to be very modest.
The Arctic is supposedly the most sensitive place on Earth to man-made emissions, so why has it barely warmed since 1944? Well, it makes sense if CO2 is largely irrelevant. Humans have emitted 90% of all their CO2 in the last 70 years, and nothing much has happened in the place where it was supposed to hurt the most.
Looking at the data objectively, it is pretty clear that there is little relationship between weather/climate and the rising CO2 concentrations in the atmosphere, as the global warming pause between 1997 and 2016 shows.
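For illustration only, here is a sketch of how a trend over a chosen window such as 1997-2016 might be computed from an annual anomaly series; the series is synthetic, and the window years are the only detail taken from the claim above.

```python
# Least-squares trend over a fixed window of an annual anomaly series.
import numpy as np

years = np.arange(1980, 2021)
anom_c = 0.018 * (years - 1980) \
    + np.random.default_rng(1).normal(0, 0.08, years.size)  # synthetic anomalies

window = (years >= 1997) & (years <= 2016)                  # the window in question
slope, intercept = np.polyfit(years[window], anom_c[window], 1)
print(f"1997-2016 trend: {slope * 10:+.3f} C per decade")
```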
Geology, a science that is more than fascinating … and diverse