Source: 2015 global temperatures are right in line with climate model predictions | John Abraham | Environment | The Guardian . . . I communicated with NASA GISS director Dr. Gavin Schmidt, who provided the following data. The graph shows the latest computer model simulations (from the CMIP project), which were used as input to the IPCC, along with five different temperature datasets. The comparison to be made is between the heavy dashed line (annotated in the graph just below the solid black line) and the colored lines. The heavy dashed line is the average predicted temperature, including updated influences from a decrease in solar energy, human-emitted heat-reflecting particles, and volcanic effects. The dashed line is slightly above the colored markers in recent years, but the results are quite close. Furthermore, this year's temperature to date is running hotter than 2014. To date, 2015 is almost exactly at the predicted mean value from the models. Importantly, the measured temperatures are well within the spread of the model predictions.

Bob Wilson
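For readers who want to see what "within the spread of the model predictions" means in practice, here is a minimal sketch with made-up numbers standing in for the actual CMIP runs and temperature datasets: compute the ensemble mean (the "heavy dashed line") and check whether observed annual anomalies fall inside the ensemble's 95% range.

```python
import numpy as np

# Hypothetical data, for illustration only: 100 ensemble members x 10 years of
# simulated anomalies (deg C), plus one observed series. These are NOT the
# actual CMIP runs or any real temperature dataset.
rng = np.random.default_rng(0)
model_runs = rng.normal(loc=0.80, scale=0.15, size=(100, 10))
observations = rng.normal(loc=0.75, scale=0.05, size=10)

ensemble_mean = model_runs.mean(axis=0)                        # the "heavy dashed line"
lower, upper = np.percentile(model_runs, [2.5, 97.5], axis=0)  # 95% model spread

inside = (observations >= lower) & (observations <= upper)
print("Ensemble mean by year:", np.round(ensemble_mean, 2))
print("Observation within 95% spread each year:", inside)
```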
Not to pick a nit, but Mr. Schmidt is responsible for one of the most highly sensitive models, and one of the furthest off from what is actually occurring. If a student does an experiment poorly, is then asked to grade himself, and gives himself an A, I would not think that grade should be greeted with much credibility. Now there is a rather large group at the IPCC, and here is what they said at the 4th assessment. Projections of Future Changes in Climate - AR4 WGI Summary for Policymakers

It should be noted that these models take ghg as an input, and scenarios were made of how much ghg would be emitted. The actual ghg was at the high end. How much have we warmed since 2005? http://www.climatechange2013.org/images/report/WG1AR5_SPM_FINAL.pdf Not directly answered, but not much, as 2005 and 2014 are in a statistical dead heat for warmest year. It's still warming when you remove natural variation, but below the levels the models predicted. The 5th assessment, as I quoted, gives a possible reason why the models may have done poorly: natural variability, and perhaps El Nino, fooled them. Since these models are back-tested, they do well with historical data, but they do not do well predicting change when given ghg scenarios.

Mr. Schmidt's explanation for why his model is so far off lately has been that we would have been cooling if not for the additional ghg. I don't see a scientific basis for this explanation, but ... if he can use it to create a better model that actually predicts future change, I am open to it. As it is, these models have not done well and .... need to be improved. I look forward to seeing how the changes in the batch selected by the 5th assessment do. "Global Warming Has Stopped"? How to Fool People Using "Cherry-Picked" Climate Data

Since we know weather, not climate, was responsible for the spike in 1998, I am dumbfounded why someone, given the 5th assessment, would plot a projected point on that graph for 2015, which for all we know could be an outlier like 1998. This is very poor science: trying to pretend bad models are good, instead of learning from errors to correct the models. Maybe we are only warming at around 0.14 degrees per decade, below the 0.15 lower bound of the 1990 models. Does that mean the sensitivity is still high, perhaps 4.5, but we would be cooling without ghg? I doubt it, but I need to see the model and future data.
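As an aside on the "0.14 degrees per decade" figure: a trend like that is usually just an ordinary least-squares slope fit to the annual anomalies. A minimal sketch, with synthetic numbers standing in for a real series such as the GISTEMP annual means:

```python
import numpy as np

# Synthetic annual anomalies (deg C) with a built-in ~0.14 C/decade trend plus
# noise; substitute a real annual series to reproduce the actual figure
# being argued about.
rng = np.random.default_rng(1)
years = np.arange(1998, 2015)
anomalies = 0.014 * (years - 1998) + 0.55 + rng.normal(0.0, 0.08, years.size)

# Ordinary least-squares slope, in deg C per year, reported per decade.
slope_per_year, intercept = np.polyfit(years, anomalies, 1)
print(f"Least-squares trend: {slope_per_year * 10:.2f} deg C per decade")
```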
So the first versions of the GISS model weren't perfect and they are going back to fix them . . . this is a bad thing? If this were a strict math problem, something that entails a 'proof' like the Pythagorean Theorem, I'd agree. But my experience with models, especially Monte Carlo models, is that they are an approximation and only as good as their initial assumptions. As long as we don't abandon them because they are not perfect, they can be refined and made better. If someone chooses to ignore the models because they are not 'perfect', fine by me. Global warming doesn't care what individual humans choose to think, as those God-given laws work on their own time scale. Haruspicy was a well-respected way to predict the future in the past.

Actually El Nino and La Nina are starting to interest me as I start to look at the ocean currents. The reason why: Sea Surface Temperature anomaly: Current Sea Surface Temperature: We see the signature of El Nino in the anomaly chart but it is not evident in the temperature chart. This suggests there is normally an upwelling of a cold current off the west coast of S. America that is suppressed in an El Nino year. This begs the question: why? Since El Nino and La Nina cover a fairly large area and time periods measured in months, they begin to move out of what one might call strictly weather. Monsoons are the same.

Bob Wilson
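On why the El Nino signature shows up in the anomaly chart but not in the absolute-temperature chart: an anomaly is just the observed value minus the long-term climatology for that location and time of year, so a 1-2 degree departure that is hard to see against a 22-28 degree background stands out once the background is removed. A toy illustration with invented numbers:

```python
import numpy as np

# Invented SST values (deg C) along a short eastern-Pacific transect.
climatology = np.array([27.0, 26.5, 24.0, 22.0])  # long-term mean for this week of the year
observed    = np.array([28.5, 28.0, 25.5, 22.3])  # current values during an El Nino-like event

anomaly = observed - climatology
print("Observed SST:", observed)   # warm departure is hard to see against the 22-28 C range
print("SST anomaly :", anomaly)    # the ~1.5 C warming stands out once climatology is removed
```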
No, not at all, but we probably could do at least as well just extrapolating the historic rise, which is a pretty bad model since we are trying to model what the climate will do based on our knowledge of ghg and solar radiation. So absolutely, it is an opportunity to be fixed. I just hated the faked charts that didn't feed in actual ghg, and then the really cherry-picked misinformation point of projected 2015 temperature, so that .... the models looked a lot better than any objective reading would support.

Sure, absolutely. I know people working hard on the climate models. There is a lot of work to do. We shouldn't think that they have done a good job so far. If I run a simulation on a system to win at craps in Vegas, and I get down $1000, I probably need to look again at some assumptions, and not say, well, I know I could lose a grand, the system is working well enough.

I didn't say that. I said that article was, to use a Toyota phrase, "fueled by bull shit". Take your model, feed in actual ghg (Mauna Loa would be my choice), feed in the solar radiation, volcanoes, etc. that actually occurred, then compare that to your metric (yes, the article talked about switching metrics, without saying why the new metric would be better, only that it made the bad models look better). Then fix your model. That would be a good article. I hate it when we use statistics in a chart to lie: put in cherry-picked data, pretend that the low ghg scenario happened, etc. Yes, there are lots of good modeling questions to ask about ENSO, AMO, PDO, energy used to melt ice, etc. that can be fed into future models.

Here is my gutless naive model for 20 years from now. The average sensitivity since 1880, if the warming was all ghg, would be 1.55, but the IPCC has a mid estimate of 3, and Alley had a historical correlation of 2.8. I'm going to hedge and say 2.5, as I don't think all the heat is in from historic ghg. My model, check back 20 years from now: if the ENSO signal is removed with an 11-year moving average, the latest baseline year available for ghg and heat is 2009, and the predicted anomaly is that baseline plus 2.5 x log base 2 of (ppm CO2 in 2030 / 387 ppm, the 2009 value). I won't try to cherry-pick 1998 or 2015 or anything else; I'll do a straight moving average over 2024-2034. Let it play out, and for those trying to say there is no natural variation, or that it is small, let them model it and compare results. I'm happy to go head to head with Gavin Schmidt here, since his models have done poorly in the past, despite his attempt to claim he could use single years. My guess is 435 ppm, but again this should be corrected to the actual value. That puts my anomaly at 1.24 degrees C for 2029, using GISS data, which has 2014 at 0.89 degrees. The 2009 11-year mean is 0.817 with a standard deviation of 0.082. That standard deviation is lower than historical levels. To check how well I did, we need to recalculate in 2034 using real levels of ghg and adjust for any volcanic eruptions.
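For anyone who wants to check the arithmetic behind the 1.24 degrees C figure, here is the naive model above written out as a small calculation; the 435 ppm CO2 value for 2030 is the stated guess and should be replaced with the measured value when scoring the prediction in 2034.

```python
import math

# Naive projection: the 11-year-mean GISS anomaly centred on 2009, pushed
# forward with an assumed sensitivity of 2.5 deg C per doubling of CO2.
sensitivity   = 2.5    # deg C per doubling of CO2 (hedged assumption from the post)
baseline_anom = 0.817  # 11-year mean GISS anomaly centred on 2009 (deg C)
co2_2009      = 387.0  # ppm in 2009
co2_2030      = 435.0  # ppm in 2030, guessed; correct to the actual value later

predicted = baseline_anom + sensitivity * math.log2(co2_2030 / co2_2009)
print(f"Predicted 11-year-mean anomaly centred on 2029: {predicted:.2f} deg C")  # ~1.24
```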
I gave a guesstimate there as well, but 20 years is more in line with the IPCC's rejiggering of models. I avoid natural variation in a much different way than Mr. Schmidt. I don't pretend I understand it, so I need to blur it, assuming averaging over one half solar cycle blurs it out. For the IPCC models, they either need to admit it's potentially large and unknown, or actually do the work and model the variation. This trickery of pretending low estimates and cherry-picking years is not scientifically satisfying. I would love it if they could predict individual years. None of the IPCC models had the pause, but I have blurred the pause away by removing much of the natural variation, averaging over a long enough time period.
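A minimal sketch of that blurring, applied to a synthetic series: a centred 11-year moving average largely averages out ENSO-scale wiggles a few years long while leaving the slow trend. The numbers are invented; swap in a real annual-anomaly series to reproduce the smoothing used for the 2024-2034 check.

```python
import numpy as np

# Synthetic annual anomalies: a slow trend plus a short ~4-year wiggle standing
# in for ENSO-scale natural variation.
years = np.arange(1980, 2015)
anomalies = 0.015 * (years - 1980) + 0.10 * np.sin(2 * np.pi * (years - 1980) / 4.0)

window = 11
smoothed = np.convolve(anomalies, np.ones(window) / window, mode="valid")
centre_years = years[window // 2 : -(window // 2)]   # years the averages are centred on

for year, value in zip(centre_years, smoothed):
    print(year, round(float(value), 3))
```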
Like the graph from Bob's Telegraph article, your chart is clearly misleading. It seems to show models from 1980, but the first IPCC models are from 1990. You also don't give a source. I don't think we are using any of these 1980 models any longer, so what is the point of graphing them? If you are plotting versus reality (again, the same criticism as the Guardian chart), why end it in 2012 (and in the Guardian's case, why guess at 2015)? Why not use 2014? Each model is based on a dataset and should be judged according to that dataset, so you need multiple charts if multiple distinct datasets are tried.
Here's a better one. http://www.globalwarming.org/wp-content/uploads/2014/12/Christy-Models-vs-Observations-1979-2014-highlighting-U.S.-Model-Average.jpg
Funny thing about science and engineering: we often learn more from the errors than the successes. For example, microwave noise in a Bell lab that turned out to be background radiation from the Big Bang, and a Nobel prize: The accidental discovery of cosmic microwave background radiation in 1965 is a major development in modern physical cosmology. Although predicted by earlier theories, it was first found accidentally by Arno Penzias and Robert Woodrow Wilson as they experimented with the Holmdel Horn Antenna. The discovery was important evidence for a hot early Universe (big bang theory) and was evidence against the rival steady state theory. In 1978, Penzias and Wilson were awarded the Nobel Prize for Physics for their joint discovery.

Dealing with God's dice, random forcing functions like volcanoes and solar irradiance, these are opportunities to learn something not known before. For example:

1997-98 El Nino - the data and duration were widely reported as the forcing function.
2010 Icelandic volcano - a distinct event corresponding to the Northern Hemisphere, yet others claimed it was La Nina.

Fortunately, we have the Unisys SST anomaly records for this time period. I went back and found:

May 2, 2010 - the Icelandic volcano effect is evident east of Iceland; La Nina insignificant; curious mid-Atlantic cold gyre no one mentioned
Jun 6, 2010 - Icelandic volcano effect evident; La Nina insignificant; large mid-Atlantic cold gyre
July 4, 2010 - Icelandic volcano effect on SST fading; La Nina forming but weak; mid-Atlantic cold gyre still evident
July 11, 2010 - volcano effect continues to fade; La Nina evident but weak; mid-Atlantic cold gyre

There are anomalies, forcing functions, that we can now observe in the satellite data that were not evident before. That huge mid-Atlantic, cold-water gyre in the same time frame begins to beg the question of what it did in the 2010-2011 temperature dip. These previously unobserved forcing functions may not yet be understood, but eventually they will be understandable. The models will improve, as will our understanding of how to incorporate the forcing functions.

I'm used to seeing science and engineering improving like Moore's Law: we gain better understandings. Having seen computers from analog through card-reader computers through the wide range available today, I expect to see improvements and look forward to them. So too, I don't worry about earlier models nor dwell on the past. I understand that is the last refuge of those who don't want to 'look through the telescope' but really can't be bothered. Someone else's willful ignorance is not as important as making sure I have the facts and data.

Going back to that temperature dip in 2010-2011, this would not be the first time independent effects have occurred in the same time frame and led to an amplified result. Intermittent problems are the most difficult to diagnose because there is often more than one leading to the same symptom; in this case, the temperature dip. It just means more data to examine and, fortunately, we have access to the data.

Bob Wilson
Update @12: how have UAH and RSS gone since 2012? Very easy for you to add, eh? No reason not to, eh? Maybe we will have a very strong El Nino this time; the Pacific has an angry look. UAH and RSS are very sensitive to El Nino. I think this is a very good reason why you'd not want to extend the graph. What do you say?
There is another atmospheric data source: Integrated Global Radiosonde Archive (IGRA) | National Centers for Environmental Information (NCEI), formerly known as the National Climatic Data Center (NCDC). The Integrated Global Radiosonde Archive (IGRA) consists of radiosonde and pilot balloon observations at over 1,500 globally distributed stations. Observations are available for standard, surface, tropopause, and significant levels. Variables include pressure, temperature, geopotential height, dew point depression, wind direction, and wind speed. The period of record varies from station to station, with many extending from 1970 to present. Daily updates of station records are available online at no charge.

Unlike the UAH/RSS data based upon microwave signal analysis, these are direct, balloon-borne measurements. The worldwide coverage means we can group them by latitude and average over ascending pressure altitude. This means we'll be able to look at tropical, temperate, and some polar temperature trends overlapping UAH/RSS. Non-trivial: it means we'll have an independent metric, as sketched below.

Bob Wilson
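A rough sketch of that grouping, assuming the IGRA station files have already been parsed into a table with one row per sounding level; the column names here are illustrative, not the official IGRA field names.

```python
import pandas as pd

# Illustrative rows only; a real workflow would first parse the IGRA station
# files into this shape (one row per reported level per sounding).
df = pd.DataFrame({
    "station_lat":   [5.0, -10.0, 45.0, 52.0, -70.0, 8.0],
    "pressure_hPa":  [500, 500, 500, 500, 500, 300],
    "temperature_C": [-5.2, -6.1, -18.4, -20.0, -35.5, -30.1],
})

# Bin stations into tropical, temperate, and polar latitude bands.
band = pd.cut(df["station_lat"].abs(), bins=[0.0, 23.5, 66.5, 90.0],
              labels=["tropical", "temperate", "polar"])

# Mean temperature per band and pressure level: the independent metric to
# set alongside the UAH/RSS lower-troposphere products.
mean_by_band_level = (df.assign(band=band)
                        .groupby(["band", "pressure_hPa"], observed=True)["temperature_C"]
                        .mean())
print(mean_by_band_level)
```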